Choice of Software

This chapter describes the basic operation of some of the image processing software currently available that is easy to use and produces great results. No single package offers all the features needed to process an image, so it may be necessary to use more than one to get the best results from the images you capture. This is the norm amongst imagers: you will need to find which software you are comfortable using for particular operations, and you may end up combining elements from several packages to create the results you want.

Image Processing

Image processing is the technique of taking the images captured by the camera and extracting every possible bit of information from them, whether to produce visually stunning pictures or to prepare the data for use in scientific studies.

The images that come directly from the camera are likely to be noisy and dark, showing only the brightest detail, and they will probably suffer from electrical noise, hot pixels and optical effects such as vignetting and dust spots. To combat these problems it is necessary to calibrate the image, which reduces these effects considerably and enables the extraction of fainter detail than would otherwise be possible, using a process commonly called stretching.

Different Types of Images

The following is a description of the different types of images that are used to produce a fully calibrated astronomical image:

Light Frames

These are the actual image frames taken of the objects of interest; they will contain some or usually all of the previously mentioned defects. It is these “lights” that the calibration frames are applied to using the image processing software. This process relies on the production of good quality calibration frames and on understanding how to use them properly; it isn’t difficult, but it requires care and attention to detail to ensure that you get the best possible images from the data collected (Fig. 9.1).

Fig. 9.1
A spectacular nighttime image of the sky.

Typical light Frame

Calibration Frames

Calibration frames are images taken in such a way that they specifically contain the errors or effects we are trying to remove from the images of an object. The information they contain can then be used to correct the actual image.

Calibration frames should ideally be taken with the optical and camera system exactly as it was when the actual images were taken, which is why it is suggested that they are taken at the end of an imaging run. This is not always convenient, as the extra time needed may not be available late into the night; just produce the calibration frames when you can, without moving the focuser or taking the camera off the telescope if possible. Dark frames are temperature dependent, so set the camera to the same temperature at which the images were taken and ensure that stray light cannot interfere with them. For flat frames the temperature is not important, but the focuser position should be the same as when the images were taken. It therefore makes sense to take these calibration frames at the end of an imaging session if at all possible, especially if using a standard DSLR, where you have no control over temperature. If you are using a dedicated camera with set point cooling, or a DSLR with a cooler fitted, this isn’t as much of a problem.

For calibration to work properly, take many calibration frames and let the software combine them before applying them. The end result is a cleaner image with reduced noise, fewer hot pixels and a flatter background that can be stretched to show more detail.

Dark Frames

Dark frames are simply images taken at the same temperature and exposure duration as the light frames but with all the covers on the telescope. This produces an image containing the electrical noise inherent in the camera system, along with the thermal (dark current) signal and hot pixels that build up during the exposure. The dark frames are then used to subtract these unwanted signals from the lights. It can make things easier to decide on standard exposure times for your images, for example 5 min and 10 min and occasionally 20 min, and then make matching dark frames that can be applied to them (Fig. 9.2).

Fig. 9.2
A photograph of a dark background.

Typical Dark frame
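The combining step for darks is typically a median (or similar outlier-resistant) combine across the set of matched frames. A minimal sketch in Python with NumPy, where `master_dark` is an illustrative helper name:

```python
import numpy as np

def master_dark(dark_frames):
    """Median-combine matched dark frames into a master dark.

    Median combination rejects one-off outliers (e.g. cosmic-ray
    hits) better than a simple mean. `dark_frames` is a list of
    2-D arrays taken at the same temperature and exposure length.
    """
    cube = np.stack(dark_frames, axis=0)
    return np.median(cube, axis=0)

# Toy example: three 2x2 "darks", one with a cosmic-ray spike.
darks = [
    np.array([[10.0, 11.0], [9.0, 10.0]]),
    np.array([[10.0, 500.0], [9.0, 10.0]]),   # spike at (0, 1)
    np.array([[11.0, 12.0], [10.0, 9.0]]),
]
md = master_dark(darks)
# The median suppresses the 500-count spike at (0, 1).
```

The same median-combine idea is what stacking software applies when it builds master flats and master bias frames from your sets of calibration exposures.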

Flat Frames or Flats

Flat frames are images which pick up undesired effects in the telescope system such as vignetting, which is usually a darkening of the image towards the edges of the frame, and dust marks, which show up as doughnut shapes in the image. Interestingly, the size of a doughnut is directly related to how far the dust is from the imaging sensor, which can be useful in determining which optical component the dust is on. When correctly taken flats are applied to an image, they flatten out the vignetting, making the background even across the image, and remove the dust spots, resulting in a much cleaner image that can be stretched to show more detail.

Flats work by taking an image of an evenly illuminated light source and using it to calibrate the actual images. The pixel values in the light frames are divided by the values in the flat frame, which results in even illumination across the image frame with dust doughnuts and vignetting removed. Flat frames are not temperature dependent as some of the other calibration frames are, but a flat frame does need to be taken for each filter used, and the appropriate flat is then applied to images taken with that filter (Fig. 9.3).

Fig. 9.3
A photograph of a light-shaded background.

Typical Flat frame

Producing Flat Frames

Flat frames are usually taken using one of two methods: sky flats or light box flats. The choice of method is not important and is simply a matter of convenience. There are two main considerations when taking flats: the first is using an even light source, and the second is the exposure time used to produce them.

Sky Flats

Sky flats are usually taken at dusk or dawn and are images of a blank area of sky or cloud, usually with a white T-shirt or some other form of diffuser stretched over the telescope aperture. The aim is to capture any blemishes such as dust spots, along with the vignetting in the optical system, producing an image that maps these defects. When this flat is applied to an actual object image it cancels out the vignetting and dust spots (Fig. 9.4).

Fig. 9.4
A photograph of a sky-flat setup on a rooftop. The tip of the dew shield of the sky flat is covered with a cloth.

Sky flat setup

Light Box Flats

Light box flats are taken by putting an evenly illuminated light box over the telescope aperture and shooting the flat frames. Light boxes are available commercially, are quite easily made at home with relatively few tools, or can even be improvised by adapting a flat panel ceiling light. The advantage of using a light box is that you can take your flat frames at any time, as long as no other light leaks into the telescope tube, which would make them ineffective.

No matter what types of flat frames have been produced they are applied to the image in the same way and serve exactly the same purpose (Fig. 9.5).

Fig. 9.5
A photograph of a light box flat setup in an open area. The tip of the telescope’s dew shield is covered with a rectangular-shaped object.

Light box flat set up

Hints for Taking Good Flat Frames

It does not matter if you take light box flats or sky flats; the following basic rules still apply:

  • The camera should still be connected to the telescope.

  • The focus and rotation of the camera needs to be unchanged from when the original images were taken for use with the flats.

  • Around 21 flat frames are needed to get good results.

  • The temperature at which you take flat frames is not important.

Producing Flat Frames with a DSLR

To produce good flat frames with a DSLR you need to shoot them at the same ISO setting as your image or light frames were taken.

  • Set the camera to aperture priority mode. Using this mode the camera will calculate the best exposure time to use when shooting the flat frames. Check the camera histogram to ensure that the flat frame is not under or over exposed.

Producing Flat Frames with a CCD or CMOS Astronomy Camera

Producing good flat frames with a dedicated astronomy camera takes a little more work, but the benefit is high quality images.

CCD and CMOS Camera Automatic Flat Frame Production

Programs like NINA and APT have tools to help create good flat frames and are worth using as they reduce the work. Enter the desired ADU number which is typically 25,000. It will be necessary to approximate the starting exposure length, usually in seconds. Also, set the minimum and maximum exposure times to try. The software will then determine the correct exposure value to use and create an imaging plan that will automate the process.

Producing Manual Flats

It is also possible to produce the flats manually without the use of the software described.

The following list details helpful facts about the image histogram needed in order to produce the flats.

  • The value of white in an image using a 16 bit camera is 65,535 (the largest value a 16 bit number can hold).

  • The value of black in an image using a 16 bit camera is 0.

  • The average ADU value of a typical good flat frame varies from camera to camera but a good starting point is 25,000 to 30,000 ADUs. As a rough guide, the ADU level of the flats should be approximately 1/3 to 1/2 the maximum ADU level of the camera, and when looking at the histogram it will appear approximately symmetrical.

  • It will be necessary to take various trial exposures; the starting point depends on the brightness of the light box or the sky. From the first exposure, use the histogram in the image capture program and its statistics tool to determine whether the exposure is too long or too short by looking at the ADU value. By repeating this process and adjusting the exposure time, it is possible to find the correct exposure needed for the flats.
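This trial-and-error loop can be sketched in code. A minimal illustration, assuming a hypothetical `capture(seconds)` function standing in for your camera-control software's exposure call, and a roughly linear sensor response:

```python
import numpy as np

def find_flat_exposure(capture, target_adu=27000, tol=2500,
                       start_s=1.0, max_iters=10):
    """Iteratively find a flat-frame exposure hitting a target ADU.

    `capture(seconds)` is a hypothetical stand-in for a camera
    capture call returning a 2-D array of pixel values. Each pass
    rescales the exposure assuming response is roughly linear
    with exposure time.
    """
    exposure = start_s
    for _ in range(max_iters):
        frame = capture(exposure)
        median_adu = float(np.median(frame))
        if abs(median_adu - target_adu) <= tol:
            break
        # Linear rescale of the exposure toward the target level.
        exposure *= target_adu / median_adu
    return exposure, median_adu

# Simulated light box: ~9000 ADU per second, clipped at 65535.
sim = lambda s: np.clip(np.full((4, 4), 9000.0 * s), 0, 65535)
exp_s, adu = find_flat_exposure(sim)
```

This is essentially what the automatic flat tools in programs like NINA and APT do for you once given a target ADU and exposure limits.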

Bias Frames

Bias frames are taken like dark frames, with all the covers on, but using the shortest exposure time available. The intention is to pick up the electrical offset and readout noise of the camera’s imaging chip and amplifiers and subtract that to get a cleaner image. These are only very slightly temperature dependent for most cameras (Fig. 9.6).Footnote 1

Fig. 9.6
A photo of a light-shaded frame.

Typical Bias Frame

Applying Calibration Frames to Your Images

Once you have a set of calibration frames, they are applied to your images using your chosen processing software. This is done before any stacking or stretching takes place, as the calibration frames are applied individually to the raw data from the camera. The resulting calibrated images are then ready to be stacked and should be free from vignetting, dust bunnies and hot pixels, with the noise within the image reduced.
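The calibration arithmetic itself is simple. A minimal sketch of one common recipe (software packages differ in detail, for instance in how they normalise the flat):

```python
import numpy as np

def calibrate(light, master_dark, master_flat, master_bias):
    """Apply calibration frames to one raw light frame.

    One common recipe: subtract the master dark from the light,
    bias-subtract the master flat, then divide by the flat
    normalised to a mean of 1 so overall brightness is preserved.
    """
    flat = master_flat - master_bias
    flat_norm = flat / flat.mean()
    return (light - master_dark) / flat_norm

# Toy frames: a vignetted flat with one corner at half sensitivity.
light = np.array([[60.0, 110.0], [110.0, 110.0]])
dark  = np.full((2, 2), 10.0)
bias  = np.full((2, 2), 5.0)
flat  = np.array([[30.0, 55.0], [55.0, 55.0]])
cal = calibrate(light, dark, flat, bias)
# The dim corner is boosted back level with the rest of the frame.
```

Stacking software runs exactly this kind of per-pixel arithmetic on every light frame before alignment and stacking begin.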

Aligning and Stacking

Once images have been calibrated, they will need to be aligned and stacked. In most image processing packages this is treated as a combined operation in which the chosen settings are carried out on all your selected images, and at the end a single aligned and stacked image is produced and saved.

Aligning Images

Aligning images is the process of making sure that all the objects in each image are correctly in register with those in the others, so that when the images are stacked together the resulting image retains all the detail. The alignment may simply require images to be shifted in the X, Y direction, or there may be a need to rotate them; this will largely occur when the telescope has done a meridian flip part way through an imaging run, or when the images to be aligned were taken on different nights. Scale can also be a factor if images from different telescopes and cameras are to be aligned with each other.

Astronomical image processing packages usually have automatic and manual modes of aligning your images.

Automatic Image Alignment

Fully automatic image alignment is simply a case of loading your images; the software then calculates the alignment, working out any X, Y movement, rotation and rescaling.
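For the translation-only case, one classic automatic technique is phase correlation. A simplified sketch (real alignment tools also solve for rotation, scale and sub-pixel shifts, often by matching star patterns):

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the (row, col) shift mapping `ref` onto `img`
    using phase correlation: the peak of the inverse FFT of the
    normalised cross-power spectrum sits at the translation."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    cross /= np.abs(cross) + 1e-12        # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts.
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))

# One "star" in a frame, and the same frame shifted by (2, 3).
ref = np.zeros((16, 16)); ref[5, 5] = 1.0
img = np.roll(ref, shift=(2, 3), axis=(0, 1))
dy, dx = estimate_shift(ref, img)          # recovers the shift
```

Once the shift is known, each frame can be moved back into register (e.g. with `np.roll` for whole-pixel shifts) before stacking.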

Stacking

The resulting images are then added together using a process called stacking to produce a final image that is then capable of being stretched in order to show the maximum possible detail from the images that have been captured (Fig. 9.7).

Fig. 9.7
A composite image of the stacking. The camera frames were combined to create the final image, which was then stretched.

Illustration of stacking

Common Methods of Stacking and Alignment

Stacking is the process used to ensure that when multiple images are combined it is done in such a way that all the features in each image are in register with each other. This preserves the detail in the image and helps reduce tracking errors between individual images.

No Alignment

This may be used in the unlikely event that all your images to be stacked are in perfect registration with each other or you have previously carried out the alignment.

Translation

Translation is used when there is only a vertical and horizontal direction shift between images to be stacked.

Translation Rotation and Scaling

Translation may also include rotation and scaling correction. These corrections may be used to stack images taken with more than one telescope or camera or where the telescope has carried out a meridian flip midsession resulting in some images being inverted.

Drizzle

Drizzle is a technique developed by Andrew Fruchter (STScI)Footnote 2 and Richard Hook (ST-ECF)Footnote 3 to recover the best possible detail from undersampled images, originally when processing Hubble Space Telescope images.

Stacking Functions

The following stacking functions may be available along with others depending on the stacking software you are using.

Average

This is a stacking function designed to reduce random noise in the final image; averaging N frames reduces the noise by roughly the square root of N.

Standard Deviation

Standard deviation (sigma clipping) stacking is designed to help eliminate outlier pixel values, such as hot pixels and even satellite trails, which do not occur in every image.
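A simplified single-pass version of this kind of rejection stack might look like the following (production stackers typically iterate the clipping and expose a tunable kappa value):

```python
import numpy as np

def sigma_clip_stack(frames, kappa=2.0):
    """Mean-stack frames after rejecting outlier pixels.

    At each pixel position, samples further than `kappa`
    standard deviations from the per-pixel median (satellite
    trails, hot pixels, cosmic rays) are masked out before the
    final average is taken.
    """
    cube = np.stack(frames, axis=0).astype(float)
    center = np.median(cube, axis=0)
    std = cube.std(axis=0)
    mask = np.abs(cube - center) <= kappa * std + 1e-12
    # Average only the surviving samples at each pixel.
    return (cube * mask).sum(axis=0) / mask.sum(axis=0)

# Five frames of a flat 100-count field; one has a satellite trail.
frames = [np.full((3, 3), 100.0) for _ in range(5)]
frames[2][1, :] = 5000.0                   # bright trail row
stacked = sigma_clip_stack(frames)
# The trail pixels are rejected; the background stays at 100.
```

A plain average of these frames would leave a ghost of the trail at 1080 counts; the clipped stack removes it entirely.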

The Advantages of Stacking

  • Stacking results in images that have reduced levels of noise. This enables finer detail and fainter objects to be seen.

  • Stacking enables you to use shorter individual exposure images which means if a single image is spoilt by bad tracking, wind catching the telescope or an aircraft going through the field of view only that frame is spoilt instead of a single very long exposure frame being spoilt.

  • The telescope tracking doesn’t have to be quite as accurate, because many short exposure images can be taken and added together.

  • The best frames can be selected from those taken, and only those used for stacking.

The Disadvantages of Stacking

  • Requires more storage space for a series of images on your computer.

  • Requires more steps and takes slightly more time to process your images.

There are many software packages that can be used for processing astrophotography images; some are free to use and others are commercial. Nebulosity 3 and 4, Astroart and Pixinsight, all of which are commercial products, are recommended.

Deep Sky Stacker is widely used to do the calibration and stacking, followed by either Nebulosity or Pixinsight, with a final noise reduction done in Adobe Lightroom or Photoshop. Workflow and image processing software is available to meet a range of needs.

Deep Sky Stacker

This is free to use and has been specifically written to apply calibration frames to images and then stack them with very little intervention from the user. It does a fantastic job and is quick and easy to use. All the user needs to do is load the appropriate images into DSS using the Open Picture files section of the dialogue box, look through the images to check them for quality, make sure the tick box next to each image is ticked, and press Stack checked images.

It then shows errors such as missing calibration files, offers recommended settings and allows you to alter stacking parameters using two buttons at the bottom of the dialogue box.

When the OK button is pressed it will proceed with the stack and display an unstretched stacked image, which you may save with the Save picture to file button. Deep Sky Stacker will also stitch together images that have been taken as mosaics, which can be very useful when an object is too big for the field of view of your telescope and camera combination.

ASTAP

ASTAP is free software designed to plate solve images as well as stack them, and it does a particularly good job of stacking mosaics automatically. ASTAP will also perform photometric measurements and automatically annotate images, as long as you load the appropriate database, which is very easy to do. It also has a very useful tool for imagers called CCD inspector, which takes the HFD values of the stars in an image and uses these to determine and display any sensor tilt and curvature of field. This can be a real help in diagnosing problems in an imaging system.

Once your images have been stacked, follow the instructions of your chosen stacking software and load the result into image processing software such as Nebulosity, Pixinsight, Photoshop or Gimp to stretch it and bring out its detail. To use Photoshop or Gimp you will need to save the image in a lossless format that they can read, such as TIFF.

It should be noted that Nebulosity and Pixinsight, along with Astroart and Maxim DL, are also capable of image calibration and stacking.

After the image has been calibrated and stacked, it is ready to be stretched, as it will not yet show the full detail contained in the data you have captured and will probably still appear quite dark.

Stretching an Image

Stretching an image rescales all the information contained in it so that it can be easily seen. An astronomical image straight from the camera will usually appear very dark with little detail showing; this needs to be addressed so that all the information contained in the image is visible.

Most software packages have an automated stretching routine which can be a good start if you are new to processing images. Whilst automatic stretching often does a good job, better results can be achieved by doing it manually (Fig. 9.8).

Fig. 9.8
A screenshot of Nebulosity. The histogram indicates a dark and downward trend, with the majority of the data at the bottom.

Histogram showing most of the data is at dark end. (Screenshot by permission of Nebulosity)

Looking at the graph, you will see that the horizontal axis shows the value, or brightness level, of the pixels in the image, and the vertical axis shows the number of pixels at each particular value. It can be seen from the first histogram above that the number of pixels containing actual image information is quite small and usually bunched together in a narrow peak, because most of the image is simply the blackness of space.

So, in order to see the information the image contains more easily, the data within the image must be rescaled so that it is distributed across the graph, which shows black tones at one end and white tones at the other (Fig. 9.9).

Fig. 9.9
A screenshot of Nebulosity, indicates the display tab. The histogram illustrates the data stretched to fit the full width and follows an increasing trend in the beginning, and decreasing trend throughout.

Histogram showing data stretched to fit the full width of the histogram. (Screenshot by permission of Nebulosity)

The next histogram shows the result of stretching an image and contains data from the black end of the histogram to the white end. This means that when the image is displayed or printed, all of the information contained in it will be visible as either shades of grey or color tones.

Levels Control

The basic tools used to stretch an image manually tend to be levels and curves. In the levels control, markers are moved on a histogram to mark the start and end of the information; this is known as setting the black and white points. In astronomical images it is not advisable to move the white point, as this will exaggerate any noise still in the image. There is also a midpoint marker which scales the midtones and has a huge effect on the overall brightness of the image (Fig. 9.10).

Fig. 9.10
A screenshot of levels or power stretch tab. It displays a midpoint marker which will scale the midtones and has a huge effect on the overall brightness of the image.

Level graph. (Screenshot by permission of Nebulosity)
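The levels operation can be expressed compactly: rescale between the black and white points, then apply a midtone (gamma) correction. A minimal sketch, where `levels` is an illustrative helper name:

```python
import numpy as np

def levels(img, black, white, gamma=1.0):
    """Apply a levels adjustment to an image.

    Pixels are rescaled so `black` maps to 0 and `white` to 1
    (values outside are clipped), then a midtone correction is
    applied: gamma > 1 brightens the midtones. `black` and
    `white` are in the image's original units, e.g. 16-bit ADU.
    """
    x = np.clip((img.astype(float) - black) / (white - black),
                0.0, 1.0)
    return x ** (1.0 / gamma)

# A dim 16-bit image bunched near the dark end of the histogram.
img = np.array([[1000.0, 3000.0], [5000.0, 9000.0]])
out = levels(img, black=1000, white=9000, gamma=2.0)
# Midtones are lifted: 0.25 -> 0.5, 0.5 -> ~0.71.
```

Moving the black point too far right is exactly the "clipping" mistake mentioned later: everything below `black` is forced to 0 and its detail is lost for good.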

Curves Control

The next function that can be used is curves; this allows scaling of the image information in a nonlinear way, boosting individual brightness levels to make them visible. It is implemented by manipulating a scaling curve over the histogram to get the desired effect and is a very powerful tool. Curves have to be used very carefully, as it is easy to overstretch an image, raising the noise level until it becomes visible and degrading the result (Fig. 9.11).

Fig. 9.11
A screenshot of the Bezier curves tab. The graph plots a diagonal line with 2 points marked on it. There are buttons for done, cancel, save, and zoom, along with pre-sets on the right side.

Curves Graph. (Screenshot by permission of Nebulosity)
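A curves adjustment amounts to passing every pixel through a user-defined transfer function. A minimal sketch using linear interpolation between control points (real tools such as Nebulosity’s Bezier curves fit smooth curves through them instead):

```python
import numpy as np

def apply_curve(img, points):
    """Apply a curves adjustment via control points.

    `points` is a list of (input, output) pairs in the 0..1
    range, like the points dragged on a curves graph; pixel
    values between control points are linearly interpolated.
    """
    xs, ys = zip(*sorted(points))
    return np.interp(img, xs, ys)

# Lift the faint midtones while pinning black and white.
curve = [(0.0, 0.0), (0.25, 0.5), (1.0, 1.0)]
img = np.array([[0.0, 0.25], [0.5, 1.0]])
out = apply_curve(img, curve)
# Faint 0.25 pixels are doubled to 0.5; the endpoints are fixed.
```

Note how the curve is steep at the dark end and shallow at the bright end: that is precisely the nonlinear boost of faint detail described above.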

Combination of Controls

Of course, when processing astronomical images there is no single solution and images will benefit from the use of both levels and curves.

Mono Images

If you are simply taking mono images, apart from using sharpening and noise reduction functions you may not need to do any more, other than save the final image. Save the master in a lossless format such as FITS or TIFF; smaller JPEGs for sharing can then be created from it. This way there is a backup master image containing all the information captured.

Color Images

If you are shooting with an OSC or DSLR camera you will also need to color balance the image, as it is usual for there to be a color cast. The software may provide a tool to simply do an overall color balance, or a tool to neutralize the background cast caused by light pollution. As with a mono image, the next step is to save the image in a lossless format such as FITS or TIFF as a master that can be used to create JPEGs for use in emails etc.

RGB Color Image from a Mono Camera

It takes a few more steps to produce a color image using separate filters and a mono camera, but the basic process is the same up to the point where the images are calibrated.

Remember to use flats taken with the correct filter for the files they are to be applied to; this way any dust bunnies present on the filters used will be removed.

The images will then need stacking according to the filter they were taken with: for an RGB image, all the red filter frames are stacked together, all the green filter frames together and all the blue filter frames together. This produces a stacked red filter master image, a stacked green filter master image and a stacked blue filter master image, which then need combining to produce a color image.

These master images will need to be aligned with each other, as in the stacking process, but saved individually rather than stacked together. This ensures that when they are combined as the color channels of the final image they are all in register, producing a properly registered color image without color fringes.

When this stacking and aligning has been completed, the images are used as the color channels in your processing software to produce the final color image, which must then be stretched to show all the detail and structure it contains.
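Channel combination itself is just stacking the three aligned masters along a third axis. A minimal sketch, with `combine_rgb` as an illustrative helper name:

```python
import numpy as np

def combine_rgb(red, green, blue):
    """Combine three aligned, stacked filter masters into one
    RGB image of shape (height, width, 3). The masters must
    already be in register, or stars will show color fringes.
    """
    if not (red.shape == green.shape == blue.shape):
        raise ValueError("channel masters must be the same size")
    return np.dstack([red, green, blue])

# Tiny stand-ins for the stacked R, G and B master images.
r = np.full((2, 2), 0.8)
g = np.full((2, 2), 0.4)
b = np.full((2, 2), 0.2)
rgb = combine_rgb(r, g, b)        # shape (2, 2, 3)
```

This mirrors the color channel dialogue in packages like Nebulosity or Pixinsight, where each master is loaded into its channel slot.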

Narrowband Images from a Mono Camera

Narrowband filters such as Ha, OIII and SII can be used in place of R, G and B filters to produce an image that, whilst not visually accurate, is capable of showing lots of detail at these different wavelengths. The images are combined in the same way as an RGB image, but because the result is a false color image there is a choice in how it is displayed: you choose which of the red, green and blue channels each individual narrowband image is placed in.

Hubble Palette

The Hubble telescope team combines its narrowband images using what is called the Hubble palette, which is designed to show as much detail as possible. This is done by remapping the color channels: the SII filter image is used in the standard red channel, the Ha filter image in the green channel and the OIII image in the blue channel.

So for a Hubble palette image the channels are combined as follows:

  • SII filter image is put in the red channel

  • Ha filter image is placed in the green channel

  • OIII filter image is placed in the blue channel

This is simply done by loading the appropriate image into each of the color channel dialogue boxes used in your chosen software package and watching as it produces a color image. The image will be, as mentioned, a false color map of the object but will show considerable detail due to the interaction of the mixed color channels.

The Hubble palette is only a starting point and any combination or mix can be used in order to show the detail required.

Bi Color Images

It can be useful to produce images captured using only two filters if the object to be imaged doesn’t contain information in all three channels. An example that works well with this technique is planetary nebulae: very successful color images can be achieved using just Ha and OIII filters. The images are combined as follows:

The Ha image is placed in the red channel, and the OIII image is placed in both the green and the blue channels. This produces an image with quite natural-looking colors for a planetary nebula.

The palette recommended for planetary nebulae using only two filters is:

  • Red channel: Ha

  • Green channel: OIII

  • Blue channel: OIII
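Both the Hubble palette and this bi-color palette are simply channel assignments, which makes them easy to express as a mapping. A minimal sketch (the `map_palette` helper and the filter data here are illustrative):

```python
import numpy as np

def map_palette(filters, palette):
    """Map stacked narrowband masters onto RGB channels.

    `filters` is a dict of aligned mono images keyed by filter
    name; `palette` names which filter goes in each channel.
    """
    return np.dstack([filters[palette[ch]]
                      for ch in ("R", "G", "B")])

# Tiny stand-ins for the stacked narrowband master images.
ha   = np.full((2, 2), 0.9)
oiii = np.full((2, 2), 0.3)
sii  = np.full((2, 2), 0.1)
data = {"Ha": ha, "OIII": oiii, "SII": sii}

# Hubble palette: SII -> red, Ha -> green, OIII -> blue.
hubble  = map_palette(data, {"R": "SII", "G": "Ha", "B": "OIII"})
# Bi-color palette: Ha -> red, OIII -> both green and blue.
bicolor = map_palette(data, {"R": "Ha", "G": "OIII", "B": "OIII"})
```

Trying other palette dictionaries is exactly the "any combination or mix" experimentation suggested above.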

Luminance Channel

A luminance channel is another image channel, like red, green or blue, that is added to an image to enhance detail, brightness and contrast. It should be taken at the full resolution of the camera. An advantage of using a luminance channel is that the R, G and B channels, or the Ha, OIII and SII channels, do not have to be taken at the same high resolution. This means that the other channels can be binned in order to increase the camera’s effective sensitivity and hence reduce the exposure time needed. This works because the luminance channel supplies the fine detail, while the other channels supply mainly the color.

Stacking Software

There are many software packages used to stack and process images; one widely used program is Deep Sky Stacker.

Stacking with Deep Sky Stacker

  • In order to use Deep Sky Stacker to stack your images, first ensure that you have your image files ready along with any calibration frames you are going to use: the lights, darks, bias frames and flats.

  • The first step is to open your image files or lights as they are known (Fig. 9.12).

  • Using the menu on the left under registering and stacking load the images to be stacked, followed by the dark frames, the flat frames and the bias frames.

  • Once these have all been loaded make sure that they have a tick in the check box on the left next to each frame; this can be done using the check all menu option if you have looked through your files and are happy that they appear to be of good enough quality.

  • This is followed by pressing register checked pictures which will bring up the following menu (Fig. 9.13).

Fig. 9.12
A screenshot indicates the main window of Deep Sky Stacker. It consists of options for registering and stacking, pre-processing, and options, along with their respective drop-downs on the left side, and a blank layout on the right side.

Deep Sky Stacker Main Window. (Screenshot by permission of David Partridge)

Fig. 9.13
A screenshot of Deep Sky Stacker's main window denotes the Register pictures Dialogue box. The register settings dialogue box contains options for registering previously registered images, automatic detection of hot pixels, stack after registering, and so on.

Register pictures Dialogue. (Screenshot by permission of David Partridge)

  • Pressing the recommended settings button will show the settings that the program recommends you use along with a brief summary of what it is going to do.

  • Pressing stacking parameters will give you access to the manual settings available to you; for now, however, it is best to go with the suggested settings, so from the Register settings dialogue box press the OK button. This will show a stacking steps dialogue box giving the stacking mode and alignment method to be used, along with the number of frames to be combined in the final image and their total exposure time.

  • This is followed by RGB details in case it is a color image that you are creating. Further down there are details of the calibration frames that will be used.

  • Again there are recommended settings and stacking parameters buttons; press OK to accept the software’s recommended options.

  • The calibration frames will now be applied to your images, which are then stacked according to the recommended settings. Finally, your calibrated and stacked image should appear (Fig. 9.14).

Fig. 9.14
A screenshot of the Stacked image Display window. It consists of an image of several dots against a dark background. Below the stacked image, The R G B slash K levels tab displays an increasing curve.

Stacked image Display window. (Screenshot by permission of David Partridge)

  • Below left of the image there is a histogram of the image data, along with sliders for the black, white and mid tones that can be used to make the image data easier to see. The image can then be saved using the Save picture to file option from the menu on the left of the screen. It is best to choose a lossless image format, meaning one that does not use a form of compression that might lose any of the original data. You now have an image ready to be further processed to bring out all the information contained within it; this is achieved by stretching the image using another piece of software.

Basic Technique for Stretching an Image Using Nebulosity

Nebulosity is a commercially available program that is very easy to use and has some very powerful features. The first step is to load the image into Nebulosity by using the File open file options from the menu at the top of the user screen.

Be aware of the histogram at the top right of the user interface; this has sliders for setting the black and white points and a check box to set these automatically. Note that this doesn’t affect the actual image data and is purely applied to what you see on the screen, enabling you to see what the image contains whilst you are working on it. One big mistake many new astrophotographers make is to darken the background of their images too much, losing detail contained in that part of the image; this is called clipping the black point (Fig. 9.15).

Fig. 9.15

Opening an image in Nebulosity. (Screenshot by permission of Stark-Labs)

  • When the initial image is loaded, the first screen doesn’t look very promising, but the data is hidden within this apparently featureless, dark image; all we have to do is make it visible by manipulating the data, a process called stretching the image. This can be done quickly in Nebulosity using one of its built-in tools, Digital Development Processing (DDP), which can be found under the Image menu (Fig. 9.16).

Fig. 9.16

Using DDP in Nebulosity. (Screenshot by permission of Stark-Labs)

  • The process starts by pressing the DDP button. This produces a stretched image and brings up a small slider dialogue box containing the following controls:

Bkg:

This sets the level for the background in the output image.

Xover:

This sets the crossover point at which the stretch changes from a linear to a curved function.

B Power:

This slider is also used to darken the background.

Edge detail:

This smaller slider, at the right of the dialogue box, controls the amount of sharpening done during the DDP process.
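At the heart of DDP is a hyperbolic stretch. The exact formula Nebulosity uses is not documented here, but a commonly cited formulation shows how a control like Bkg behaves:

```python
import numpy as np

def ddp_stretch(data, bkg):
    """Approximate DDP-style stretch: faint pixels just above the
    background are boosted strongly, while bright pixels are
    compressed toward white."""
    data = data.astype(float)
    return data / (data + bkg)

# Invented pixel intensities from faint nebulosity up to a bright star.
image = np.array([50.0, 200.0, 2000.0, 20000.0])
stretched_ddp = ddp_stretch(image, bkg=200.0)
```

A pixel equal to the Bkg value lands at exactly the mid-point of the output range, which is why lowering Bkg pulls fainter and fainter detail up out of the background.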

  • Once you start to move these sliders around, it becomes very easy to appreciate what they are doing, because they work in real time, and they help you produce a stretched image very quickly. Move the sliders as little as possible; many beginners spoil their images by being heavy handed in processing, which introduces noise and processing artifacts.

  • DDP might not produce the best possible image from the data, but it does give a fast and easy introduction to processing images, which is welcome at this point when there is so much else to learn.

  • More advanced techniques can come later but the images produced using this quick easy method will be much admired by family, friends and other astrophotographers.

  • Nebulosity also has levels and curves controls, which allow more manual control over processing images, but these require practice and care to get right.

  • Nebulosity has tools for reducing noise in images, for sharpening and for combining images taken through filters to produce both full color and narrowband color images (Fig. 9.17).

Fig. 9.17

Sharpening tools in Nebulosity. (Screenshot by permission of Stark-Labs)

Other Nebulosity Tools Worthy of a Mention

Synthetic flat field tool:

This tool is a useful alternative when flat frames are not available for calibrating an image, as it will extract an approximation of a flat frame from the image and apply it appropriately. This is not as good as using a real set of flat frames, but it is useful if needed.
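The idea behind a synthetic flat can be sketched as follows: smooth the image heavily so that only the large-scale illumination pattern (such as vignetting) survives, then divide it out. This toy version uses invented data and a simple repeated box blur, which is certainly not what Nebulosity does internally, but it shows the principle:

```python
import numpy as np

def box_blur(data, passes=10):
    """Repeatedly average each pixel with its four neighbours to wash
    out stars and noise, leaving only large-scale gradients."""
    out = data.astype(float)
    for _ in range(passes):
        padded = np.pad(out, 1, mode="edge")
        out = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:] +
               padded[1:-1, 1:-1]) / 5.0
    return out

rng = np.random.default_rng(0)
# Fake image: flat 1000 ADU sky with left-to-right vignetting plus noise.
vignette = np.linspace(1.0, 0.6, 64)[None, :] * np.ones((64, 64))
image = 1000.0 * vignette + rng.normal(0.0, 5.0, (64, 64))

synthetic_flat = box_blur(image, passes=25)
corrected = image / (synthetic_flat / synthetic_flat.mean())
```

After dividing by the normalised synthetic flat, the vignetting gradient is largely gone; the drawback, as the text says, is that real structure in the image can leak into the synthetic flat, which a genuine set of flat frames avoids.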

LRGB Color Synthesis:

This is the tool mentioned earlier for combining mono images taken through filters to produce color images.

Sharpen/Blur Image:

This submenu contains powerful tools for sharpening and blurring images. Use the sharpening tools with care, though, as they can easily introduce noise and unwanted artifacts such as dark halos around stars.
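Those dark halos come from the overshoot inherent in sharpening. Unsharp masking is one common sharpening technique (whether Nebulosity uses exactly this internally is not stated here); applied to an invented one-dimensional star profile, it shows where halos come from:

```python
import numpy as np

def unsharp_mask(signal, amount):
    """Sharpen by adding back the difference between the signal and a
    blurred copy of itself; large 'amount' values overshoot."""
    blurred = np.convolve(signal, np.ones(3) / 3.0, mode="same")
    return signal + amount * (signal - blurred)

# A star profile sitting on a flat sky of 100 ADU.
star = np.array([100.0, 100.0, 500.0, 100.0, 100.0])
sharpened = unsharp_mask(star, amount=2.0)
# The peak is boosted, but the pixels either side of the star are pushed
# below the sky level -- the dark halo the text warns about.
```

Keeping the sharpening amount small limits the overshoot, which is exactly the "use with care" advice above.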

Pixel Stats:

This is a very useful dialogue box that shows the minimum, maximum and mean pixel intensities (in ADU) for either the whole image or the area under the cursor. This is useful if you are producing flat frames and do not want to use one of the tools available in other programs, as it lets you check that the pixel values are correct for your camera so that the flats work effectively.
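The statistics themselves are simple to reproduce. In this sketch the flat-frame values are invented, and the "one-third to one-half of full scale" target is only a common rule of thumb; the right value depends on your particular camera:

```python
import numpy as np

# Min/max/mean ADU statistics, as a Pixel Stats-style readout reports.
flat = np.array([[30000, 31000],
                 [32000, 33000]], dtype=np.uint16)  # invented flat-frame data

stats = {"min": int(flat.min()),
         "max": int(flat.max()),
         "mean": float(flat.mean())}

# Rule of thumb: mean around one-third to one-half of full scale
# (65535 ADU for a 16-bit sensor).
well_exposed = 0.3 * 65535 < stats["mean"] < 0.6 * 65535
```

Checking the mean like this before shooting a full set of flats saves discovering later that they were badly under- or over-exposed.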

Saving Your Images

It makes sense to keep images somewhere secure as you may wish to refer to them in the future.

Saving Old Calibration Frames

It is a good idea to save your processed image files, along with the original image files and the calibration frames used to process them. Saving files like this means that, as you revisit the same objects again and again, you will be able to add more and more data to them. Over time this builds images that are the result of many hours of work and very long total exposure times, recording ever fainter detail. This can only be done successfully if the calibration frames are saved as well as the images themselves.

By now you should have a basic understanding of what image processing is, why it is necessary and the basic steps involved. Please treat this as a very basic starting point, as this is a huge subject that requires the investment of considerable time and effort in order to produce the best images.