
How to Choose a CCD Camera

    There's a lot to consider when choosing a CCD camera, and it can be a confusing process.   As described below, your choice of camera will partly depend upon the quality of your imaging site, as well as your telescope.  This is not meant to be a comprehensive review of the subject, but if you read through it carefully, I think that you will have a good understanding of the more important issues.

    Note- Although this section was written several years ago for CCD cameras, many of the concepts below apply to CMOS cameras as well.  However, information that is CMOS-specific (like adjusting gain to achieve a desired read noise and dynamic range) is not included, but I hope to add this information shortly.

Pixel Size
    Image resolution is largely related to a characteristic of your system known as "image scale," expressed as arcseconds per pixel.  For simplicity's sake, we are ignoring the influence of your optics and tracking on image resolution, although these factors are obviously important as well.  The image scale of your system depends upon only two factors- your CCD camera's pixel size, and your telescope's effective focal length.  A low number for image scale (like 0.5 arcsec/pixel) means high resolution, and a high number for image scale (like 10 arcsec/pixel) means lower resolution.  This makes sense- if 10 arcsec worth of detail are represented by only one pixel, you have essentially crammed all of that detail into one point!  Calculation of image scale (in units of arcsec/pixel) is easy:  206.265 x pixel size (in microns) / effective focal length (in mm), where the constant 206.265 is often rounded to 206.
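    To make this concrete, here is a minimal Python sketch of the image scale calculation (the pixel size and focal length values are hypothetical, for illustration only):

        def image_scale(pixel_size_um, focal_length_mm):
            # Image scale in arcsec/pixel: 206.265 x pixel size (microns) / focal length (mm)
            return 206.265 * pixel_size_um / focal_length_mm

        # Hypothetical example: 6.8 micron pixels on a 600 mm focal length telescope
        print(image_scale(6.8, 600))  # ~2.34 arcsec/pixel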


    Regardless of your image scale, however, the best resolution that you can achieve is still limited by your seeing conditions (as well as your optics and mount).  For instance, at my imaging location, seeing is typically around 3.5 arcseconds FWHM (full width at half maximum, which refers to the width of the star profile measured at 1/2 the maximum pixel intensity of that star).  This means that the finest detail I can resolve is only around 3.5 arcseconds.  So what is the best image scale to aim for (in other words, what pixel size should we want in our CCD camera, for a given focal length telescope)?  Surprisingly, it's not 3.5"/pixel at my site.  The Nyquist theorem suggests that in order to efficiently record this information and convert it into digital format, our system should sample the image more finely, operating at an image scale of about 1/3 times the seeing- in this example, 1/3 x 3.5", or about 1.17"/pixel.  (Actually, it should be 1/3.3 times the seeing, but who's counting.)  This means that my CCD camera/scope combination should ideally have an image scale of 1.17"/pixel, in order to take full advantage of my (suboptimal) seeing conditions and produce a final resolution of approximately 3.5 arcseconds.  This places a lower limit on the image scale that we should aim for with our CCD camera/scope combination.  Anything lower than this is wasted effort for our seeing conditions, and in fact makes imaging more difficult.  In our example above, anything lower than about 1.17"/pixel will be associated with less sensitivity (smaller pixels) and greater potential to reveal guiding errors.  Note that for better seeing conditions, the optimal image scale will be different.  So if my seeing is 2" FWHM, I should choose my camera/scope combination to produce an image scale of about 1/3 times 2", or about 0.67 arcsec/pixel.  For most of us who don't live in areas of great seeing, an average value for seeing on a decent night will be in the 3-3.5" FWHM range.  So for practical purposes, a lower limit for image scale of about 1"/pixel is a perfectly reasonable value to keep in mind.  But what about an upper limit?
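    To work the Nyquist guideline backward (from seeing and focal length to a target pixel size), here is a small Python sketch; the sampling factor of 3.3 comes from the discussion above, and the telescope values are hypothetical:

        def target_pixel_size(seeing_fwhm_arcsec, focal_length_mm, sampling=3.3):
            # Pixel size (microns) that yields an image scale of seeing/sampling arcsec/pixel
            target_scale = seeing_fwhm_arcsec / sampling
            return target_scale * focal_length_mm / 206.265  # inverts the image scale formula

        # Hypothetical example: 3.5" FWHM seeing with a 1000 mm focal length scope
        print(target_pixel_size(3.5, 1000))  # ~5.1 microns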


    To answer this, remember that the Nyquist theorem tells you what the ideal image scale should be, but it doesn't mean that a less ideal image scale is necessarily bad.  Take a look at the images on my website and note that the image scale for the Sky90 f4.5 photos is about 3.26"/pixel, and that the image scale for many of the FS102 f6 photos is about 2.29"/pixel, whereas the ideal image scale according to Nyquist would be around 1.17"/pixel (with seeing of 3.5" FWHM at my site).  In other words, essentially all of the images on my website are, strictly speaking, "undersampled."  Needless to say, I don't lose any sleep over this.  Although it's true that the FS102 images at an image scale of 2.29"/pixel show more detail, the Sky90 images at an image scale of 3.26"/pixel are pretty nice too.  A good rule of thumb for most imaging sites and objects is this:  an upper limit image scale of about 3.5"/pixel is a safe bet for producing nice images with reasonable detail.  Higher than this (e.g., 5-10 arcsec/pixel) will produce "softer" images lacking in finer detail (although this may be perfectly acceptable for wider field views if the imager is only interested in showing large scale structure).  And as mentioned above, going below an image scale of 1-1.5 arcsec/pixel for most sites will not produce better images, since your resolution will be seeing-limited.  Thus, there is some flexibility in the image scale, with a range of 1-3.5"/pixel being safe for most of our average seeing conditions.  Obviously, the rules are different if you are lucky enough to be imaging in New Mexico.

Dynamic Range
    Dynamic range refers to the difference between the brightest and the faintest regions of an image that can be simultaneously recorded by the CCD camera.  The larger the difference, the better you will be able to capture faint signal in your image without blowing out (clipping) the highlights.  For a CCD camera, dynamic range can be estimated by dividing Full Well Capacity (in electrons) by the Read Noise (RMS electrons). Take the U32 camera as an example. The full well capacity of my camera is 55,000 electrons, and my read noise is about 8 RMS electrons (I measured both parameters for my specific camera), yielding a dynamic range of 6,875. The read noise is the denominator since it represents the smallest possible signal that can be captured by the camera- you can't capture less than that, because it would be buried in the noise (see below for more information about read noise). Another valid way of looking at this dynamic range metric (6,875 in my example) is that it represents how many steps above the read noise floor the CCD chip is capable of recording (since the read noise represents the smallest increment that can be resolved by the camera). If you calculate the dynamic range of other cameras, many will be lower than 6,875 steps, and some will be higher. If it's lower, say around 3,000 steps, this could be a potential problem, especially if you image in a relatively light polluted site (like most of us).  For instance, if your light pollution takes up 50% of your well depth, you will have more room "left over" for actual signal if you are starting with 6,875 steps, as opposed to 3,000 steps.  So a higher dynamic range is generally better than a low one, meaning that you should choose a camera with a high full well capacity, low read noise, or both.
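    Here is the same arithmetic as a minimal Python sketch, using the U32 numbers quoted above (the 50% light pollution figure is just the illustrative assumption from the text):

        full_well_e = 55000    # full well capacity, electrons
        read_noise_e = 8       # read noise, RMS electrons

        dynamic_range = full_well_e / read_noise_e
        print(dynamic_range)            # 6875 steps above the read noise floor

        # If light pollution fills 50% of the well, the steps left for real signal:
        print(0.5 * dynamic_range)      # 3437.5 steps of headroom remaining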

    Note that dynamic range of a CCD camera is a measure of two related properties- 1) the difference between the highest and lowest signal intensities, and 2) the number of steps captured between the highest and lowest intensities. However, it does NOT guarantee that all of those captured steps will be faithfully rendered in the process of converting the electron signal of each pixel into a digital read out (i.e., the analog to digital conversion).  Once captured, the ability to faithfully render those steps is related to another feature called "bit depth,"  which is a property of the camera's analog to digital converter.  The analog to digital converter of most good quality CCD cameras already operates at a bit depth of 16 bits, meaning that it can convert the analog signal into 2^16, or 65,536 digital steps. Since my camera's dynamic range can capture 6,875 steps (i.e., 55,000 divided by 8), the 16 bit depth of my AD converter (which is able to render 65,536 steps) is more than enough! Conversely, a bit depth of 12 would not be adequate for my camera, since it would only be able to render 2^12 or 4,096 steps, whereas my camera's dynamic range has captured 6,875 steps. From this description, it should be clear that dynamic range and bit depth are two different characteristics of the camera.  You need a high dynamic range to capture faint signal without blowing out the highlights, and to capture fine gradations of intensity ("steps") in the image.  However, you need a high bit depth to actually render all of those captured steps into a useable digital output. 

    A camera can have a high dynamic range but low bit depth, in which case you will not be taking full advantage of all of those fine gradations of intensity that the camera has captured. This wastes the dynamic range and is not optimal.  Likewise, a camera can have a low dynamic range but high bit depth, in which case you will certainly take advantage of the dynamic range, but there just won't be many steps available for the AD converter to render. I have gone into this in greater detail than necessary, but I find that there is continued confusion about the difference between dynamic range and bit depth and hope that this clarifies the issue. 
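    The connection between dynamic range and bit depth boils down to a couple of lines of Python (a sketch, using the numbers above):

        import math

        dynamic_range = 55000 / 8                           # 6875 steps captured by the camera
        bits_needed = math.ceil(math.log2(dynamic_range))
        print(bits_needed)   # 13 bits needed, so a 16 bit ADC (65,536 steps) is ample,
                             # while a 12 bit ADC (4,096 steps) would fall short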

Dark Signal
    Every CCD camera generates a dark signal that varies with exposure time and chip temperature. A CCD chip works by converting incoming photons of visible light into electrons, which are stored in the pixels and later converted to a digital signal.  However, it turns out that electrons are not only produced by photons of visible light that strike the CCD chip.  Dark signal refers to electrons that are generated in the absence of light, as a result of heat produced by the CCD camera chip itself.  These "thermal electrons" create hot pixels which increase in intensity over the exposure duration, and which can be minimized by cooling the CCD chip.  Some chips (like the Sony "Exview" chip used in the SXV-H9 camera) have very low dark signal.  Most other chips like the Kodak series have enough dark signal to warrant dark frame calibration (meaning that the dark frame is subtracted from the light frame, in order to remove the hot pixels that represent the effects of thermal electrons).  Although it's nice not to have to worry about dark frame calibration, performing it is not a big deal either.  Almost all CCD cameras that use Kodak series chips will be temperature regulated, meaning that you can specify the desired chip temperature during imaging.  This allows you to generate a series of dark frames at the same temperature (and duration), to be used as a dark frame master for future images taken at the same temperature.  Creating a dark frame master library makes dark frame calibration relatively painless.
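    For illustration, here is a minimal Python sketch of dark frame calibration, assuming numpy and astropy are available and that the dark frames were taken at the same temperature and duration as the light frame (the file names are hypothetical; real calibration software also handles bias frames, flats, and scaling):

        import numpy as np
        from astropy.io import fits

        # Median-combine several dark frames into a master dark
        dark_files = ["dark1.fits", "dark2.fits", "dark3.fits"]   # hypothetical names
        master_dark = np.median([fits.getdata(f).astype(np.float32) for f in dark_files], axis=0)

        # Subtract the master dark from a light frame to remove the dark signal (hot pixels)
        light = fits.getdata("light1.fits").astype(np.float32)
        calibrated = light - master_dark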

Choice of Blooming (NABG) versus Anti-Blooming (ABG) Cameras
    Blooming is a phenomenon that occurs when electrons fill the well of a given pixel and spill over into adjacent pixels, causing a bright, vertical streak that destroys the data contained within those adjacent pixels.  CCD chips that bloom are called "non anti-blooming gate" chips (NABG) and are typical of many Kodak KAF series chips (although note that some KAF chips do have anti-blooming gates).  Anti-blooming chips do not have this problem.  They contain an "anti-blooming gate" (ABG) that bleeds off electrons before they can spill over into adjacent pixels.  ABG chips are typical of the Kodak KAI series and the Sony Exview series.  Sounds like we should all be using ABG chips to avoid blooming, right?  In order to appreciate why the choice isn't always so simple, take a look at the Quantum Efficiency (QE) curves of a NABG versus an ABG camera, and you will see the problem (QE curves are usually available on CCD vendor websites).  QE is a measure of how efficiently a chip converts photons to electrons.  Because the anti-blooming structures take up space in the pixel, less surface area is available for detecting photons.  Thus, the QE of ABG chips is quite low compared to that of NABG chips.  So how do we choose between a NABG camera that blooms but has greater sensitivity, versus an ABG camera without blooming but with lower sensitivity?  As explained below, the choice is largely dependent upon how long your subexposure times will be, and this in turn is dependent upon sky noise and read noise.

    When taking a subexposure, we want to maximize signal and minimize noise (i.e., maximize the signal to noise ratio).  Noise is uncertainty in the true value of the pixel, which shows itself as variability in the results of a given measurement.  For instance, if we expose the chip to a constant light source for a fixed period of time, measure the number of photons being captured by the pixel, and repeat this 10 times in exactly the same way, we will not always get the same result!  The degree to which the results fluctuate is referred to as noise, and it can be quantified.   Noise is related to 3 main effects:

1) "Photon Noise" is a property of the signal from collected light (i.e., the desired signal plus any sky background).  Photons arrive in packets, at irregular intervals, and it's this unpredictability in arrival times that generates their noise, which is also referred to as shot noise;

2) "Dark Noise" is a property of the signal generated by thermal electrons (mentioned above);

3) "Read Noise" is another layer of variability in the signal that is introduced by the chip amplifier responsible for converting the analog signal (i.e., the electrons in each pixel well) into a digital signal that our image processing program can use.  In simple terms, if the pixel had 100 electrons to read out (and remember that this value itself is subject to the noise mentioned in points 1 and 2 above), then the analog to digital unit converter might read this out as 96 electrons instead of 100 (for example).  This extra layer of variability is referred to as read noise.

    Photon Noise is unavoidable and is largely contributed by sky background (light pollution).  The longer you expose, the more photon noise you will have, but the greater the chance of acquiring your desired signal.  So think of photon noise as a necessary evil.  Dark Noise is also unavoidable but can be minimized by cooling the CCD chip.  That brings us to Read Noise.  As stated above, read noise is a fixed amount of noise that is introduced by the readout electronics every time an image is read out from your camera into your computer.  Every camera has a certain amount of read noise (some less than others- check the specifications).  In contrast, photon noise is mainly due to sky background and is dependent upon your imaging site.  At a given imaging site, the sky background signal is proportional to the subexposure duration, and its noise grows as the square root of that signal.  If your subexposure time is too short, the sky background noise will be minimal (and so will your desired signal), and your image will be dominated by read noise (your exposure is "read noise limited," which is not ideal).  Conversely, if your subexposure time is long enough, sky background noise is very large compared to your read noise, and you essentially drown out the effects of read noise.  Your image is said to be "photon noise limited," which is good.  In other words, by exposing your subs long enough so that the sky background noise overwhelms the read noise, you effectively minimize the influence of read noise in your image.  Once you reach a subexposure duration where the read noise contribution becomes less than 5-10% of the total noise in the image, there appears to be no major advantage to prolonging the subexposure further.  If you are interested in learning more about this, please check out my subexposure duration page for additional details.
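    Here is a minimal Python sketch of this reasoning, using a simplified noise model that combines sky shot noise and read noise in quadrature and ignores dark noise; the sky flux values are hypothetical placeholders (measure your own, or use one of the calculators mentioned below):

        import math

        read_noise_e = 8.0   # RMS electrons

        def read_noise_fraction(sky_flux, t_sec):
            # Read noise as a fraction of total noise (sky shot noise + read noise in quadrature)
            return read_noise_e / math.sqrt(sky_flux * t_sec + read_noise_e**2)

        # Solve read_noise_fraction = 0.10 for t:  t = 99 * read_noise^2 / sky_flux
        for sky_flux in (2.0, 20.0):   # hypothetical dark site vs. light polluted site, e-/pixel/sec
            t_min = 99 * read_noise_e**2 / sky_flux
            print(round(t_min / 60, 1), "min ->", read_noise_fraction(sky_flux, t_min))
        # ~52.8 min at the dark site versus ~5.3 min at the brighter site- consistent with
        # points 1 and 2 below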

    It follows that subexposure duration is largely dependent on two factors: sky background noise and read noise.  We want to aim for a subexposure duration that will reduce the read noise contribution to about 5-10%.  John Smith has done some nice work in this area, and his subexposure calculator is very instructive to use.  I have also analyzed subexposure duration and provide an alternative subexposure calculator for this purpose.  So what does this have to do with the choice between NABG and ABG cameras?  Here are some "bottom line" observations that are useful to consider:


1. At a dark site, sky background noise is very low. In order to generate enough sky noise in an individual subexposure to drown out the read noise, the subexposure duration will therefore have to be quite long.  Subexposure times in the range of 30-60 minutes (unbinned) may be necessary at a dark site in order to get the read noise contribution down to 5-10% for most CCD cameras.  At f ratios in the f4-f8 range, most NABG cameras will bloom like crazy during a 30-60 minute exposure, making this type of camera impractical for a dark site.  So a compromise has to be made for dark sites- because of the need for long subexposures (it's a need, not really a choice), an ABG camera is ideal in order to avoid blooming, and its lower QE is accepted as a necessary evil.  The fact that the imaging is being done at a dark site makes up for the lower QE in most cases, since there is a greater chance of detecting faint signals that are not drowned out by sky noise.

2. For the rest of us who image in relatively light polluted sites, sky noise is much higher. Therefore, subexposure times in the range of only 5-15 minutes are usually sufficient to drown out the read noise contribution to less than 10% of total noise (it's really true- crunch some numbers using John Smith's calculator to convince yourself).  At my imaging site, where I've measured sky flux with the U32 on several different occasions, my typical subexposure times are in the range of 5 minutes for luminance, 8 minutes for RGB, and 10 minutes for Ha (6 nm bandpass), in order to achieve a read noise contribution of around 10% or less.  These exposure durations were not chosen by accident- they were chosen to achieve a low read noise contribution at my imaging site.  With relatively short subexposure times, the amount of blooming seen in a typical star field with a NABG camera is generally easy to manage with currently available software.  And given the need for shorter subs based upon sky noise (it's a need, not really a choice), one could make the argument that it's better to have a chip with a high QE, such as a NABG camera, in order to maximize signal during a relatively short subexposure.  It's ironic that the presence of light pollution makes a more sensitive NABG camera a viable option, whereas those under dark skies often must use a camera that is intrinsically less sensitive (ABG). Despite all of these considerations, it is not necessary to use a NABG camera just because your subs will be relatively short.  You could certainly choose an ABG for light polluted skies, realizing that you will need a longer cumulative exposure to compensate for the lower QE of such a camera.  Still, an ABG camera will permit you to take photos of objects such as the Pleiades and M42 without worrying about blooming (which will occur with a NABG, even at short exposures, for these types of bright targets).


Putting it all together

1.  Pixel size / Image scale:  If your seeing is average (applies to most of us), consider cameras with pixel sizes that will yield an image scale in the range of 1.0-3.5 arcsec/pixel. If you have a good mount that guides well, aim for the lower end of this image scale range in order to maximize resolution.  Much below 1.0"/pixel is usually wasted effort for most of us, since conditions are often seeing-limited.  A bit higher than 3.5"/pixel can be fine as well, as long as you don't mind a softer look to your images.  These are only general guidelines- don't get hung up on any of this.  You can produce a great astroimage even if you are not using the ideal Nyquist value for image scale, although staying within this general range (1.0-3.5 arcsec/pixel) is a good idea for most of us. These rules don't apply for those with great seeing, where optimal image scales would be well under 1.0 arcsec/pixel.

2.  Dynamic range:  All things being equal, get a camera with a higher dynamic range (full well capacity divided by read noise).

3.  Dark signal:  Don't worry too much about this, given the quality of modern-day, cooled chips. If it's there, you will remove it with dark frames. If it's not, you won't. Certainly, if you have two cameras that are equivalent in all other important aspects (image scale, dynamic range, read noise, QE, etc.), then get the one that has the lower dark signal.  However, after reading this primer and looking at camera specifications, you will see that it's not always that simple.  A camera may have a higher dark signal, and yet have features such as better dynamic range and higher QE that make it a more attractive choice, despite the need to dark subtract.

4.  NABG versus ABG:  If you image at a reasonably light polluted site which requires short subexposures, you have a choice of either NABG or ABG cameras.  With NABG cameras, the QE will be higher, and the blooming will be manageable for most star fields over relatively short subexposure durations.  For me, a NABG camera was a logical choice for my second camera, especially since I was interested in a chip with high Ha sensitivity, and since my subexposure times would be short at my imaging site. I've been pleased with the Ha sensitivity of the KAF3200 chip (U32 camera), and it's related in large part to the NABG feature.  If you are just starting out and plan to take lots of photos of bright star clusters like the Pleiades, then an ABG camera is perhaps the better choice, since you won't have to deal with blooms (my first camera was the SXV-H9, which is ABG).  You can always take longer cumulative exposures to compensate for the lower QE of an ABG camera, if you have the time and patience. If you are at a darker site where you must use longer subexposures, an ABG camera is the way to go.

5. Chip size:  The CCD chip dimensions and your scope's focal length will dictate the field of view (FOV) of your images.  The calculation is easy:  FOV (in degrees) = 57.3 x CCD chip dimension (in mm) / effective focal length (in mm); Ron Wodaski's CCD calculator will also provide the field of view for a given camera/scope combination, and a small sketch of the calculation appears after this list.  However, a bigger chip is not always better.  Remember that with a larger chip you will be more likely to have problems with 1) field curvature if your scope does not provide a flat field over the chip's entire surface area, 2) vignetting, and 3) camera sag/flexure due to increased camera weight, resulting in non-orthogonality of the chip to the optical axis, which introduces optical aberrations at the edge of the field (do not underestimate the frustration that can occur due to this last point).  If you have to crop out a significant amount of the image due to oblong stars or severe vignetting at the periphery of your field, you would have been better off with a smaller chip in the first place (less money, less frustration, smaller files, etc.).  Make sure that you talk with the telescope vendor, or e-mail astroimagers who are using specific types of telescopes, to determine whether a given scope will support the chip size of the camera that you are interested in.
 

6. Monochrome versus one-shot color:  I didn't discuss this above, but will mention it briefly now. One-shot color CCD cameras have lower sensitivity and resolution compared to monochrome cameras.  The Bayer matrix present in one-shot color cameras is responsible for this problem, and there are many websites that discuss this issue in great detail.  However, it's possible to take nice images of relatively bright objects with one-shot color CCD cameras, and you wouldn't need to invest in a separate filter wheel and costly filters.   Note that one-shot CCD color cameras seem to be very susceptible to the effects of light pollution, and most require some type of LPS filter in the imaging train, which will further decrease sensitivity.  Most CCD imagers use monochrome cameras, but the one-shot color CCD camera is a viable alternative, as long as you realize the potential downsides (decreased resolution being the most important problem in my view).
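    As a footnote to point 5 above, here is a minimal Python sketch of the FOV calculation (the chip dimensions are a hypothetical example):

        def fov_degrees(chip_mm, focal_length_mm):
            # FOV in degrees: 57.3 x chip dimension (mm) / focal length (mm)
            return 57.3 * chip_mm / focal_length_mm

        # Hypothetical example: a 14.8 mm x 10.2 mm chip on a 600 mm focal length scope
        print(fov_degrees(14.8, 600), fov_degrees(10.2, 600))   # ~1.41 x ~0.97 degrees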

     Steve


Copyright Steve Cannistra
