There's a lot to consider when choosing a CCD camera, and it can be a confusing process. As described below, your choice of camera will partly depend upon the quality of your imaging site, as well as your telescope. This is not meant to be a comprehensive review of the subject, but if you read through it carefully, I think that you will have a good understanding of the more important issues.
Although this section was written several years ago for CCD cameras,
many of the concepts below apply to CMOS cameras as well.
However, information that is CMOS-specific (like adjusting gain to
achieve a desired read noise and dynamic range) is not included, but I
hope to add this information shortly.
Dynamic range refers to the difference between the brightest and the faintest regions of an image that can be simultaneously recorded by the CCD camera. The larger the difference, the better you will be able to capture faint signal in your image without blowing out (clipping) the highlights. For a CCD camera, dynamic range can be estimated by dividing Full Well Capacity (in electrons) by the Read Noise (RMS electrons). Take the U32 camera as an example. The full well capacity of my camera is 55,000 electrons, and my read noise is about 8 RMS electrons (I measured both parameters for my specific camera), yielding a dynamic range of 6,875. The read noise is the denominator since it represents the smallest possible signal that can be captured by the camera- anything less would be buried in the noise (see below for more information about read noise). Another valid way of looking at this dynamic range metric (6,875 in my example) is that it represents how many steps above the read noise floor the CCD chip is capable of recording (since the read noise represents the smallest increment that can be resolved by the camera). If you calculate the dynamic range of other cameras, many will be lower than 6,875 steps, and some will be higher. If it's lower, say around 3,000 steps, this could be a potential problem, especially if you image at a relatively light polluted site (like most of us). For instance, if your light pollution takes up 50% of your well depth, you will have more room "left over" for actual signal if you are starting with 6,875 steps, as opposed to 3,000 steps. So a higher dynamic range is generally better than a low one, meaning that you should choose a camera with a high full well capacity, low read noise, or both.
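As a quick sanity check, the dynamic range arithmetic above can be reproduced in a few lines of Python (the numbers are the U32 values quoted in the text):

```python
def dynamic_range(full_well_e, read_noise_e):
    """Dynamic range in steps: full well capacity divided by read noise."""
    return full_well_e / read_noise_e

# Values measured for the U32 camera described above.
dr = dynamic_range(55_000, 8)
print(dr)  # 6875.0 steps above the read noise floor
```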
Note that dynamic range of a CCD camera is a metric that reflects two related properties- 1) the difference between the highest and lowest intensities, and 2) the number of steps captured between the highest and lowest intensities. However, it does NOT guarantee that all of those captured steps will be faithfully rendered in the process of converting the electron count of each pixel into a digital read out (i.e., the analog to digital conversion). Once captured, the ability to faithfully render these steps is related to another feature called "bit depth," which is a property of the camera's analog to digital converter. The analog to digital converter of most good quality CCD cameras already operates at a depth of 16 bits, meaning that it can convert the analog signal into 65,536 digital steps. Since my camera's dynamic range can capture 6,875 steps (i.e., 55,000 divided by 8), the 16 bit depth of my AD converter (which allows it to render 65,536 steps) is more than enough! Conversely, a bit depth of 12 would not be adequate for my camera, since it would only be able to render 2^12, or 4,096 steps, whereas my camera's dynamic range has captured 6,875 steps. From this description, it should be clear that dynamic range and bit depth are two different characteristics of the camera. You need a high dynamic range to capture faint signal without blowing out the highlights, and to capture fine gradations of intensity ("steps") in the image. However, you need a high bit depth to actually render all of those steps into a useable digital output.
A camera can have a high dynamic range but low bit depth, in which case you will not be taking full advantage of all of those fine gradations of intensity that the camera has captured. This wastes the dynamic range and is not optimal. Likewise, a camera can have a low dynamic range but high bit depth, in which case you will certainly take advantage of the dynamic range, but there just won't be many steps available for the AD converter to render. I have gone into this in greater detail than necessary, but I find that there is continued confusion about the difference between dynamic range and bit depth and hope that this clarifies the issue.
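To make the distinction concrete, here is a small Python sketch comparing the number of steps an AD converter can render at different bit depths against the 6,875 steps of dynamic range computed earlier:

```python
def adc_steps(bit_depth):
    """Number of digital steps a bit_depth-bit AD converter can render."""
    return 2 ** bit_depth

dr_steps = 55_000 / 8                # dynamic range of the camera, 6875 steps
print(adc_steps(16))                 # 65536 - more than enough for 6875 steps
print(adc_steps(12))                 # 4096  - too few; fine gradations are lost
print(adc_steps(16) >= dr_steps)     # True  - high bit depth renders every step
print(adc_steps(12) >= dr_steps)     # False - low bit depth wastes dynamic range
```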
Every CCD camera generates a dark signal that varies with exposure time and chip temperature. A CCD chip works by converting incoming photons of visible light into electrons, which are stored in the pixels and later converted to a digital signal. However, it turns out that electrons are not only produced by photons of visible light that strike the CCD chip. Dark signal refers to electrons that are generated in the absence of light, as a result of heat produced by the CCD camera chip itself. These "thermal electrons" create hot pixels which increase in intensity over the exposure duration, and which can be minimized by cooling the CCD chip. Some chips (like the Sony "Exview" chip used in the SXV-H9 camera) have very low dark signal. Most other chips like the Kodak series have enough dark signal to warrant dark frame calibration (meaning that the dark frame is subtracted from the light frame, in order to remove the hot pixels that represent the effects of thermal electrons). Although it's nice to not have to worry about dark frame calibration, it's not a big deal either. Almost all CCD cameras that use Kodak series chips will be temperature regulated, meaning that you can specify the desired chip temperature during imaging. This allows you to generate a series of dark frames at the same temperature (and duration), to be used as a dark frame master for future images taken at the same temperature. Creating a dark frame master library makes dark frame calibration relatively painless.
Non Anti-Blooming (NABG) versus Anti-Blooming (ABG) Cameras
Blooming is a phenomenon that occurs when electrons fill the well of a given pixel and spill over into adjacent pixels, causing a bright, vertical streak that destroys the data contained within those adjacent pixels. CCD chips that bloom are called "non anti-blooming gate" chips (NABG) and are typical of many Kodak KAF series chips (although note that some KAF chips do have anti-blooming gates). Anti-blooming chips do not have this problem. They contain an "anti-blooming gate" (ABG) that bleeds off electrons before they can spill over into adjacent pixels. ABG chips are typical of the Kodak KAI series and the Sony Exview series. Sounds like we should all be using ABG chips to avoid blooming, right? In order to appreciate why the choice isn't always so simple, take a look at the Quantum Efficiency (QE) curves of a NABG versus an ABG camera, and you will see the problem (QE curves are usually available on CCD vendor websites). QE is a measure of how efficiently a chip converts photons to electrons. Because the ABG technology takes up space in the pixel, less surface area is available for detecting photons. Thus, the QE of ABG chips is quite low compared to that of a NABG camera. So how do we choose between an NABG camera that blooms but has greater sensitivity, versus an ABG camera without blooming but with lower sensitivity? As explained below, the choice is largely dependent upon how long your subexposure times will be, and this in turn is dependent upon sky noise and read out noise.
When taking a subexposure, we want to maximize signal and minimize noise (i.e., maximize the signal to noise ratio). Noise is uncertainty in the true value of the pixel, which shows itself as variability in the results of a given measurement. For instance, if we expose the chip to a constant light source for a fixed period of time, measure the number of photons being captured by the pixel, and repeat this 10 times in exactly the same way, we will not always get the same result! The degree to which the results fluctuate is referred to as noise, and it can be quantified. Noise is related to 3 main effects:
1) "Photon Noise" is a property of the signal from collected light (i.e., the desired signal plus sky background). Photons arrive in packets, at irregular intervals, and it's this unpredictability in arrival times that generates their noise, which is also referred to as shot noise;
2) "Dark Noise" is a property of the signal generated by thermal electrons (mentioned above);
3) "Read Noise" is another layer of variability in the signal that is introduced by the chip amplifier responsible for converting the analog signal (i.e., the electrons in each pixel well) into a digital signal that our image processing program can use. In simple terms, if the pixel had 100 electrons to read out (and remember that this value itself is subject to the noise mentioned in points 1 and 2 above), then the analog to digital converter might read this out as 96 electrons instead of 100 (for example). This extra layer of variability is referred to as read noise.
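Because these three sources are independent, they combine in quadrature (the square root of the sum of squares). The sketch below is my own illustration, not from the text; the electron counts are made up, and it uses the standard rule that the shot noise of a Poisson signal equals the square root of its mean:

```python
import math

def total_noise(object_e, sky_e, dark_e, read_noise_e):
    """Combine independent noise sources in quadrature (all in electrons)."""
    photon_noise = math.sqrt(object_e + sky_e)  # shot noise of collected light
    dark_noise = math.sqrt(dark_e)              # shot noise of thermal electrons
    return math.sqrt(photon_noise**2 + dark_noise**2 + read_noise_e**2)

# Hypothetical pixel: 400 e- object, 2000 e- sky, 50 e- dark, 8 e- read noise
print(round(total_noise(400, 2000, 50, 8), 1))  # about 50.1 electrons RMS
```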
Photon noise is unavoidable and is largely contributed by sky background (light pollution). The longer you expose, the more photon noise you will have, but the greater the chance of acquiring your desired signal. So think of photon noise as a necessary evil. Dark noise is also unavoidable but can be minimized by cooling the CCD chip. That brings us to read noise. As stated above, read noise is a fixed amount of noise that is caused by the AD converter every time an image is downloaded from your camera into your computer. Every camera has a baseline amount of read noise (some less than others- check the manufacturer's specifications).
In contrast, photon noise is mainly due to sky background and is dependent upon the quality of your imaging site. At a given imaging site, sky background is directly proportional to the subexposure duration. If your subexposure time is too short, the sky background noise will be minimal (and so will your desired signal), and your image will be dominated by read noise (your exposure is said to be "read noise limited," which is not ideal). Conversely, if your subexposure time is long enough, sky background noise is very large compared to your read noise, and you essentially drown out the effects of read noise. Your image is said to be "photon noise limited," which is good.
In other words, by exposing your subs long enough so that the sky background noise overwhelms the read noise, you effectively minimize the influence of read noise in your image. Once you reach a subexposure duration where the read noise contribution becomes less than 5-10% of the total noise in the image, there appears to be no major advantage to prolonging the duration of the subexposure further. If you are interested in learning more about this topic, please check out my subexposure page for additional details.
It follows that subexposure duration is largely dependent on two factors, sky background noise and read out noise. We want to aim for a subexposure duration that will reduce the read noise contribution to about 5-10%. John Smith has done some nice work in this area, and his subexposure calculator is very instructive to use. I have also analyzed subexposure duration and provide an alternative subexposure calculator for this purpose. So what does this have to do with the choice between NABG and ABG cameras? Here are some "bottom line" observations that are useful to consider:
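The 5-10% criterion can be checked numerically. The sketch below is my own illustration (not either of the calculators mentioned above), and one common way to quantify the criterion: it computes how much read noise inflates the per-sub noise relative to an ideal zero-read-noise camera, for a made-up sky flux:

```python
import math

def read_noise_increase(sky_e_per_s, exposure_s, read_noise_e):
    """Fractional increase in per-sub noise caused by read noise,
    relative to a hypothetical camera with zero read noise (sky-limited)."""
    sky_e = sky_e_per_s * exposure_s
    sky_noise = math.sqrt(sky_e)                # shot noise of the sky signal
    total = math.sqrt(sky_e + read_noise_e**2)  # add read noise in quadrature
    return total / sky_noise - 1.0

# Hypothetical site: 10 e-/s/pixel of sky flux, 8 e- read noise.
# Longer subs push the read noise contribution toward insignificance.
for minutes in (1, 5, 10):
    pct = 100 * read_noise_increase(10, 60 * minutes, 8)
    print(f"{minutes:2d} min sub: read noise adds {pct:.1f}% to the noise")
```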
1. At a dark site, sky background noise is very low. In order to generate enough sky noise in an individual subexposure to drown out the read noise, the subexposure duration will therefore have to be quite long. Subexposure times in the range of 30-60 minutes may be necessary at a dark site in order to get the read out noise contribution down to 5-10% for most CCD cameras. At f ratios in the f4-f8 range, NABG cameras will bloom like crazy during a 30-60 minute exposure, making this type of camera impractical for a dark site. So a compromise has to be made for dark sites- because of the need for long subexposures (it's a need, not really a choice), an ABG camera is ideal in order to avoid blooming. The lower QE of ABG cameras is accepted as a necessary evil. Most imagers want a higher QE, but they accept the lower QE of an ABG camera in order to avoid the hassle of blooming. The fact that they are imaging at a dark site makes up for the lower QE in most cases, since there is a better chance for detecting faint signals that are not drowned out by sky noise.
2. For the rest of us who image in relatively light polluted sites, sky noise is much higher. Therefore, subexposure times in the range of only 5-15 minutes are usually sufficient to drown out the read noise contribution to less than 10% of total noise (it's really true- crunch some numbers using John Smith's calculator to convince yourselves). At my imaging site, where I've measured sky flux with the U32 on several different occasions, my typical subexposure times are in the range of 5 minutes for luminance, 8 minutes for RGB, and 10 minutes for Ha (6 nm bandpass), in order to achieve a read out noise contribution of around 10% or less. These exposure durations were not chosen by accident- they are chosen to achieve a low read out noise contribution at my imaging site. With relatively short subexposure times, the amount of blooming seen in a typical star field with a NABG camera is generally easy to manage with currently available software. And given the need for shorter subs based upon sky noise (it's a need, not really a choice), one could make the argument that it's better to have a chip with a high QE, such as a NABG camera, in order to maximize signal during a relatively short subexposure. It's ironic that the presence of light pollution makes a more sensitive NABG camera a viable option, whereas those under dark skies often must use a camera that is intrinsically less sensitive (ABG). Despite all of these considerations, it is not necessary to use a NABG camera just because your subs will be relatively short. You could certainly choose an ABG for light polluted skies, realizing that you will need a longer cumulative exposure to compensate for the lower QE of such a camera. Still, an ABG camera will permit you to take photos of objects such as the Pleiades and M42 without worrying about blooming (which will occur with a NABG, even at short exposures, for these types of bright targets).
Putting it all together
Pixel size / Image scale:
If your seeing is average (applies to most of us), consider cameras with pixel sizes that will yield an image scale in the range of 1.0-3.5 arcsec/pixel. If you have a good mount that guides well, aim for the lower end of this image scale range in order to maximize resolution. Much below 1.0 arcsec/pixel is usually wasted effort for most of us, since conditions are often seeing-limited. A bit higher than 3.5"/pixel can be fine as well, as long as you don't mind a softer look to your images. These are just general guidelines- don't get hung up on any of this. You can produce a great astroimage even if you are not using the ideal Nyquist value for image scale, although staying within this general range (1.0-3.5 arcsec/pixel) is a good idea for most of us. These rules don't apply for those with great seeing, where optimal image scales would be well under 1.0 arcsec/pixel.
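Image scale follows from the standard plate-scale formula: scale (arcsec/pixel) = 206.265 x pixel size (in microns) / focal length (in mm). A quick sketch (the formula is standard, but the camera and scope numbers here are made up for illustration):

```python
def image_scale(pixel_um, focal_mm):
    """Image scale in arcseconds per pixel (206,265 arcsec per radian)."""
    return 206.265 * pixel_um / focal_mm

# Example: 6.8 micron pixels on a 1000 mm focal length scope
print(round(image_scale(6.8, 1000), 2))  # about 1.40 arcsec/pixel
```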
Dynamic range:
All things being equal, get a camera with a higher dynamic range (full well capacity divided by read noise).
Dark signal:
Don't worry too much about this, given the quality of modern-day, temperature-regulated cameras. If it's there, you will remove it with dark frames. If it's not, you won't have to. Certainly, if you have two cameras that are equivalent in all other aspects (image scale, dynamic range, read noise, QE, etc.), then get the one that has the lower dark signal. However, after reading this section and looking at camera specifications, you will see that it's not always that simple. A camera may have a higher dark signal, and yet have other advantages such as better dynamic range and higher QE that make it a more attractive choice, despite the need to dark subtract.
NABG versus ABG:
If you image at a reasonably light polluted site which requires short subexposures, you have a choice of either NABG or ABG cameras. With NABG cameras, the QE will be higher, and the blooming will be manageable for typical star fields over relatively short subexposure durations. For me, a NABG camera was a logical choice for my second camera, especially since I was interested in a chip with high Ha sensitivity, and since my subexposure times would be short at my imaging site. I've been pleased with the Ha sensitivity of the KAF3200 chip (U32 camera), and it's related in large part to the NABG feature. If you are just starting out and plan to take lots of images of bright star clusters like the Pleiades, then an ABG camera is perhaps a better choice, since you won't have to deal with blooms (my first camera was the SXV-H9, which is ABG). You can always take longer cumulative exposures to compensate for the lower QE of an ABG camera, if you have the time and patience. If you are at a darker site where you must use longer subexposures, an ABG camera is the way to go.
Chip size:
The CCD chip dimensions and your scope's focal length will dictate the field of view of your images. Ron Wodaski's calculator mentioned above will compute the field of view for a given camera/scope combination. The formula is easy: FOV (in degrees) = 57 x CCD chip dimension (in mm) / the scope's focal length (in mm). However, a bigger chip is not always better. Remember that with a larger chip you will be more likely to have problems with 1) field curvature if your scope does not provide a flat field over the entire surface area, 2) vignetting, 3) camera sag/flexure due to greater camera weight, resulting in non-orthogonality of the chip to the focal plane, which introduces optical aberrations at the edge of the field (do not underestimate the frustration that can occur due to this last problem). If you have to crop out a significant amount of the image due to oblong stars or severe vignetting at the periphery of your field, you would have been better off with a smaller-sized chip in the first place (less money, less weight, smaller files, etc.). Make sure that you talk with the telescope manufacturer, or e-mail astroimagers who are using specific types of telescopes, to determine whether a given scope will support the chip size of the camera that you are considering.
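The field of view formula above is easy to script; for example (the chip dimension and focal length here are illustrative, not from the text):

```python
def fov_degrees(chip_mm, focal_mm):
    """Field of view in degrees, using the approximation from the text:
    FOV = 57 x chip dimension (mm) / focal length (mm)."""
    return 57.0 * chip_mm / focal_mm

# Example: a 14.9 mm wide chip on a 1000 mm focal length scope
print(round(fov_degrees(14.9, 1000), 2))  # about 0.85 degrees
```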
Monochrome versus one-shot color:
I didn't discuss this above, but will mention it briefly now. One-shot color CCD cameras have lower sensitivity and resolution compared to monochrome cameras. The Bayer matrix present in one-shot color cameras is responsible for this problem, and there are many websites that discuss this issue in more detail. However, it's possible to take nice images of relatively bright objects with one-shot color CCD cameras, and you wouldn't need to invest in a separate filter wheel and costly filters. Note that one-shot color cameras seem to be very susceptible to the effects of light pollution, and most require some type of LPS filter in the imaging train, which will further decrease sensitivity. Most CCD imagers use monochrome cameras, but the one-shot color CCD camera is a viable alternative, as long as you realize the potential downsides (decreased resolution being the most significant problem in my view).