Most of the
objects that we image have a flux
(however you wish to measure it, e.g., photons per minute per unit
surface area, electrons per minute per pixel, etc.) that is much lower
than sky flux. And yet we can easily image even faint objects. The
key is to get the noise level low enough so that the faint signal
rises above it (i.e. so that the S/N ratio is greater than 1). That
is the crux of the analysis presented on my webpage.
Imagine the signal acquired by a single pixel during a 10 minute sub,
with a sky flux of 1000 e/min, and an object flux of only 5 e/min. In
10 minutes, that pixel would accumulate 10000 electrons from sky glow,
and 50 from the object, for a total of 10050 electrons. The question
is, can we detect those extra 50 object-specific electrons (i.e., can
we tell the difference between 10000 and 10050)? In a single sub, we
cannot. Ignoring the effects of read noise and dark noise for the
sake of simplicity, the S/N ratio would only be 50 (the object-specific
signal) divided by the noise contributed by sky glow (sqrt(10000) = 100),
or 50/100 = 0.5 (i.e., it's less than 1, so the signal is buried in the
noise). But if you stack 10 subs, the noise from sky glow would be
reduced to sqrt(10 * 10000)/10, or 31.6. Now the S/N ratio is
50/31.6, or 1.58, and you begin to appreciate the presence of the object.
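The stacking arithmetic above can be sketched in a few lines, using the same illustrative numbers (shot noise only, no read or dark noise):

```python
import math

# Illustrative numbers from the example above.
sky_rate = 1000      # electrons per minute per pixel (sky glow)
obj_rate = 5         # electrons per minute per pixel (object)
sub_minutes = 10     # length of each sub

def stacked_snr(n_subs):
    """S/N after averaging n_subs subs, shot noise only."""
    signal = obj_rate * sub_minutes            # object electrons per sub
    sky = sky_rate * sub_minutes               # sky electrons per sub
    noise = math.sqrt(n_subs * sky) / n_subs   # averaged sky shot noise
    return signal / noise

print(stacked_snr(1))    # 0.5  (signal buried in the noise)
print(stacked_snr(10))   # ~1.58 (object begins to show)
```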
And a follow-up to this:
PS- The numbers
in my post were only used for illustrative purposes.
Also, in addition to stacking, the other obvious ways to decrease sky
noise are to use a narrowband filter, and/or to move to a dark site
(this isn't practical for most of us). Both will reduce sky flux, and
therefore sky noise. In each of these examples, I am assuming that
your subs are photon noise limited.
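As a hypothetical illustration of the same point: if a narrowband filter or a darker site cut the example's sky flux by a factor of 20 (the 50 e/min figure below is made up for illustration), the single-sub S/N would improve by sqrt(20) and rise above 1:

```python
import math

# Single 10-minute sub, shot noise only; the 50 e/min "filtered" sky
# rate is a hypothetical number chosen to illustrate the effect.
def single_sub_snr(sky_rate, obj_rate=5, minutes=10):
    signal = obj_rate * minutes
    return signal / math.sqrt(sky_rate * minutes)

print(single_sub_snr(1000))  # 0.5  (bright sky: signal buried)
print(single_sub_snr(50))    # ~2.24 (20x less sky flux: detectable)
```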
These are examples of noise reduction as opposed to signal
improvement, but nevertheless they will increase S/N and permit
detection of an object whose light flux is below that of sky flux. At
my imaging site, that is key.
Given how faint our objects are, and the limited ways that we can
improve on signal (higher QE chip, larger aperture), noise reduction
gives us the most control over improving the S/N ratio.
The 16803 ABG
will bloom. It's just a question of how many electrons
are generated in the well by a given star, what the well depth is, and
how much ABG protection there is.
I took a quick look at the 16803 specs on the Kodak site to answer
your question in a more quantitative way. The 16803 chip has a well
depth of 85,000 electrons and 100x ABG protection. 100x antiblooming
protection means that once you fill your well, one out of 100
remaining electrons will spill over into the adjacent pixel, and 99
will be removed through the ABG process. In the same way, 1000x
antiblooming protection means that once you fill your well, one out of
1000 remaining electrons will spill over into the adjacent pixel, and
999 will be removed through the ABG process.
So if you are using the 16803 chip to image a star that generates
200,000 electrons for a given pixel, 85,000 electrons will fill the
well (i.e., the full well capacity), and you have 115,000 electrons
left over. But of these 115,000 electrons, only 1,150 (1%) will remain
to spill over, whereas 113,850 (99%) will be removed. Depending upon
the brightness of the star and the status of adjacent wells, you might
see a bloom, but you get the general idea.
This question prompted me to do the same analysis for the 11000 chip,
since that's what I use. This chip has a slightly lower full well
capacity of 60,000 but better ABG protection of 1000x. So for the
same 200,000-electron star, 60,000 electrons would fill the well,
140,000 would be left over, only 140 of these (0.1%) would remain to
contribute to a bloom, and 139,860 (99.9%) would be removed. This is a
good example of how more aggressive ABG protection can compensate
for a slightly lower full well capacity in the prevention of blooming.
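The blooming arithmetic for both chips can be sketched as follows, using the full well and ABG figures quoted above and the simplified 1-in-N spill model from this post:

```python
# Simplified spill model: of the electrons in excess of full well,
# 1 in "abg" spills to an adjacent pixel and the rest are drained
# by the antiblooming structure.
def spill(total_e, full_well, abg):
    excess = max(0, total_e - full_well)
    return excess // abg   # electrons that survive to cause a bloom

print(spill(200_000, 85_000, 100))    # 16803: 1150 electrons spill
print(spill(200_000, 60_000, 1000))   # 11000: 140 electrons spill
print(spill(50_000, 85_000, 100))     # well not full: 0, no bloom
```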