These are the essential parts of characterizing a DSLR camera; perhaps they apply to CCDs as well. Edit as you please. (I can dig up a few references on CCD characterization if anyone needs them, Brian). We might also want to reference these pages:
CCD Characterization normally consists of:
Spatial Frequency Response
Signal to Noise Ratio (including a discussion of Gain)
Bayer-pattern (e.g. RGGB) [note HBE: I think we need this for commands like "split_cfa a b c d" in IRIS]
2 - Linearity (RP note)
In the stellar photometry process, non-linearity of the camera can affect the magnitude difference between the comparison star and the target star in several ways: saturation, upper clipping, the shape of the response, bias or clipping at the foot of the response curve... The impact on the final result also depends on the magnitude difference between the stars: a larger magnitude difference makes the linearity of the camera's light response more critical.
(Here a response curve will be inserted)
In any case it is important to know the light response curve of the camera, but accurate light measurement is not an easy job and normally needs expensive equipment and a well-designed facility. But let's try!
2.1 - Saturations:
The ADU count of the brightest pixels of the brightest star can be limited/clipped or reduced by saturation of either the imager (light sensor) or the ADC (analog-to-digital converter) range. That range is usually 12 or 14 bits on recent DSLRs, i.e. a count of 4095 or 16383 ADU. The true range can be reduced by an offset at the black level. Some brands don't use such an offset; their black level is at zero. Others (e.g. Canon) often set the offset of the 12-bit range at 128 and that of the 14-bit range at 1024 or 256. That "by design" offset shall be subtracted from any ADU count to restore the black level to zero. A DC component of the dark-current signal should also be subtracted in the case of long exposures. In general up-to-date imagers don't show a large DC component for short exposures (up to several seconds at 20°C, a couple of ADUs only). Such an offset design can be seen as reducing the range, but it ensures that any negative part of the signal (e.g. read noise) will not be clipped and that the background is well determined. This is also very helpful when calculating the noise at black level.
The imager saturation is due to the maximum electron capacity of the pixel, the so-called well depth. It occurs at the lowest ISO setting of the camera (i.e. the lowest gain of the electronic amplifier chain from the pixel photodiode, or photogate, to the ADC). That saturation is more or less progressive depending on the imager design (anti-blooming) and typically fully saturates at about 80~90% of the range. The typical well depth of present imagers (CCD or CMOS) is about 900 electrons per square micron of pixel, i.e. about 24000 electrons for a pixel of a typical 12-megapixel APS-C format.
The ADC saturation occurs at higher ISO settings and is usually just a raw clipping of the signal at its range maximum. It corresponds to an electron charge in the pixel equal to the well depth divided by the extra gain (the ratio of the ISOs). Clearly, at such high ISO settings the maximum SNR (signal-to-noise ratio) can't be reached.
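The relation between well depth, pixel size and ADC clipping described above can be sketched numerically. A minimal Python sketch; the pixel pitch and ISO values are illustrative assumptions, not measurements of any particular camera:

```python
# Illustrative numbers only: well depth per area is the typical figure
# quoted above; the pixel pitch is an assumed ~12 Mpx APS-C value.
well_depth_per_um2 = 900                 # e- per square micron
pixel_pitch_um = 5.2                     # assumed pixel pitch in microns
well_depth = well_depth_per_um2 * pixel_pitch_um ** 2
print(f"full well ~ {well_depth:.0f} e-")        # ~24000 e- as quoted above

# At a higher ISO the ADC clips before the well fills: the clipping
# charge is the well depth divided by the extra gain (ratio of ISOs).
base_iso, iso = 100, 400
extra_gain = iso / base_iso
e_at_adc_clip = well_depth / extra_gain
print(f"ADC clips at ~ {e_at_adc_clip:.0f} e- at ISO {iso}")
```

This also shows why the maximum SNR cannot be reached at high ISO: the usable electron charge shrinks with the extra gain.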
2.1.1 - Testing saturations
The ADU level of both saturations is easy to determine (the difficult part is the linearity of the range).
- Point the camera (preferably on a tripod) at a white target (e.g. a good non-glossy photo paper). That target should preferably be illuminated by very stable daylight (e.g. direct sun without haze, no moving objects in the area...) as uniformly as possible.
Various light sources powered from the mains are very unstable and would need a regulator.
Another good solution is to illuminate the imager, without a lens, using a white LED powered by a battery. It is very stable. A diffuser should be put in front of the LED to eliminate its radiation pattern (simple white paper works well). A distance of one meter provides enough illumination uniformity on the imager.
In any case, ensure that the illumination is stable: our vision is not very sensitive to light variations, but the ADU count is!
Testing based on varying exposure time:
- Set the camera to raw mode, fully manual, ISO at minimum, color balance at "sunny"; disable any lighting-control option (D-Lighting?); display the color histogram if available.
- Look for the shutter speed and aperture that start to saturate the histogram. Set the aperture as large as possible, preferably the maximum (to avoid grain and repositioning errors of the stop), but such that you can cover a range of speeds starting at 200% of the histogram saturation level and reaching down to an exposure level of 5% of saturation.
- To enable the estimation of shot noise and gain at the same time, it is helpful to defocus somewhat (but not too much). That ensures that no (or not too much...) grain or non-uniformity of the target is added to the noise.
- Next, take shots at each available speed in the 200% to 5% range.
- Take note of the picture number and the corresponding speed.
- Afterwards you can set the speed somewhat below the saturation of the histogram and take shots at several other color balances. During the analysis of the raw picture files, this will enable you to check the red/blue ratios and ensure the "raw" format of the camera is a "pure raw", at least not affected by any color-balance processing in the camera (or by the "raw" decoding software! Color balance and other rescaling info are stored in the raw file and could be applied by such software).
- Next, repeat the series of shots at further ISO settings, at least the second one, to check the saturation of the ADC (normally the full 12- or 14-bit capacity minus a few ADUs). The speed values should be shifted according to the new ISO setting.
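The required speed shift follows directly from the ISO ratio: doubling the ISO doubles the gain, so the exposure time must be halved to reach the same ADU level. A small sketch with hypothetical numbers:

```python
# Keeping the same ADU level when raising the ISO: scale the exposure
# time by the inverse of the ISO ratio. Numbers are illustrative only.
base_iso, base_speed = 100, 1 / 60        # base setting: 1/60 s at ISO 100
for iso in (200, 400, 800):
    speed = base_speed * base_iso / iso
    print(f"ISO {iso}: ~1/{round(1 / speed)} s")
# → ISO 200: ~1/120 s, ISO 400: ~1/240 s, ISO 800: ~1/480 s
```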
Testing based on the gain determination:
This is even simpler if you are not interested in getting the light response curve. See the signal-to-noise-ratio section.
2.1.2 - Saturation data analysis (also bias and clipping)
2.1.2.1 - Generic process
- Subtract the nominal camera offset (if any) and the DC component of the dark-current signal; the result should be a signed integer (Equation 1).
- Compute the average value of the pixels of each RGGB plane: this is the intensity in ADUs at the center of each picture (Equation 2).
- Make a bilinear interpolation of the 101 x 101 residues,
- subtract the interpolated values from the residues (a fine flat correction without extra noise),
- take the square (Equation 3),
- divide by the number of pixels (10201),
- take the square root.
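The steps above can be sketched in Python with NumPy on a synthetic 101 x 101 crop. This is only a sketch under assumptions: a least-squares bilinear surface fit stands in for the "bilinear interpolation of the residues", and real data would come from one RGGB plane of an offset-corrected raw frame.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 101x101 color-plane crop: smooth illumination gradient plus
# Gaussian noise (a stand-in for a defocused flat exposure).
y, x = np.mgrid[0:101, 0:101]
true_sigma = 12.0
crop = 5000 + 0.5 * x + 0.3 * y + rng.normal(0, true_sigma, (101, 101))

signal = crop.mean()               # Equation 2: mean intensity in ADU

# Fine flat correction: fit a bilinear surface a + b*x + c*y + d*x*y and
# subtract it, so slow illumination gradients don't inflate the noise.
A = np.column_stack([np.ones(x.size), x.ravel(), y.ravel(), (x * y).ravel()])
coef, *_ = np.linalg.lstsq(A, crop.ravel(), rcond=None)
residues = crop.ravel() - A @ coef

# Equation 3: RMS of the residues over the 10201 pixels.
sigma = np.sqrt((residues ** 2).sum() / residues.size)
print(f"signal ~ {signal:.0f} ADU, noise sigma ~ {sigma:.1f} ADU")
```

With real frames, repeating this at each exposure level yields the signal and noise points used below for the response curve and the gain.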
2.1.2.2 - Using IRIS... to come next time!!
2.1.2.3 - Analysis of results
At this point you may discover that your light response curve shows several bumps and looks quite irregular. This is because the actual exposure speeds are not at the value displayed by the camera, either in the Exif or in the raw metadata! To date this is the case for all cameras I know... It is not much of an issue when just determining the saturation levels, but it is very annoying when determining whether the core response of the imager is truly linear. We will review tricks to overcome that problem hereafter.
(insert here a couple of exemples of 100 ISO light response curves)
The curves for the lowest ISO will show you the imager saturation level (about 13578 ADUs in example x). Except for the bumps, the core of the response usually looks linear. A more detailed analysis is needed to determine whether a section of the upper part shows some deviation and should be avoided if high accuracy is targeted.
The foot of the curve should point to the zero of the graph; otherwise an extra offset would be suspected and would need more accurate analysis (hereafter). Such a (usually small) offset introduces increasing errors on faint stars if its level fully clips the background. That situation is not usual with DSLRs or DSCs but often occurs with small cameras (webcam-like) used by amateur astronomers.
The red and blue responses are usually well below the two green ones; their relative levels depend on the temperature of the light source being used (daylight curve to be added). It is interesting to note the level ratios for daylight as a white reference for further use.
The "raw" levels of red and blue are usually near 50% of the green level, this is due to the filters transmission (we will see in ... the electronic gain is well the same) Here we could note a raw Bayer is in fact mostly green: two greens against weak blue and weak red !
The curves for ISO settings other than the base one show a clipping at the ADC end of range, near 16383 in the example (curves for 200 ISO to be added).
..... following should be:
2.2 - Linearity of the useful response
If the objective is a 0.01 magnitude uncertainty over a range of 2~4 magnitudes, it means the relative precision of the intensity response of the overall system shall be better than 1%. This is difficult to test, and even impossible without a minimum of measuring equipment. It is obviously beyond the capability of an amateur having no access to such facilities. For this reason I prefer to postpone that section and focus now on noise questions.
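The 1% figure comes straight from the magnitude definition: a fractional flux error eps maps to a magnitude error of 2.5 * log10(1 + eps), approximately 1.0857 * eps for small errors. A one-line check in Python:

```python
import math

def mag_error(eps):
    """Magnitude error caused by a fractional flux (linearity) error eps."""
    return 2.5 * math.log10(1 + eps)

# A 1% non-linearity already produces ~0.011 mag of error:
print(f"{mag_error(0.01):.4f} mag")   # → 0.0108 mag
```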
Anyhow, tests that have already been done show that at least the lowest 75% of the range of recent DSLRs is well capable of such linearity.
2.2.1 - Testing technique
2.2.2 - Linearity data analysis
5 - Signal-to-Noise Ratio:
The SNR is a common concept in electronics to quantify the quality of the useful signal as perturbed by unwanted signals. These include random ones due to the thermal agitation of electrons, quantum fluctuations, or technology-related effects like dark currents. Here we consider only those cases, not the spurious signals that could also interfere.
Two of the sources of noise are the fixed patterns due to dark currents, bias, and pixel sensitivity dispersion (part of the "flat"). They have been discussed in the "dark currents" section. They are assumed to be negligible or already processed, and are not considered in this section.
The two other noise types are pure random signals, having no spatial or temporal correlation. Given today's technology, they are nearly the only ones to consider for DSLR stellar photometry.
One is the classical thermal noise of electronic amplifiers (the so-called Johnson-Nyquist noise, with a Gaussian distribution). Reading the electron charge of each pixel needs a very sensitive amplifier; this noise is generated in the input circuits of that amplifier. It occurs at each readout: each shot gets an independent "draw" of it, regardless of settings except ISO (the gain of the amplifier). Its pattern in the picture is different at each shot. It is often named "read noise".
The second, and predominant in most cases, is the so-called "shot noise", the quantum fluctuation of particles. Its distribution follows Poisson's law: its sigma (standard deviation) is just the square root of the number of electrons accumulated in the pixel.
Both noises, generated at the pixel and at the input of the amplifier, are (like the useful signal, i.e. the number of electrons) represented at the output of the ADC by a proportional ADU value. In the following we will call the ratio of ADUs per electron the "gain" of the amplification and analog-to-digital conversion chain. The ISO setting of the DSLR is nothing else than that "gain" setting.
Typical DSLR or DSC values are 0.2 ~ 1 ADU/e- at 100 ISO and proportional values at other ISO settings.
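The gain defined above can be estimated with the classical photon-transfer (mean-variance) method: since the shot-noise variance in electrons equals the signal, the variance in ADU grows linearly with the mean in ADU, and the slope is the gain. A simulated sketch; the gain, read noise and sample counts are assumptions, not data from a real camera:

```python
import numpy as np

rng = np.random.default_rng(1)
gain = 0.5        # assumed ADU per electron at some ISO setting
read_adu = 3.0    # assumed read noise in ADU

# Simulated photon-transfer series: mean vs. variance at several levels.
means, variances = [], []
for electrons in (1000, 4000, 8000, 16000, 32000):
    frame = (rng.poisson(electrons, 20000) * gain
             + rng.normal(0.0, read_adu, 20000))
    means.append(frame.mean())
    variances.append(frame.var())

# Slope of variance vs. mean (both in ADU) is the gain in ADU/e-.
slope, _ = np.polyfit(means, variances, 1)
print(f"estimated gain ~ {slope:.3f} ADU/e-")

# The read noise itself is best measured on zero-light (bias) frames:
bias = rng.normal(0.0, read_adu, 20000)
print(f"estimated read noise ~ {bias.std():.1f} ADU")
```

In practice the means and variances come from pairs of flat frames at each exposure level, which also cancels residual flat-field structure.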
5.1 - Instrumental SNR characterization:
We will consider two aspects of the SNR: first (the simplest case) the "instrumental SNR" at each pixel, and next the SNR of a stellar photometry process (much more complex).
5.1.1 - Read Noise (and offset/bias):
5.1.2 - Shot Noise:
5.1.3 - Gain Calculation (and saturation level)
6 - Bayer's CFA Pattern
In DSLRs or DSCs the R, G, B color information comes from selected pixels covered by microscopic color filters forming a Color Filter Array (CFA). Original work on the configuration of such filters was patented in the 1970s by Bayer. Further filter configurations were later published by Kodak and others. Most common are the RGGB (often Canon) or BGGR (often Nikon) configurations:
RGGB means side by side groups of four pixels like:
The image is sampled twice as often in green as in red and blue. In addition the red and blue filters have a much lower transmission than the green ones (in most cases about 50%). As a result the green signal is by far dominant in a Bayer array. That more or less fits the sensitivity of human vision: the CIE human vision sensitivity curve has its maximum in green and a strong decay at red and blue wavelengths (R 30%, G 59%, B 11%).
(Insert ATT_6-0_Bayer_CFA.png here)
We should also note that the fill factor is low in green and very low in red and blue, due to the CFA pattern but also to the separation between pixels: there is a lot of inactive space between the sensitive sites. For this reason it is important to defocus the star image (enlarging the Point Spread Function, PSF) onto a large number of pixels (50~100) to avoid errors due to such sparse sampling. Defocusing also strongly increases the electron accumulation capacity over a star image, which improves the SNR. Some star trailing also helps.
There are other types of patterns, including ones using the YMC color system (yellow, magenta, cyan) or the Foveon (superposed photosites). YMC should not be found in DSLRs or recent high-end DSCs.
According to notes from Dave Coffin, nearly all DSLRs/DSCs covered by dcraw (nearly 400!) have RGB filters arranged in either RGGB, BGGR, GRBG or GBRG configuration (the two greens on a diagonal). Only a few use specific configurations (old PowerShots).
6.1 - Pattern determination:
Usually this is not documented by the camera brand. The simplest way is to use "dcraw.exe" from Dave Coffin. The command to run at the Windows console to get the info is:
> dcraw -i -v <raw file>   (with current dcraw versions; the result lists various parameters, including the RGGB pattern)
http://cybercom.net/~dcoffin/dcraw/ (source code only); various dcraw.exe builds are available on the web for various OSes.
(insert " ATT_6-1_DCRAW_info.txt " here )
Another way is to use the raw file reading of IRIS (which also uses dcraw.exe). Select a small section of the picture by left-clicking and dragging the mouse, then right-click. The "window" command of the pop-up menu will extract that small section, which you can then strongly enlarge with the zoom. This will show you the pattern of the imager, but as grey levels only (no color, sorry!). Those grey levels are the only way we have to identify the filter color under IRIS.
In a white color-balanced picture area, the bright pixel lines on a diagonal are the greens. The dimmer pixels on the next diagonal line are the blues and reds (you may need to adjust the "visualization thresholds"). If another picture area has been exposed with a strong blue or red dominant, you will recognize the corresponding brighter pixels.
Next, get back to the "X1" whole picture. The Bayer groups start at the upper-left corner of the picture. Push the zoom to maximum. In most patterns the first pixel at the corner is either R or B and is the first of the conventional sequence RGGB or BGGR. A picture with a strong red or blue dominant color will enable you to identify that first pixel's color.
Note: do not refer to the X and Y coordinates of IRIS; their origin is not "0 0" at the upper-left corner, so their relation to the CFA is unclear.
Case of the " SPLIT_CFA C1 C2 C3 C4 " command of IRIS:
That command splits a four-component raw Bayer file loaded under IRIS into four individual "Cn.fit" files. The output is the raw values from the ADC, but nothing says which file is which color.
Due to the way IRIS numbers the pixel lines, and the fact that some cameras crop the number of lines to an even value and some to an odd one, it is somewhat of a question mark which Cn.fit is R, G or B (with default INI settings).
For the Canon 450D and Canon G9, for n = 1 to 4 the order is G, B, R, G (not usual!). For old Nikons it is G, R, B, G (I have no recent one at hand).
The definition of the IRIS coordinate and CFA system is shown in ATT_6-1 (to be inserted here).
The simplest way to determine the Cn.fit color order is to use a known color image and look at the relative values in the Cn.fit files, either using the "statistics" command of the pop-up menu (load the file, left-click and drag to select a picture area, right-click and select the command) or the "photometry 2" (or 3) tool under "Analysis - Aperture photometry".
The two Cn.fit files having similar values (usually within a few %) are the two G planes of the Bayer pattern.
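The same check can be mimicked outside IRIS: split the Bayer mosaic into its four quarter-size planes and compare their mean levels; under roughly white light the two planes with the highest (and mutually similar) means are the greens. A synthetic Python sketch; all pixel values are simulated, and the plane-to-color mapping of a real camera still depends on its cropping, as noted above:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic RGGB mosaic under "white" light: green planes near twice the
# red/blue level, as described above. Real data would come from a raw file.
h, w = 8, 8
mosaic = np.empty((h, w))
mosaic[0::2, 0::2] = rng.normal(1000, 10, (h // 2, w // 2))   # R
mosaic[0::2, 1::2] = rng.normal(2000, 14, (h // 2, w // 2))   # G1
mosaic[1::2, 0::2] = rng.normal(2000, 14, (h // 2, w // 2))   # G2
mosaic[1::2, 1::2] = rng.normal(1000, 10, (h // 2, w // 2))   # B

# Equivalent of SPLIT_CFA: four quarter-size planes, order unknown a priori.
planes = [mosaic[0::2, 0::2], mosaic[0::2, 1::2],
          mosaic[1::2, 0::2], mosaic[1::2, 1::2]]
means = [p.mean() for p in planes]

# The two planes with the highest, mutually similar means are the greens.
order = np.argsort(means)
print("plane means:", [f"{m:.0f}" for m in means])
print("likely greens: planes", sorted(order[-2:].tolist()))
```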
You probably use " eta AUR " as comparison star (to see ATT_6-0) That star is very blue ( -0.18 ), its brithest pixels are usually the blue ones (somewhat above the green, even in a pure raw Bayer), the red ones are very dim. For " zeta AUR " blue and red of raw Bayer are much dimmer than green and blue is the dimmest (a very red star at 1.22) Easy to identify Cn.fit color with it.
Case of the IRIS GUI commands "CFA conversion" & "split RGB":
These commands work after a "raw" picture file has been loaded. They generate three FIT files, expected to be R, G, B.
A certain number of DSLR types can be selected in the dialog box under the "camera" button of the menu bar. This works well for the DSLR types in the list.
Other types may result in inverted colors. It is then easy to test the "CFA conversion" command using any raw colored image: select a camera model of the same brand that seems close to the type used, apply the color balance, apply the conversion; if the picture shows strong color errors, try another camera model until it is OK.
We should note that "split_RGB" generates full size images in each color and obviously uses interpolation techniques to do it. Original ADC values are no more available with this solution. The larger size of files (X4) is also not very practical (at least for old computers... )
"split_cfa" seems preferable for photometry, its output is the raw ADUs from the ADC.