# Formulae for extinction and transformation coefficients in the intermediate reduction spreadsheet

I've only just recently started browsing Citizen Sky. I have read the DSLR photometry tutorials, and looked at the beginner and intermediate reduction spreadsheets.

I have a bit of familiarity with data reduction, as I have been doing two-colour photoelectric photometry for a couple of years. I have two questions, both relating to the intermediate reduction spreadsheet, which I downloaded two days ago.

When I look at cells B42, C42 and D42 (the calculated values for the extinction coefficient, transformation coefficient and zero point respectively), they all contain the same formula: {LINEST(I19:I24,F19:G24,1,1)}. Incidentally, the same formula is repeated in cells B43, C43 and D43, which contain the calculated errors for the same parameters. My question is: are these formulae correct?

The second question is a more basic one relating to the procedure for determining extinction coefficients, as described in the DSLR tutorial, and using the intermediate reduction spreadsheet. Unless I've read them incorrectly, the tutorial and the spreadsheet give the impression that an extinction coefficient can be calculated from a "single" image (in reality, I presume, a series of stacked images) of the six comparison stars. The classical procedure for extinction coefficients, on the other hand, requires the measurement of either (1) an A0 star, or (2) a comparison star for differential photometry, with instrumental magnitudes determined at intervals during the evening as the star passes through a changing air mass. Could someone clarify the procedure for using the intermediate reduction spreadsheet for this purpose?

Hi Roy and Heinz, Indeed the formulation for the extinction coefficient seems odd, but as Heinz pointed out above, the large field of view gives us an advantage in that we cover a fairly big angular extent of the sky. So if you look at the calibration equation: (V - v)_i = -k' X_i - e (B-V)_i - Z (following the convention discussed on this page: http://www.citizensky.org/content/calibration-intermediate) and imagine you have a series of stars that must all satisfy this requirement, you have a system of equations that needs to be solved. Because the stars are all in the same field of view, taken with the same camera under the same conditions, many of the classical problems of photometry (i.e. changing sky conditions) are at best second-order effects (with the exception of clouds). From linear algebra, you must have at least as many equations as unknowns, so with -k', e, and Z being the unknowns, we require three stars. The larger their difference in airmass and in color the better (although I'm sure there is a limit, we just haven't found it yet). Sadly, any observational method has errors, so with just three stars you only get a close approximation to the "right" values for the coefficients and zero point; that is why we ask people to use at least six stars that bracket the target object in airmass and color. So far our single-field-of-view tests have shown remarkable consistency, even against CCD and PEP data (see the April edition of Sky & Telescope for an example). Interestingly, this calibration technique permits observations in non-dark-sky areas too. I think it was Tom who had a few snapshots with a full moon near the FOV; his extinction coefficient was way off from the normal value, but finding the best fit to the data (through the linear least-squares estimation in the spreadsheet) allowed him to obtain acceptable parameters given the dramatically different observing conditions.
If my memory serves me correctly, his estimate was within uncertainties of the PEP values that night. We do have one problem, though: where does the (B-V) for the target object come from? For small variations in color we're OK, but for things that change color dramatically (novae, etc.) our calibration method may not work well. The DSLR team has been looking into this issue, and I think Roger (right, Heinz?) has a nice solution using DSLR RGB... we're just waiting for him to publish his results. Clear Skies, Brian
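For anyone who wants to see the system of equations in action, here is a minimal sketch in Python (this is not the spreadsheet's actual code, and all star values below are invented for illustration) that solves the calibration equation (V - v)_i = -k' X_i - e (B-V)_i - Z for the three unknowns via least squares:

```python
import numpy as np

# Hypothetical measurements for six comparison stars in one field:
# airmass X_i, catalog color (B-V)_i, and (V - v)_i, the catalog minus
# instrumental magnitude. Numbers are fabricated for this demo only.
X  = np.array([1.20, 1.22, 1.25, 1.27, 1.30, 1.33])
BV = np.array([1.40, 0.05, 0.90, 0.35, 1.10, 0.62])
Vv = -0.25 * X - 0.12 * BV - 21.0   # fabricated "truth": k'=0.25, e=0.12, Z=21.0

# Calibration model: (V - v)_i = -k' X_i - e (B-V)_i - Z.
# Build the design matrix [X, (B-V), 1] and solve the overdetermined system.
A = np.column_stack([X, BV, np.ones_like(X)])
coef, *_ = np.linalg.lstsq(A, Vv, rcond=None)
k_prime, e, Z = -coef          # minus signs come from the sign convention above
print(k_prime, e, Z)           # recovers ~0.25, ~0.12, ~21.0
```

With six stars and three unknowns the system is overdetermined, which is exactly why using more than the minimum three stars averages down the observational errors.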

>I think Roger (right Heinz?) has a nice solution using DSLR RGB... we're just waiting for him to publish his results. Exactly. Roger is proposing to get rid of the color correction at the magnitude level entirely, by applying a kind of "photometric" DSLR white balance already during raw image extraction. Instead of using just the green channel(s), you would apply a linear correction (linear in ADUs!) using the blue and red channels as well. This would be very easy to implement in software, and would probably even allow using existing software. Alternatively, I tried a linear fit of the difference in (uncorrected) magnitude between the Bayer-matrix blue and green filters against the catalog (B-V) values, and the fit seems to be OK, implying that you can get decent (B-V) estimates by evaluating the blue and green channels of the DSLR. But of course Roger's method as described above would be easier to use, unless you are really interested in the (B-V) values themselves (not just for color correction). CS HB
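As a toy illustration of the blue-minus-green fit described above (all numbers are fabricated; a real fit would use catalog stars measured in your own frames, and the slope and intercept would come out of your data):

```python
import numpy as np

# Hypothetical demo: fit a straight line mapping the instrumental
# Bayer blue-minus-green color (b - g) of comparison stars to their
# catalog (B-V). The linear relation here is invented for illustration.
b_minus_g  = np.array([-0.10, 0.15, 0.42, 0.71, 0.95, 1.25])  # instrumental
catalog_BV = 1.10 * b_minus_g + 0.08                          # fabricated relation

slope, intercept = np.polyfit(b_minus_g, catalog_BV, 1)

# Apply the fitted relation to an uncalibrated target star:
target_bg = 0.55
estimated_BV = slope * target_bg + intercept
```

Once the relation is calibrated on comparison stars, it gives a (B-V) estimate for the target from the same DSLR frames, with no external color measurement.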

Hi Heinz and Brian, Thanks for your replies. I need to learn about spreadsheet formulae that use matrices! On the issue of determining extinction coefficients from single frames, I noticed how wide the DSLR fields are and thought that might be the answer. I'm still intrigued that the method works, and clearly you have found that to be the case. I find it particularly interesting that a wide range of star colours is used. If extinctions were determined using classical methodology for each star individually in such a group of stars, the slopes of the plots would vary depending on the colour of the star. So, for determining extinction using just a single field, the readings for each star would be varying according to both (1) the colour of the star, and (2) the air mass for the star. In view of that, I've still got to get my head around how the method actually works. This is fascinating stuff. I've been holding off getting into CCD photometry until I could decide what camera to use. After browsing the material we've just been discussing, the decision is now made, and I'll be obtaining a DSLR. Roy

Hi Roy, It's cool, isn't it? Let's take a simple case. Imagine you have some data in 3D that fit perfectly into a single plane. You can find the slope of the plane in the x-z plane, the slope in the y-z plane, or both at the same time (by finding the equation of the plane in which the data reside). It's this third method we're using. Whoops... gotta go. Almost time for a meeting (it's been a horrible day for meetings). Brian
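The plane picture can be sketched numerically. The point of the joint fit is that when the two predictors are correlated (as airmass and colour can be across a single field), fitting one slope at a time gives a biased answer, while fitting the whole plane recovers both slopes at once. A made-up example:

```python
import numpy as np

# Points lying on the plane z = 2x + 3y + 1, where x and y are
# correlated (all values invented for this illustration).
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 8)
y = 0.5 * x + rng.normal(0, 0.1, 8)      # y is correlated with x
z = 2 * x + 3 * y + 1

# The "third method": solve for both slopes and the offset jointly.
A = np.column_stack([x, y, np.ones_like(x)])
a, b, c = np.linalg.lstsq(A, z, rcond=None)[0]   # -> ~2, ~3, ~1

# A naive 1D fit of z against x alone absorbs part of y's slope,
# because x and y are correlated:
naive_slope = np.polyfit(x, z, 1)[0]             # noticeably different from 2
```

This is the same effect Roy describes above: per-star classical slopes vary with colour precisely because colour and airmass are entangled in a single field, and the joint plane fit is what untangles them.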

Hi Brian, That's a great example. I think I've thought of a way that might show what you mean. Image the field on a few occasions over two or three hours, then tabulate (I think) V-v, air mass, and B-V in three columns of a spreadsheet, and do a 3D plot. If IRIS were used to obtain the instrumental magnitudes, it would be necessary to use the Automatic Photometry function. Manually obtaining the instrumental magnitudes for each star in each frame would be very tedious.

Hi Heinz, all, Just some news: I have now fully implemented the Pickles spectral library in my simulation and it works well. That library includes 131 spectra synthesized from a number of other libraries, and it covers most of the HR diagram. I had to search a lot of documentation to find the right information on how the library was built and how to use it; it took me some time! Now I think I can answer what the limits of the color transforms for a DSLR are. They work very well for (B-V) from -0.2 to 1.3 for the main-sequence stars and up to the giants (V~III) at one airmass. The errors are below 10 mmag, even 5 mmag in many cases. For luminosity classes I and II the error is somewhat higher, about 10~15 mmag, and up to 40 mmag for the redder V~III stars ((B-V) ~ 1.8). There are 3~4 spectra (M7III~M10III) that are totally incompatible with any transform, showing very large errors (such stars have nearly no continuum in the B & V bands!). My color compensation technique shows errors somewhat smaller than the classical transform. I have also tested ways to compensate for the blue extinction (the atmospheric reddening), and it works well in simulation too. Now the issue is to find an easy way to implement that reddening compensation in the actual process; this needs a calibration that is not straightforward. Yours truly, Roger

Hi! It looks strange that there is the same formula in all cells, but the explanation is simple: it is a matrix (array) formula, indicated by the curly braces. So all the cells contain the same array formula, but when computed, each cell displays a different element of the result matrix, so the values will differ. I don't see an error here. As for the second question: indeed, the spreadsheet is an attempt to do extinction correction from a single stack of images. The reason one can hope to get away with this simple procedure is that a DSLR with a telephoto lens has a rather huge field (note the comparison stars mentioned in the spreadsheet). When you take images with a DSLR and telephoto lens at high airmass (otherwise you don't need extinction correction anyway), you should have a "gradient" of extinction within the field, and the spreadsheet will try to compensate for it. The technique that you describe is far more time-consuming, and I think it would be more appropriate for the "expert" skill level than for the "intermediate" one. But yes, it's a trade-off of precision versus ease of use, I guess. CS HBE
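To see why the same {LINEST(I19:I24,F19:G24,1,1)} array formula yields different numbers in different cells, here is a rough Python equivalent of a two-predictor LINEST with the stats flag set (all data invented): the coefficients correspond to the first row of LINEST's output (e.g. B42:D42) and their standard errors to the second row (e.g. B43:D43). One caveat worth checking in the spreadsheet: Excel's LINEST lists the slopes in reverse column order.

```python
import numpy as np

# Two predictor columns (e.g. airmass F19:F24 and color G19:G24) and one
# response column (I19:I24). Values below are fabricated for illustration.
F = np.array([1.20, 1.22, 1.25, 1.27, 1.30, 1.33])
G = np.array([1.40, 0.05, 0.90, 0.35, 1.10, 0.62])
I = -0.25 * F - 0.12 * G - 21.0 + np.array([1, -1, 2, -2, 1, -1]) * 1e-3  # small noise

# Least-squares fit: I = c0*F + c1*G + c2
A = np.column_stack([F, G, np.ones_like(F)])
coef, rss, *_ = np.linalg.lstsq(A, I, rcond=None)

# Standard errors of the coefficients, as in LINEST's second output row:
n, p = A.shape
sigma2 = rss[0] / (n - p)                  # residual variance
cov = sigma2 * np.linalg.inv(A.T @ A)      # coefficient covariance matrix
stderr = np.sqrt(np.diag(cov))             # one standard error per coefficient
```

One formula, several outputs: the spreadsheet simply spreads this coefficient vector and error vector across adjacent cells, which is why every cell shows the identical formula text.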