The image shows the sky area between the constellations Phoenix and Sculptor, and comet Lemmon is the green fuzzy thing below and to the right of the center (you may want to click on it to see it full size). It was taken from here on March 11, 2013, at around 21:40.
It doesn't look too great, huh? Well, on the right you have a 100% crop of the comet area, including a 4× enlargement of the head. Now it looks much better (again, click on it, I'll wait here). The head and tail are clearly visible, showing the characteristic greenish color of this comet. This particular color is due to cyanogen and diatomic carbon gases being released from the nucleus as it is heated by the Sun. Comparing the luminosity of the comet head with nearby stars, I would say its apparent magnitude was around 6.7 at the time. In the thumbnails below you'll find two other images. The first one is a black and white version with the brightness and contrast increased so the faint tail is more visible (the head gets overexposed). The smudge above the comet is NGC 55, an 8th magnitude irregular galaxy located some 7 million light years away. The second image shows a wider view of the area with all the stars brighter than magnitude 5.5 identified.
Granted, this is not the best image ever of the comet (Google is your friend, look for better ones), but what makes it more striking to me is the way I took it. I didn't use a telescope. I didn't use a tracking equatorial mount. I wasn't even in the best possible location: even in a high-altitude, dark place like Bariloche, being at the shore of a wetland and below street lights is not astrophotography's definition of "best location". What follows is a description of how I got away with it.
A humble journey into astrophotography
First things first: this is not an astrophotography tutorial by any means. I'm far from being qualified to write one. This is just a collection of ideas I hope can help other people in my situation, shooting the sky with the instruments I have at hand. It will get a bit technical at some points, but I think it's good to have a basic idea of why things are done like this (and anyone is welcome to point out if I'm wrong somewhere). It will also focus on deep space objects, so keep in mind that some points might not be relevant to "brighter" things like planets. What I've found after a couple of sessions (and especially after processing these images) is that taking good pictures of the sky can be surprisingly easy if done right.
Astrophotography is no different from any other photographic style: you need to get the right amount of light to the camera sensor. The problem here is that the "available amount" of light is usually much lower than the "right amount" of light. We can fight this limitation both during picture taking (getting the most possible light) and afterwards during image processing (basically using image stacking). Camera hardware is a very important factor, but most of the time it acts as a trade-off with image processing: the better the hardware available, the less processing is needed for the final image. I'm currently using a Canon 5D Mark II, and its full frame sensor is a great advantage in low light (lower noise). The pictures in this post were taken with a Canon EF 100mm f/2.8 USM Macro Lens, which is very sharp wide open and maximizes the amount of light available for the image. But I'd say that any current DSLR with a decent lens is enough, if one is prepared to spend the needed time taking and processing the images.
When we take pictures under normal lighting we learn that "the lower the ISO, the less the noise". One thing that surprised me when I started researching this subject was that the lowest ISOs are usually not the best choice in astrophotography. When photographing the night sky, you're normally taking a wildly underexposed image. Under those circumstances, in many cameras the signal-to-noise ratio (S/N) will be limited by read noise, the noise introduced by the analog-to-digital converter (ADC). Crucially, the ADC sits behind the sensor amplifiers, so the read noise it adds is independent of the ISO setting. In terms of read noise it's much better to take a picture at ISO 800 than to take it at ISO 100 and push the exposure 3 stops in post-processing, because in the second case you're multiplying the read noise by 8. On the other hand, at very high ISOs the dynamic range starts to drop, because the amplified signal saturates the ADC well before the pixel's full well capacity is used. The result is that the ideal ISO is somewhere in between (and it's camera-dependent). Typical numbers I've seen for the 5D Mk II optimal point are ISO 800 and 1600; I personally prefer the former (see note 1).
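A toy Monte Carlo sketch of this effect (all the numbers here are invented for illustration, not measured 5D Mark II values): the signal is amplified before the ADC, the read noise is added at the ADC, and then any digital push multiplies both together.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented numbers for illustration: a faint signal of 20 photoelectrons
# per pixel, a constant ADC read noise of 8 ADU, and a gain of 0.1 ADU
# per electron at ISO 100.
signal_e = 20.0
read_noise_adu = 8.0
gain_iso100 = 0.1
n = 100_000

photons = rng.poisson(signal_e, n)  # photon-count (shot) noise

def snr(gain, digital_push):
    # Amplify before the ADC, add read noise at the ADC, then push digitally.
    adu = photons * gain + rng.normal(0.0, read_noise_adu, n)
    adu = adu * digital_push
    return adu.mean() / adu.std()

snr_iso800 = snr(gain_iso100 * 8, 1)  # ISO 800: 3 stops of hardware gain
snr_pushed = snr(gain_iso100, 8)      # ISO 100 pushed 3 stops in post

print(f"ISO 800: {snr_iso800:.2f}  ISO 100 pushed: {snr_pushed:.2f}")
```

With these toy numbers the hardware gain wins clearly, because the digital push amplifies the read noise along with the signal.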
The only other variable under our direct control is exposure time (assuming you use the widest "decent" aperture of the lens). The statistics of light detection mean that the S/N improves as the square root of the signal (photon arrivals follow a Poisson distribution); the associated noise is called photon-count noise. For a fixed ISO, the longer the exposure time, the higher the signal and the better the S/N (other noise sources aside, see note 1 and note 2). But then we hit another limitation: the sky is moving! As with any other moving subject, this limits how long we can expose the sensor before the movement blurs the image, and the limit depends on focal length and pixel size (note 3). I've found a useful small app that estimates the maximum exposure time given those variables plus the declination of the sky object. The best thing is to test different times around this starting point, looking for the longest one we can live with given the subject in the picture (a wide view of the galaxy is more forgiving than a star cluster at the same focal length). You can see in the pictures of the comet that star trails begin to be apparent in these 5 second exposures at 100mm. Although the effect is not dramatic on the comet itself, maybe a shorter 4 second exposure would have been a better choice.
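As a rough sketch of the geometry involved (not necessarily the formula the app uses), the trail length in pixels follows from the sidereal rate of about 15 arcseconds per second of time and the pixel scale of the setup. The 6.4 µm pixel size and the declination below are my own assumed numbers for the 5D Mark II and this part of the sky:

```python
import math

def trail_pixels(exposure_s, focal_mm, pixel_um, declination_deg):
    """Approximate star trail length in pixels for a fixed (untracked) camera.

    The sky rotates about 15.04 arcseconds per second of time, and the
    apparent motion scales with cos(declination). The pixel scale in
    arcsec/pixel is 206.265 * pixel_um / focal_mm.
    """
    pixel_scale = 206.265 * pixel_um / focal_mm  # arcsec per pixel
    motion = 15.04 * exposure_s * math.cos(math.radians(declination_deg))
    return motion / pixel_scale

# 5 s at 100 mm with ~6.4 um pixels, for an object near declination -35 deg:
print(round(trail_pixels(5, 100, 6.4, -35), 1))  # about 4.7 pixels
```

A trail of a few pixels is consistent with the slight elongation visible in the star images, and it shrinks proportionally with a shorter exposure.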
Once we've decided on the best camera configuration, it's time to get the picture. Or pictures, actually. Unless our subject is relatively bright, a single exposure will look extremely noisy, like this:
This is exactly the same sky area as in the first image, but it corresponds to a single unprocessed picture. Comet Lemmon is inside the circle. To the left there is an animated version so you can better judge the change between both images. There are basically two types of noise affecting the unprocessed image. The first type is deterministic, and is produced by the slightly different dark current at each pixel, which results in different background levels (you've probably heard of "hot pixels" before). The second type is random noise, the result of the combination of read, thermal, and photon-count noise. Both types of noise are present in every image we take, but they only dominate the final result when the useful light signal is very low.
An important part of astrophotography image processing is getting rid of this noise, and there are well-defined and proven ways of doing it. For example, to reduce random noise one relies on statistics: if we average N pictures, the typical random noise in the final image is reduced by a factor of the square root of N. This process is called "image stacking", and it can drastically increase the dynamic range of the image. Average 10 pictures and the noise is reduced by a factor of 3.2 (1.7 stops); average 100 and it is reduced by a factor of 10 (3.3 stops). Notice that, because of the square root, the marginal benefit decreases as the number of pictures increases. In practice, a number of shots between 10 and 100 is normally the best compromise between noise reduction and time required. As the sky rotates between shots, the images must also be correctly aligned before the average is done. I use the free program DeepSkyStacker to automatically align and stack the pictures (see note 4).
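A toy NumPy simulation (with made-up signal and noise levels) shows the square-root behavior directly:

```python
import numpy as np

rng = np.random.default_rng(1)

# A flat "sky" of level 100 with per-pixel random noise of sigma 10
# (toy numbers, in arbitrary units).
def frame():
    return 100.0 + rng.normal(0.0, 10.0, size=(100, 100))

single = frame()
stack10 = np.mean([frame() for _ in range(10)], axis=0)
stack100 = np.mean([frame() for _ in range(100)], axis=0)

# The residual noise drops roughly as 1/sqrt(N): ~10, ~3.2, ~1.0.
print(f"{single.std():.2f} {stack10.std():.2f} {stack100.std():.2f}")
```

The mean level stays at 100 in all three cases; only the scatter around it shrinks, which is exactly what lets faint details emerge from the noise floor.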
To remove the deterministic part of the noise, on the other hand, we do a "dark frame subtraction": we subtract from the sky picture (the "light frame") a second picture taken with the lens cap on (the "dark frame"). This dark frame must be taken at the same ISO, camera temperature and exposure time as the light frame, because the dark current depends on all of them. Dark frames suffer from the same random noise problems as light frames, so we must also average several dark frames to avoid introducing extra noise into the final image. This master dark frame should be built from at least the same number of shots as the light stack (see note 5). You can either take the dark frames in between light frames or take them all together at the end. I prefer the former (5 light, 5 dark, repeat) to be sure any temperature variation along the session is properly taken into account, but I must be careful not to move the camera when covering the lens. DeepSkyStacker automatically processes all the dark frames to produce a master dark frame, and then subtracts it from each light frame before aligning and stacking them. After DeepSkyStacker has exported the final 32-bit FITS file, I use SAOImage DS9 to separate the three color channels and feed them to a small command line program called STIFF, which produces a gamma-corrected 16-bit TIFF. I do all of this in Linux, running DeepSkyStacker under Wine.
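Conceptually, the calibration step looks something like this toy NumPy sketch (the sensor model and all the numbers are invented for illustration; this is not what DeepSkyStacker literally does internally):

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (50, 50)

# Toy sensor model: a fixed per-pixel dark-current pattern ("hot pixels")
# plus random noise on every exposure; all numbers are made up.
dark_pattern = rng.exponential(5.0, shape)  # deterministic per-pixel offsets
sky = 20.0                                  # true sky signal we want

def light_frame():
    return sky + dark_pattern + rng.normal(0.0, 3.0, shape)

def dark_frame():
    return dark_pattern + rng.normal(0.0, 3.0, shape)

# Master dark: average many darks so its own random noise is small.
master_dark = np.mean([dark_frame() for _ in range(50)], axis=0)

# Subtract the master dark from each light frame, then stack.
calibrated = np.mean([light_frame() - master_dark for _ in range(50)], axis=0)

print(f"{calibrated.mean():.2f}")  # close to the true sky level of 20
```

The fixed hot-pixel pattern cancels out in the subtraction, while the averaging in both stacks keeps the random noise it would otherwise reintroduce under control.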
In this particular case, the final comet image is a stack of 50 light frames of 5 seconds each (equivalent to a total exposure of 4 minutes and 10 seconds) and 50 dark frames. As I said before, they were taken at ISO 800, at an aperture of f/2.8 and a focal length of 100mm, on a full frame DSLR. The processing should have given me around 7 times more dynamic range (2.8 extra stops) with respect to a single picture. Looking at a digital star map (Stellarium) I can easily spot stars down to 12th magnitude in the final image, which under the circumstances I think is amazing.
Note 1: There are a couple of extra points to take into account. If exposure times are very long (several minutes), thermal noise dominates the S/N. This noise is proportional to the square root of both exposure time and ISO, so it should be constant for a constant exposure value (a longer time is compensated by a lower ISO, and vice versa) (see also note 2). The second point is that in many cameras it seems only the full-stop ISOs (100, 200, 400, ...) use real hardware amplification; the others are just digital corrections applied after the ADC or directly in software. So there is usually no advantage in using non-full-stop ISOs here.
Note 2: The dark current (the source of thermal noise) has another harmful side effect: it fills the pixel with electrons that were not produced by photons, which reduces the dynamic range and lowers the S/N for very long exposure times.
Note 3: Of course, the ideal solution here is to use a tracking equatorial mount, which matches the rotation of the sky so it looks still to the camera. However, good small equatorial mounts are expensive enough to require a certain level of astrophotography commitment (which I don't have at the moment), and inexpensive mounts are not solid enough to support a DSLR with a (possibly) big lens attached. Probably a do-it-yourself barn door tracker is the solution; I have yet to build one (for example, this one or this other). However, take into account note 2.
Note 4: I feed the program the original RAW files from the camera because then it works on the images before the Bayer matrix interpolation is done. This should minimize the cross-talk of noise between colors with respect to working on already developed 16-bit TIFF files (in any case, the latter must be linear, not logarithmic).
Note 5: How much noise is introduced by the master dark frame will depend on how photon-count noise compares to the other two noise sources (dark frames have only read and thermal noise). If photon-count noise is the dominant noise source, then using a similar number of dark and light frames won't significantly increase the final noise. But if read or thermal noise dominates (which can happen for very short or very long exposures), then a similar number of darks will introduce around 1/2 stop of extra random noise in the subtracted image. In this latter case, a larger number of dark frames should be considered. Notice also that most modern DSLRs can perform in-camera dark frame subtraction (they call it "long-exposure noise reduction"), but there are practical and theoretical reasons to do it manually when many pictures are involved. If you're doing it manually, you must make sure this option is disabled on the camera.
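The 1/2 stop figure follows from adding the two independent noise contributions in quadrature. A quick check, under the assumption of equal frame counts and read/thermal-dominated noise:

```python
import math

# Per-frame noise sigma (arbitrary units) in the read/thermal-dominated
# case, stacking N lights and N darks (assumed equal counts).
sigma, n = 1.0, 50
light_stack_noise = sigma / math.sqrt(n)
master_dark_noise = sigma / math.sqrt(n)

# The subtraction adds the two independent noises in quadrature.
subtracted_noise = math.hypot(light_stack_noise, master_dark_noise)

ratio = subtracted_noise / light_stack_noise
print(ratio, math.log2(ratio))  # sqrt(2) ~ 1.414, i.e. 0.5 stops
```

Doubling the number of darks relative to lights would cut the master dark's contribution in half (in variance), which is why more darks help in this regime.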
1 - ClarkVision.com: The Canon 5D Mark II Digital Camera Review: Sensor Noise, Dynamic Range, and Full Well Analysis
(this post in Spanish)