Digital Single Lens Reflex (DSLR) cameras are very popular among astrophotographers. DSLRs are typically cheaper than dedicated CCD cameras, they can be used in the field without a computer and, since that is what they are designed for in the first place, they can be used to take all sorts of non-astronomy photos. Yet they are able to produce superb images of deep-sky objects, particularly when modified by removing the manufacturer's IR/UV filter built into the camera and replacing it with one that allows photons at the Hydrogen-alpha line to reach the sensor. Check Hap Griffin's or Gary Honis's websites to get an idea of what can be done with a modified DSLR (plus great experience, high-quality equipment and great skies, I would add... don't expect to take images like those the first time you try!)
DSLRs, on the other hand, are not as popular among lunar and planetary astrophotographers. For a long time commercial webcams like the venerable Philips ToUcam Pro 840K, combined with techniques like Lucky Imaging and great freeware like Registax, have provided astrophotographers with excellent tools for creating high-resolution images of the Moon and the planets that rival (or surpass) those obtained with the largest telescopes in the world before the digital revolution. What makes webcams excellent at imaging the Moon and the planets is their ability to shoot hundreds or thousands of very short frames (< 0.2s) in rapid succession. Using dedicated software such as Registax, it is possible to carefully align and stack the sharpest of these frames, eliminating the blurring caused by atmospheric turbulence (seeing). A definite improvement over webcams are the dedicated planetary cameras introduced to the market recently, like those from The Imaging Source or Lumenera. The concept behind these higher-quality, more expensive cameras is the same: shoot many frames, select the best ones, then align and stack them to beat the seeing.

In theory a DSLR could be used to take many short-exposure frames to be stacked later, but there are serious limitations to this approach. First of all the large sensor, which translates into very large (storage-wise) frames, limits the frame rate when shooting in continuous mode. For example, my Canon XSi cannot go beyond 3.5fps. Other, more expensive models can go higher, like the Canon EOS 1D Mark III (10fps), but then another limitation sets in. While the camera shoots in continuous mode, frames are stored in a buffer before being written to the memory card. The capacity of the buffer is limited, so it fills very quickly. It takes only 53 frames in (high-quality) JPEG format or 12 in RAW format to fill the buffer of a Canon XSi (similar numbers hold for higher-end cameras).
When that happens the camera stops shooting until the buffer has been cleared. That kind of performance is nowhere near what a webcam or planetary camera can do: they can shoot thousands of frames without interruption. To be fair, saving a 640x480 pixel frame (typical of a webcam or planetary camera) is not the same as saving a 4272x2848 pixel frame (Canon XSi). The point, though, is that the large sensor of a DSLR becomes a drawback when it comes to saving many frames in rapid succession. That's too bad, because DSLRs have very low-noise sensors and deliver great colours and better tonal range.
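To put the buffer numbers above in perspective, here is a quick back-of-the-envelope sketch (using only the XSi figures quoted in the text) of how long continuous shooting can last before the camera stalls:

```python
# How long can a Canon XSi shoot continuously before its buffer fills?
# Figures from the text: 3.5 fps burst rate, buffer holds 53 JPEG or 12 RAW frames.
def seconds_until_buffer_full(buffer_frames, fps=3.5):
    return buffer_frames / fps

print(round(seconds_until_buffer_full(53), 1))  # JPEG: ~15.1 s of shooting
print(round(seconds_until_buffer_full(12), 1))  # RAW:  ~3.4 s of shooting
```

A few seconds of frames is simply not enough material for lucky imaging, which is exactly the limitation described above.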
That said, most DSLR cameras on the market today support video output. For example, my Canon XSi comes with a composite video cable (the one with the yellow plug). The cable plugs into the side of the camera (VIDEO OUT jack) at one end and into the composite video jack of a TV or other video device at the other. When I connect the camera to my TV set, the content of the LCD screen on the back of the camera is displayed on the TV screen. So if I preview a photo stored on the camera's memory card, the photo (which would normally appear on the LCD screen) appears on the TV. But here is the catch. My Canon XSi (like many other new and not-so-new DSLR cameras) has Live View capabilities. When Live View is on, live images are displayed on the LCD screen. So if you have your video cable plugged into a TV and Live View turned on, you will enjoy a live feed from the camera to the TV screen.
Cameras like my Canon XSi do not have High Definition video capabilities (1080p), so the resolution they can provide is somewhat limited, but still more than acceptable in my opinion.
The interesting thing is that the video signal is fed to the TV at 30 frames per second (fps), which is much higher than what a webcam allows for astroimaging. It is true that you can set 30 fps on certain webcams, but the resulting signal is very noisy because of the compression the camera firmware must apply to stream frames to the computer at such a high rate. That's why webcams are usually run at 5 or 10 fps: to minimize noise. However, if we could record video at 30fps (similar to what dedicated planetary cameras provide) using a DSLR, the results could be interesting, given their much better sensors compared to webcams.
But how do we record the video generated by a DSLR? The solution is a frame grabber. I purchased a Pinnacle Dazzle DVD Recorder for about $50 and installed the drivers on my laptop (they come on the installation CD). Once you have a frame grabber, plug it into one of the USB ports on your computer. After making sure it is connected properly (the Dazzle has a green LED that turns on when that happens), plug the composite cable from the camera into the composite video jack (yellow) on the frame grabber. At that point you are ready to stream the video signal from the camera to your computer. To record the video you can use the software application that comes with your frame grabber (it is on the installation CD), but I prefer the excellent (and free!) VirtualDub. Once your camera is connected to your laptop through the Dazzle, launch VirtualDub. Under the File menu select Capture AVI... If at that time your DSLR is the only video device connected to the laptop, VirtualDub will automatically display the content of the LCD display on the back of the camera. Turn on Live View on the camera and you will see live images displayed by VirtualDub on the screen of your computer. To record the video as an AVI file, click on the Capture menu and select Capture Video. VirtualDub will then start saving an uncompressed AVI file on your computer. The resolution of the individual frames is 640x480 pixels. Beware that at 30fps uncompressed AVIs grow really fast: a 5000-frame AVI of Jupiter shot with this technique generated a 3.1GB file in slightly less than 3 minutes! With modern hard drives 3GB is not a huge amount of space anymore, but if your disk is nearly full this is something you need to be aware of.
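The disk-space arithmetic is easy to sketch. One assumption on my part: the numbers only come out right if the grabber captures in a 2-bytes-per-pixel format such as YUY2 (4:2:2), a common default for composite-video frame grabbers; the post does not say which pixel format was used.

```python
# Estimate the size and duration of an uncompressed 640x480 capture.
# The 2 bytes/pixel figure assumes a YUY2 (4:2:2) pixel format, which is a
# common frame-grabber default and matches the ~3.1GB figure in the text.
def capture_size_gb(frames, width=640, height=480, bytes_per_pixel=2):
    return frames * width * height * bytes_per_pixel / 1e9

def capture_minutes(frames, fps=30):
    return frames / fps / 60

print(round(capture_size_gb(5000), 1))  # ~3.1 GB for 5000 frames
print(round(capture_minutes(5000), 1))  # ~2.8 minutes at 30 fps
```

Both figures agree with the 5000-frame Jupiter capture described in the text, which lends some support to the YUY2 assumption.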
The next question to answer (which is fundamental to high-resolution lunar or planetary imaging) is: what is the minimum magnification required to ensure that the target is imaged with the highest possible level of detail (as permitted by aperture and seeing conditions)? The answer is provided by the Nyquist Theorem, a fundamental result of Signal Processing Theory. A corollary to the Nyquist Theorem says that a signal is sampled with enough detail if the smallest detail covers at least two pixels on the camera sensor. So if Δl is the size of the smallest detail allowed by the telescope aperture and μ is the pixel size (both measured in microns), then:
Δl / μ > 2
According to Diffraction Theory, Δl can be approximated by the following expression:
Δl = λF / D
Where λ is the wavelength used to image the planet, F the focal length of the telescope and D its aperture. Substituting the expression for Δl into the first inequality and solving for F gives the following expression:
F > 2D μ / λ
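For readers who want to plug in their own numbers, here is a minimal sketch of the inequality above (units chosen so the microns cancel in μ/λ):

```python
# Minimum focal length from the Nyquist criterion: F > 2 * D * mu / lambda.
# Aperture in mm; pixel size and wavelength in microns (the microns cancel).
def min_focal_length_mm(aperture_mm, pixel_um, wavelength_um=0.55):
    return 2 * aperture_mm * pixel_um / wavelength_um

D = 254.0   # 10" Newtonian aperture in mm
mu = 5.2    # Canon XSi pixel size in microns
F = min_focal_length_mm(D, mu)
print(round(F))         # ~4803 mm, i.e. the F > 4800mm figure below
print(round(F / D, 1))  # minimum focal ratio, ~f/18.9
```

Swap in your own aperture and pixel size to find the focal length your setup needs to reach.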
I own a 10" Newtonian reflector (D = 254mm) and the pixel size of my Canon is 5.2μ. Assuming λ = 0.55μ (green light, to which the eye is most sensitive), F > 4800mm. In the case of my telescope I need to find a way to increase the focal length by a factor of at least 4 (resulting focal ratio f/18.9). Eyepiece projection is one way of achieving this: the camera, without a lens, is connected to an eyepiece inserted in the focuser via a special adapter. I own an Orion Universal Camera Adapter that I use for eyepiece projection (a T-ring is required to connect the camera body to the adapter). Using the adapter with a 10mm Plossl eyepiece boosts the focal length to 9400mm and the focal ratio to f/36.8. This is definitely more than I need, but it also makes Jupiter 62 pixels wide on a 640x480 frame, which is almost perfect in my opinion (a bit bigger wouldn't hurt).
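As a sanity check on the 62-pixel figure, here is a hedged back-of-the-envelope calculation. Two assumptions of mine that are not in the text: Jupiter was about 47 arcseconds across in July 2009, and Live View downsamples the 4272-pixel-wide sensor to the 640-pixel-wide video frame, which enlarges the effective pixel size accordingly.

```python
# Image scale of the eyepiece-projection setup, as seen in the 640x480 video.
# Assumptions (mine, not from the text): Jupiter ~47 arcsec across, and the
# 4272-pixel sensor width downsampled to the 640-pixel video width.
F_eff_mm = 9400.0                         # projected focal length quoted above
sensor_px_um = 5.2                        # physical pixel size
downsample = 4272 / 640                   # sensor width -> video width
video_px_um = sensor_px_um * downsample   # effective pixel size in the video

# Plate scale in arcsec per video pixel: 206265 * pixel_size / focal_length
scale = 206265 * (video_px_um / 1000.0) / F_eff_mm
print(round(scale, 2))        # ~0.76 arcsec per video pixel
print(round(47.0 / scale))    # Jupiter ~62 pixels wide, matching the text
```

The fact that this lands exactly on 62 pixels suggests the downsampling assumption is about right; note it also means the video frames are sampled much more coarsely than the raw sensor, so there is extra headroom in the Nyquist criterion.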
On the very early morning of July 12 at 1:24am MDT, I took a 5000-frame video of Jupiter from my backyard using the technique outlined above. At Edmonton's latitude of 53.5 degrees North (Canada), Jupiter was only 14 degrees high on the SE horizon, just above my neighbour's roof. I wasn't very hopeful about the quality of the results (seeing wasn't exactly optimal...) so I didn't even bother checking the collimation of my telescope. One thing I noticed was that I had to set the gain to ISO 1600 to get the planet to show up bright enough on the laptop screen: I learned that changing the camera gain affects the brightness of the images displayed by Live View. As soon as I reached the best focus I could get (not too difficult with live images displayed on the laptop screen) I noticed that some details could be discerned (the main belts, but also a couple of secondary ones, although intermittently), so I began to think that I could get something valuable out of this experiment. After Jupiter I slewed the telescope to the Moon and recorded a 4000-frame video of the region along the terminator around the craters Theophilus and Cyrillus. The Moon was very low too: about 18 degrees, which didn't help with the seeing.
The next day I processed the AVI files in Registax 5. Instead of using the wavelet filters in Registax I used Focus Magic and the High Pass filter in Photoshop for sharpening.
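For readers without Photoshop, the High Pass sharpening idea is easy to reproduce: blur the image to isolate the low frequencies, subtract to get the fine detail, and add a fraction of that detail back. This is only a minimal numpy/scipy sketch of the general technique, not my exact Focus Magic / Photoshop workflow:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def high_pass_sharpen(img, radius=2.0, amount=0.7):
    """Approximate High Pass sharpening on an 8-bit-range grayscale array:
    subtract a Gaussian-blurred copy to isolate fine detail, then add a
    fraction of that detail back to the original."""
    img = np.asarray(img, dtype=np.float64)
    low = gaussian_filter(img, sigma=radius)  # low-frequency component
    high = img - low                          # high-frequency detail
    return np.clip(img + amount * high, 0, 255)
```

On a stacked frame this boosts edge contrast (belt boundaries, crater rims) while leaving flat areas untouched; `radius` and `amount` play roughly the roles of Photoshop's High Pass radius and the blend opacity.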
Here are the images:
Jupiter
Moon - Theophilus and Cyrillus
Overall I find these results very promising. Despite the bad seeing, the image of Jupiter shows a hint of the blue festoons on the southern edge of the North Equatorial Belt as well as a trace of the North Temperate Belt. Incidentally, while I was about to publish this article, I found out that Jerry Lodriguss had posted an image of Jupiter on his website obtained using an almost identical technique. His image was taken with a Stellarvue SV70ED doublet refractor and eyepiece projection with an 18mm orthoscopic eyepiece working at f/32, so kudos to Jerry for getting such a nice image with such a small scope! As for the Moon image, although inferior to what I obtained in the past with my Philips Vesta webcam, it's still promising. The detail is not bad given the circumstances. Among the various features mentioned in the annotated version, Catena Abulfeda is certainly the most challenging, but it is definitely there.
Reading Jerry's post I noticed that he used the zoom offered by Live View to increase the size of the image. That's certainly another feature to explore. If it works sufficiently well it can be used to mimic longer focal lengths.
Cheers!
Sunday, July 12, 2009
Hi Massimo, I've recently purchased a Canon 1000d, it's nice to see what can be done using the live view and camera connected to a PC, I'm still trying to get used to the camera as far as setting it, unfortunately the weather has been very wet lately so I've not had a chance to experiment, I would appreciate any tips on the settings used. Regards from Jim Finis - Adelaide South Australia email jfinis@internode.net.au