combining exposures to reach an arbitrary SNR Goal RE: Takumar Pentax 6x7 tests RE: [ap-gto] [ap-ug] A colorful Southern Sky Beauty
A few years ago I made a theoretical plot showing the impact of read noise on the SNR of stacked images.
Were it not for read noise, and ignoring quantization for the moment, 100 x 1 sec is the same as 1 x 100 sec or 10 x 10 sec when it comes to SNR with uncorrelated noise.
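To make that concrete, here is a quick sketch of the standard CCD SNR equation for a stack of subs. The rates and the 5 e- read noise figure are made-up numbers for illustration, not measurements from any particular camera:

```python
import math

def stack_snr(signal_rate, sky_rate, read_noise, sub_exposure, n_subs):
    """SNR of n_subs stacked subs of sub_exposure seconds each.
    Shot noise depends only on total time; read noise is paid once per sub."""
    total_time = sub_exposure * n_subs
    signal = signal_rate * total_time
    noise = math.sqrt(signal + sky_rate * total_time + n_subs * read_noise**2)
    return signal / noise

# With zero read noise, how you split the time does not matter:
print(stack_snr(5, 20, 0, 1, 100))    # 100 x 1 sec
print(stack_snr(5, 20, 0, 100, 1))    # 1 x 100 sec -- identical SNR

# With 5 e- read noise, 100 x 1 sec pays the read-noise penalty 100 times:
print(stack_snr(5, 20, 5, 1, 100))    # noticeably lower SNR
print(stack_snr(5, 20, 5, 100, 1))    # one read-noise penalty
```

The point of the sketch: the read-noise term grows linearly with the number of subs, so the more you chop up a fixed total exposure, the worse you do.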
The quantization problem is pretty nasty with a camera whose signal spans only a small number of ADUs. Each ADU "tick" corresponds to a fixed number of millivolts input into the ADC. With very short exposures you may only see a handful of electrons in each pixel, and that may only give 4 or 5 ADU worth of signal. If you average or median combine a jillion such images, you will still only have the data spread over 4-5 ADU: it will not be a smooth high dynamic range image with a good SNR if you do that.
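A quick simulation of that quantization effect. The numbers are hypothetical (roughly 18 e-/pixel per sub, a gain of 4 e-/ADU); the point is simply how few distinct ADU levels a faint short sub actually occupies:

```python
import numpy as np

rng = np.random.default_rng(0)
gain = 4.0  # e- per ADU (hypothetical)

# A faint target delivering ~18 e-/pixel in one very short sub:
electrons = rng.poisson(18, size=100_000)

# The ADC truncates to whole ADU ticks:
adu = np.floor(electrons / gain).astype(int)

# Nearly all the data lands in just a handful of ADU bins:
print(sorted(set(adu.tolist())))
```

However many such subs you combine, each one only ever carried those few coarse levels in the first place.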
That’s the point I think a bunch of CMOS camera users are missing when they try to take a very large number of very short exposures to work around the flat fielding problems associated with the nonlinearity of CMOS cameras.
If the camera were intentionally made highly nonlinear, such that at the low end a small number of millivolts corresponds to one ADU, increasing monotonically until a large number of millivolts corresponds to one ADU near the top of the ADC's range, then you could work around some of the quantization issues.
Cassini and Galileo did something like that, but to compress the dynamic range so they could transmit fewer bits back to Earth. They used a square-root transfer function, and that cut the bit count in half.
To make the hardware simple, they used a lookup table implemented as a non-volatile memory to store the corrected values: the ADC's output was fed into the address lines of a non-volatile memory populated with the transformed values. They used a CCD, so there was only one source-follower amplifier to deal with.
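Here is a sketch of that square-root companding idea as a pair of lookup tables. The sizes are illustrative assumptions (a 12-bit linear range compressed to 6-bit codes), not the actual Cassini/Galileo table values:

```python
FULL_SCALE = 4096                      # assumed 12-bit linear ADU range
SQRT_BITS = 6                          # square root halves the bit count
TOP_CODE = 2**SQRT_BITS - 1

# Onboard encoding LUT: linear ADU -> compressed code (what gets transmitted)
encode = [round((v ** 0.5) * TOP_CODE / (FULL_SCALE - 1) ** 0.5)
          for v in range(FULL_SCALE)]

# Ground decoding LUT: code -> approximate linear value
decode = [round((c / TOP_CODE) ** 2 * (FULL_SCALE - 1))
          for c in range(2**SQRT_BITS)]

print(max(encode))                 # every code fits in 6 bits
print(decode[encode[1000]])        # round-trips 1000 ADU to within ~1%
```

Because the square root spends its codes more densely at the low end, the coarse steps all land up near full scale where shot noise dominates anyway.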
The CMOS flat fielding problem can be fixed by linearizing the data before flat fielding (a lookup table is a simple way), but in general each pixel has its own source follower, and each one is a little different. So if you want to transform each pixel with its own linearization, it would require as many lookup tables as there are pixels. That can be a lot of bits in the flash memory. For a 1-megapixel CMOS sensor (pretty modest these days), assuming a 10-bit ADC output feeding the memory (pretty typical for today’s CMOS sensors): that’s 1024 locations of perhaps 16-bit output word size (2 KB to transform one pixel) x 1 million pixels = 2 gigabytes for a 1-megapixel sensor (16 gigabits of memory).
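The memory arithmetic above, worked through step by step (same assumed numbers: 1 megapixel, 10-bit ADC, 16-bit table entries):

```python
pixels = 1_000_000      # 1-megapixel sensor (assumed)
adc_bits = 10           # ADC output drives the table's address lines
entry_bytes = 2         # 16-bit corrected output word per entry

table_bytes = (2 ** adc_bits) * entry_bytes   # one pixel's LUT
total_bytes = table_bytes * pixels            # one LUT per pixel
total_bits = total_bytes * 8

print(table_bytes)      # ~2 KB per pixel
print(total_bytes)      # ~2 GB for the whole sensor
print(total_bits)       # ~16 gigabits of flash
```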
Most likely a handful of shared lookup tables would make a practical pre-flat-fielding linearizer, rather than a separate table for each pixel.
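One plausible way to get down to a handful of tables (my own sketch, not a description of any shipping camera): bin the pixels by their measured response and let each bin share one LUT. The 2% gain spread and the 8-table count here are made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-pixel gain spread from the individual source followers:
pixel_gain = rng.normal(1.0, 0.02, size=10_000)

# Bin pixels into a handful of response classes by gain quantile:
n_tables = 8
edges = np.quantile(pixel_gain, np.linspace(0, 1, n_tables + 1))
table_index = np.clip(np.searchsorted(edges, pixel_gain) - 1, 0, n_tables - 1)

# One small 1024-entry LUT per class instead of one per pixel
# (here just a toy gain correction using each bin's mean gain):
luts = np.array([np.arange(1024) / edges[i:i + 2].mean()
                 for i in range(n_tables)])
print(luts.shape)   # 8 tables x 1024 entries: ~16 KB instead of ~2 GB
```

Each pixel then only needs a 3-bit table index stored alongside it, which is a far smaller memory bill than a full per-pixel table.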
From: firstname.lastname@example.org <email@example.com> On Behalf Of ayiomamitis
Sent: Saturday, May 1, 2021 8:51 AM
Subject: Re: Takumar Pentax 6x7 tests RE: [ap-gto] [ap-ug] A colorful Southern Sky Beauty
What you describe below is precisely what one observes with cooled CCD images. With my ST-10XME, I recall thermal noise being 2-3 ADU whereas read noise was around 100 ADU (from memory). As for driving down noise by increasing the number of images in the stack, I agree that one gets diminishing returns as the sample size (i.e. number of stacked images) increases. Again, from my own personal experience, anything around 16 to 20 images in the stack is optimal; additional images have only a minor further effect on the S/N.
On 01-May-21 16:55, Richard Crisp wrote: