FFT Tests to Search for Filtering in Camera Raw Data
Can you tell which images are filtered (some should be obvious),
and for the filtered images, what would be the impact on
derived QE, full well capacity, gain and read noise?
Below are two image sets, all 2D FFTs of images. Some of the FFTs are of
synthetic Gaussian-noise images (some filtered, some not), and some are from
actual camera raw data.
All are light frames of a flat field (actual camera data or simulated).
If raw data are filtered, what are the impacts on deriving:
- Derived signal-to-noise ratio at all levels?
- Derived Full Well Capacity (FWC)?
- Derived signal levels in photoelectrons (photons from the subject)?
- Derived gain (electrons/DN)?
- Derived Quantum Efficiency?
- Derived read noise?
- Derived dynamic range?
Set 1.
16-bit tif image of 2D FFT.
FFT analysis. Examining the levels from center to edges and corners in the
FFT images can indicate whether filtering is present in the image from which
the FFT was made. Except for the spike in the center, and possible
horizontal and vertical lines (illustrated in Figure 2C), the structure in
the 2D FFT of an unfiltered image should be essentially uniform from near center
to the edges and corners.
To analyze the center-to-edge-and-corner behavior, I examine 12x12 to 15x15
pixel blocks from near center to the edges and corners. The corners usually give
the largest lever arm. I call the ratio of high-frequency to low-frequency
levels the FFT ratio: the block average at each corner (high frequencies)
divided by the block average near the center (low frequencies), averaged over
all 4 corners. If there is no filtering, the FFT ratio will be essentially 1.0.
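As a concrete illustration of the measurement, here is a minimal Python (numpy) sketch. The block size and the offset used to keep the near-center block clear of the DC spike are my assumptions; the text above only specifies 12x12 to 15x15 pixel blocks taken near the center and at the corners.

import numpy as np

def fft_ratio(img, block=13, center_offset=20):
    # Corner (high-frequency) block average divided by a near-center
    # (low-frequency) block average of the 2D FFT magnitude, averaged
    # over all 4 corners.  block and center_offset are assumed values.
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img.astype(np.float64))))
    ny, nx = mag.shape
    cy, cx = ny // 2, nx // 2

    def block_mean(y, x):
        return mag[y:y + block, x:x + block].mean()

    # Near-center block, offset from the exact center so that the DC spike
    # (and any bright cross caused by gradients) is excluded.
    center_level = block_mean(cy + center_offset, cx + center_offset)

    # The four corner blocks sample the highest spatial frequencies.
    corner_level = np.mean([block_mean(0, 0),
                            block_mean(0, nx - block),
                            block_mean(ny - block, 0),
                            block_mean(ny - block, nx - block)])
    return corner_level / center_level

For an unfiltered Gaussian-noise flat field the returned value should be close to 1.0; noise filtering suppresses the high frequencies and pulls the ratio below 1.0.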
The images in set 1 are:
- 1A) Original synthetically generated, unfiltered image with Gaussian noise:
FFT ratio: 0.995
Image S/N: 999.76 / 99.93 = 10.00, Implied photons: 100.0.
- 1B) Image from 1A with filtering to reduce outliers but maintain close to
Gaussian noise. Filter threshold: 2.0 standard deviations.
FFT ratio: 0.973
Image S/N: 999.81 / 95.12 = 10.51, Implied photons: 110.5.
- 1C) Image from 1A with filtering to reduce outliers but maintain close to
Gaussian noise. Filter threshold: 1.5 standard deviations.
FFT ratio: 0.922
Image S/N: 1000.08 / 79.37 = 12.60, Implied photons: 158.8.
- 1D) Image from 1A with filtering to reduce outliers but maintain close to
Gaussian noise. Filter threshold: 1.8 standard deviations.
FFT ratio: 0.952
Image S/N: 999.96 / 90.20 = 11.09, Implied photons: 122.9.
One can plot the FFT ratio as a function of the signal-to-noise ratio improvement
to derive a curve of FFT ratio versus filtering level, then use this curve to
estimate the amount of filtering in camera raw data. The filtering algorithm in set 1 was designed
to produce a close-to-Gaussian result and so might fool some sensor analyses.
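For example, here is a minimal Python sketch that interpolates the set 1 points (FFT ratio versus percent overestimate of derived parameters). The actual curve form used in this article is not specified, so linear interpolation is only an assumption.

import numpy as np

# Calibration points from the set 1 images, ordered by increasing FFT ratio:
# 1C (0.922, 59%), 1D (0.952, 23%), 1B (0.973, 10%), 1A (0.995, ~0%).
ratios       = np.array([0.922, 0.952, 0.973, 0.995])
overestimate = np.array([59.0,  23.0,  10.0,  0.0])   # percent

def estimated_overestimate(measured_ratio):
    # Linear interpolation through the calibration points (an assumption).
    return float(np.interp(measured_ratio, ratios, overestimate))

print(estimated_overestimate(0.974))   # a ratio like the Nikon D810a's -> about 10%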
Using filtered data to derive sensor parameters means that quantum efficiency,
full well capacity, and gain are overestimated. Because a photon-noise-limited
signal has S/N = sqrt(photons), the implied photon level is (S/N)^2, so an
artificial S/N improvement inflates the derived signal, and the parameters scaled
from it, by the square of that improvement (a rough numerical sketch follows this list).
For the above filtered images, the impacts are:
Image 1B: QE, full well capacity, and gain would be overestimated by 10%.
Image 1C: QE, full well capacity, and gain would be overestimated by 59%.
Image 1D: QE, full well capacity, and gain would be overestimated by 23%.
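The exact outlier-reduction filter used to make images 1B through 1D is not described here, so the Python sketch below only illustrates the mechanism with a hypothetical filter (pixels more than k standard deviations from a 5x5 local mean are replaced by that local mean); the printed numbers will not match the set 1 values exactly.

import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(1)
true_photons = 100.0
img = rng.normal(1000.0, 100.0, (200, 200))   # flat field with S/N = 10

def clip_outliers(a, k, size=5):
    # Hypothetical outlier-reduction filter: pixels more than k standard
    # deviations from a local mean are replaced by that local mean.
    local = uniform_filter(a, size=size)
    out = a.copy()
    outliers = np.abs(a - local) > k * a.std()
    out[outliers] = local[outliers]
    return out

for k in (2.0, 1.8, 1.5):
    filtered = clip_outliers(img, k)
    snr = filtered.mean() / filtered.std()
    implied_photons = snr ** 2          # photon-noise relation: S/N = sqrt(photons)
    print(f"threshold {k} sigma: S/N = {snr:.2f}, "
          f"implied photons = {implied_photons:.1f}, "
          f"overestimate = {(implied_photons / true_photons - 1) * 100:.0f}%")

The smaller the threshold, the more the noise is suppressed and the larger the resulting overestimate of photons, and therefore of QE, full well capacity, and gain.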
Here are the images used to make the 2D FFTs in set 1 above:
16-bit tif image used in Set 1A
16-bit tif image used in Set 1B
16-bit tif image used in Set 1C
16-bit tif image used in Set 1D
Now let's look at a mix of more synthetic data and real camera data.
Set 2.
16-bit tif image of 2D FFT.
- 2A) Original synthetically generated, unfiltered image with Gaussian noise.
- 2B) Canon 7D2 raw file demosaicked by ImagesPlus to a linear tif.
Note that the Bayer de-mosaicking results in some noise reduction.
- 2C) Canon 7D2 raw file demosaicked by Adobe ACR with luminance noise reduction = 23.
The luminance noise reduction appears to be performed on linear data and
results in a little over 2x noise reduction compared to luminance noise reduction
set to 0. Note that, combined with the noise reduction from Bayer de-mosaicking,
there is over a 2x improvement in noise and thus over a stop of improvement in dynamic range.
- 2D) Original unfiltered image with no noise, just 16-bit integers, with a 10% vignette from center to edge.
This is a synthetic image. The light fall-off causes the bright cross in the 2D FFT.
This image was generated to show the effects of light fall-off in case some data had
a non-uniform light distribution on the sensor. The noise in the
2D FFT image is due to quantization to 14 bits. Note: any gradient will cause the lines
(a short sketch demonstrating this follows the list below).
If there is no gradient, there would be only the high point in the center, which
represents the mean image intensity. The original image needs to be evaluated for gradients
before concluding that filtering is present.
- 2E) image from 2D with noise added (S/N = 10).
- 2F) Canon 7D2 RawDigger-extracted green pixels, linear 14-bit, unscaled.
FFT ratio: 0.989. Estimated impact on gain and full well capacity: 3% too high (basically, this
is the likely noise level in the determination, so little if any filtering is being done).
- 2G) Nikon D810a RawDigger-extracted green pixels, linear 14-bit, unscaled.
FFT ratio: 0.974. Estimated impact on gain and full well capacity: 10% too high.
- 2H) Nikon D800 RawDigger-extracted green pixels, linear 14-bit, unscaled.
FFT ratio: 0.975. Estimated impact on gain and full well capacity: 10% too high.
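To illustrate the point made for image 2D, here is a small Python sketch with illustrative values (not the actual image 2D): a quantized flat field with a 10% radial fall-off concentrates FFT energy along the horizontal and vertical axes (the bright cross), while a uniform flat field with only Gaussian noise does not.

import numpy as np

n = 200
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:n, 0:n]

# Synthetic flat field with a ~10% radial light fall-off, center to edge,
# quantized to integers with no other noise (loosely modeled on image 2D).
r = np.hypot(yy - n / 2, xx - n / 2) / (n / 2)
vignetted = np.round(10000.0 * (1.0 - 0.10 * r))

# Uniform flat field with Gaussian noise only and no gradient (as in 1A/2A).
noisy_flat = rng.normal(10000.0, 1000.0, (n, n))

def axis_to_diagonal(img, exclude=6):
    # Mean |FFT| along the central row and column (the cross) versus along
    # the diagonals, excluding a small region around the DC spike.
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    c = n // 2
    keep = np.abs(np.arange(n) - c) > exclude
    axis = np.concatenate([mag[c, keep], mag[keep, c]]).mean()
    diag = np.concatenate([np.diagonal(mag)[keep],
                           np.diagonal(np.fliplr(mag))[keep]]).mean()
    return axis / diag

print("vignetted, noiseless: ", round(axis_to_diagonal(vignetted), 1))
print("uniform with noise:   ", round(axis_to_diagonal(noisy_flat), 1))

The first ratio comes out well above 1 (the cross), the second close to 1, consistent with the note above that gradients, not filtering, produce the lines.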
Raw data for the camera data in set 2:
Canon 7D Mark 2 raw data for the green channel (10.1 MBytes)
Nikon D800e raw data for the green channel (18.2 MBytes)
Nikon D810a raw data for the green channel (18.2 MBytes)
Procedure for FFT Analysis of Camera Raw Data
- 1) Extract the linear data from a camera raw file with no scaling or Bayer demosaicking.
- 2) Separate the raw data pixels into the separate color channels, one image for each color, e.g.
red, green, blue, green2. For the camera raw data images in set 2, I used RawDigger
and saved the images as 16-bit tiffs.
- 3) Crop the central 200 x 200 pixels from each color image.
- 4) Compute the 2D FFT image of the 200 x 200 pixel crop for each color channel.
- 5) Compute the FFT ratio as previously discussed. Compare the result with known FFT ratios
from filtered images to determine plausible filtering levels, if any (a sketch of these steps
follows the list).
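Here is a minimal Python sketch of steps 3 through 5, starting from one color channel that has already been extracted and saved as a 16-bit linear tiff (the file name is a placeholder, and fft_ratio() is the function sketched earlier in this article):

import numpy as np
from PIL import Image   # any reader that handles 16-bit tiffs will do

def fft_ratio_from_tiff(path, crop=200):
    # Step 3: crop the central 200 x 200 pixels of one extracted color channel.
    channel = np.asarray(Image.open(path), dtype=np.float64)
    ny, nx = channel.shape
    y0, x0 = (ny - crop) // 2, (nx - crop) // 2
    center = channel[y0:y0 + crop, x0:x0 + crop]
    # Steps 4 and 5: 2D FFT and FFT ratio, using the fft_ratio() sketch above.
    return fft_ratio(center)

# Example with a placeholder file name:
# print(f"FFT ratio: {fft_ratio_from_tiff('green_channel.tif'):.3f}")

A result near 1.0 indicates no detectable filtering; lower values suggest filtering and can be compared against ratios measured on known filtered images, as in set 1.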
Conclusions
Say a camera manufacturer was filtering the raw data, and say the filtering
produced excellent images, which is the goal of camera manufacturers.
Manufacturers do not publish quantum efficiency, full well capacity, etc., so there is
no cheating here. Reviewers are the ones deriving such sensor parameters
and making camera scores. If the reviewers do not detect the filtering,
then the derived sensor parameters will be off. If, for example, you were
looking for cameras with high full well capacity and the filtering was
not detected, the derived parameters would be off and you would be comparing
differing filtering levels (which you can also apply in post processing),
not true sensor performance.
First published January 20, 2017.
Last updated January 21, 2017.