

Point Spread Function, Optical and Modulation Transfer Functions

In solid-state imaging, the obtained image contrast and spatial resolution depend not only on the electrical circuit performance with respect to crosstalk and linearity, but to a large extent on the spatial organization of the light-sensing pixels. An image sensor can essentially be viewed as a non-ideal "electrical" transducer that distorts the light information fed through it. Even if we exclude all electrical complexities, an imaging system's performance can be extremely complicated to evaluate due to the numerous second-order effects accumulated along the optical signal path. This implies that we need to restrict the degrees of freedom in our analyses and apply a divide-and-conquer approach suited to the particular application. This post aims to give an introduction to the Modulation Transfer Function and its implications for image sensor optical array design.

In this respect, two fundamental properties of imaging systems can be identified:

1. Linearity — just as for any other black-box system, the output corresponding to a sum of inputs should be equal to the sum of the outputs produced by each input processed individually.

2. Invariance — the projected image in the spatial domain remains the same, apart from a shift, when the imaged object is moved to another location in "space". Both properties are exercised in the short sketch below.
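
To make the two properties concrete, here is a minimal numerical sketch (Python with NumPy; the three-tap smearing kernel and the periodic boundaries are illustrative assumptions, not a model of any particular sensor) treating the system as a convolution with its PSF:

import numpy as np

# Hypothetical 1-D imaging system modelled as a circular convolution with
# a PSF kernel; periodic boundaries keep the shift-invariance check exact.
rng = np.random.default_rng(0)
n = 64
psf = np.zeros(n)
psf[:3] = [0.25, 0.5, 0.25]                 # illustrative smearing kernel

def image(scene):
    # project a 1-D "scene" through the system: convolution with the PSF
    return np.real(np.fft.ifft(np.fft.fft(scene) * np.fft.fft(psf)))

a, b = rng.random(n), rng.random(n)

# 1. Linearity: imaging the sum equals the sum of the individual images.
assert np.allclose(image(a + b), image(a) + image(b))

# 2. Invariance: moving the scene only moves the projected image.
assert np.allclose(image(np.roll(a, 5)), np.roll(image(a), 5))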

Linearity is usually taken care of by accurate readout electronics and photodetector design. Invariance, or spatial resolution, however, greatly depends on the chosen geometrical shape of the pixels and their arrangement in space. A common benchmark parameter for imaging systems is their Modulation Transfer Function (MTF), which is to a large extent linked with the arrangement of the array elements. Before moving on to the definition and implications of the MTF, we must first have a look at what a Point Spread Function (PSF) in optics is.

Imagine you have a point light source (e.g. a torch) with an infinitely small aperture, which you point at the image sensor in your mobile phone camera. What you would expect to see on the display is the same point at the same location where you pointed the beam:

Beam with an infinitely small aperture projected on the image sensor plane

Real optical systems, however, suffer from optical imperfections, which smear the energy of the infinitely small beam around its ideal location, yielding a loss of sharpness. The Point Spread Function provides a measure of the imaging system's smearing of a single, infinitely small, physical point in the imaging plane. In the case of a real pixel in an image sensor, the PSF can be defined as a function of the pixel's effective aperture:

Effective spatial aperture definition of a pixel/photodetector

Within the photodiode region the PSF is equal to some constant (modulated by the quantum efficiency of the photodiode), and in the region where the pixel is covered with metal we can assume the PSF is zero (strictly speaking, metal layers in modern integrated circuits are so thin that some photons still make it through). Formally:

$$ S(x) = \begin{cases} s_{o} & x_{o} - L/2 \le x \le x_{o} + L/2 \\ 0 & \text{otherwise} \end{cases} $$

In simple words, this is a boxcar function.

Point Spread Function for the above pixel is similar to a boxcar function
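
As a tiny sketch of that definition (Python/NumPy again; the sensitivity s0, aperture width L and centre x0 are arbitrary illustrative values), the boxcar PSF can be written directly:

import numpy as np

# Boxcar PSF: constant sensitivity s0 over the effective aperture of
# width L centred at x0, and zero under the metal shield.
s0, L, x0 = 1.0, 5e-6, 0.0                  # illustrative 5 um aperture

def psf(x):
    return np.where(np.abs(x - x0) <= L / 2, s0, 0.0)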

By knowing the Point Spread Function of the array, we can estimate the global Optical Transfer Function (OTF) of the whole imaging system with a simple mathematical exercise. Note that the PSF is a spatial-domain function. Just as in electronics, where frequency-domain analyses complement the time domain and often prove extremely revealing, a 2D image can be examined in both the spatial and the spatial-frequency domain. The Optical and Modulation Transfer Functions are most usefully represented in the frequency domain. But what exactly are the OTF and MTF? Let's have a look at the physical effects they describe by examining the following pattern:

If we feed in an ideal optical pattern like the one shown at the top, then, due to the limited aperture of a single photodetector (its opening, or fill factor), the image reconstructed at the output will be smeared out, just as in the torch test with an infinitely small aperture. Depending on the effective aperture size (pixel fill factor) as well as the geometrical arrangement of the individual pixels, the smearing has a different magnitude at different spatial scales. To simplify the optical system's evaluation, this smearing can be expressed in the frequency domain, which is exactly what the OTF and MTF do. An imaging system can be viewed as a low-pass filter in the frequency domain, so the OTF and MTF represent the system's amplitude-frequency characteristic.

Similar to electronics, the OTF can be derived by computing the Fourier transform of the Point Spread Function, which is the equivalent of the impulse response in electronic linear time-invariant systems:

$$ S(f) = \int_{-\infty}^{\infty} S(x)\, e^{-j 2 \pi f x}~dx $$

Substituting the boxcar PSF, we get:

$$ S(f) = s_{o} \int_{x_{o}-L/2}^{x_{o}+L/2} e^{-j 2 \pi f x}~dx \propto \frac{\sin(\pi f L)}{\pi f L} $$

Simple overview of spatial vs frequency domain in optical systems
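
As a quick numerical cross-check of this result (a sketch under the same illustrative 5 um aperture; note that np.sinc(u) computes sin(pi*u)/(pi*u)), a discretized Fourier integral of the boxcar PSF reproduces the sinc shape:

import numpy as np

# Discretize the boxcar PSF and evaluate its Fourier integral directly.
s0, L, x0 = 1.0, 5e-6, 0.0                  # illustrative 5 um aperture
x = np.linspace(-4 * L, 4 * L, 2**14)
dx = x[1] - x[0]
S_x = np.where(np.abs(x - x0) <= L / 2, s0, 0.0)

f = np.linspace(-3 / L, 3 / L, 601)         # spatial frequency [cycles/m]
S_f = np.array([np.sum(S_x * np.exp(-2j * np.pi * fi * x)) * dx for fi in f])

# Compare against s0 * L * sin(pi f L)/(pi f L); the tolerance only
# absorbs the discretization error of the Riemann sum.
analytic = s0 * L * np.sinc(f * L)
assert np.allclose(S_f.real, analytic, atol=5e-3 * s0 * L)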

What is the difference between the OTF and the MTF? The OTF, being a Fourier transform, has real and imaginary (and possibly negative) parts, and thus also carries phase information. The MTF is defined as the ratio of the output modulation to the input modulation as a function of the spatial frequency, which is typically expressed in cycles (line pairs) per mm. To normalize it, the MTF is derived from the absolute value of the OTF divided by the OTF at zero frequency (or DC, in analogy with electronics):

$$ MTF(f) = \frac{|S(f)|}{|S(f=0)|} $$

Hence, the MTF of our pixel is the absolute value of a sinc function:

$$ MTF(f) = \left| \frac{\sin(\pi f L)}{\pi f L} \right| $$
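
To get a feel for the numbers, here is a short sketch (same illustrative 5 um aperture, expressed in mm so that the frequencies come out in cycles/mm) evaluating the aperture MTF at a few spatial frequencies:

import numpy as np

# Aperture MTF of the boxcar pixel, MTF(f) = |sin(pi f L)/(pi f L)|.
L = 5e-3                                    # illustrative aperture: 5 um in mm

def mtf_aperture(f):
    # np.sinc(u) computes sin(pi u)/(pi u)
    return np.abs(np.sinc(f * L))

for f in (0.0, 25.0, 50.0, 100.0, 200.0):   # spatial frequency [cycles/mm]
    print(f"{f:6.1f} cycles/mm -> MTF = {mtf_aperture(f):.3f}")

This prints 1.000, 0.974, 0.900, 0.637 and 0.000: the larger the aperture L, the faster the low-pass roll-off, with the first null at f = 1/L.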

Various pixel array configurations exist, and they directly affect the MTF contribution coming from the sensor's aperture. It is very important to note that the aperture MTF is neither the only source of degradation nor a global MTF parameter. When imaging moving objects, e.g. in Time-Delay-Integration (TDI) mode, a discrete (temporal) MTF term caused by object synchronization and misalignment also appears. In addition, the lens adds to the total camera system's MTF. The good news is that, as the MTF is normalized and dimensionless, we can simply multiply the MTFs of all sources of degradation to obtain the global MTF. Again, just as in electronics, the order in which the MTF degradation occurs along the chain matters: identically to electrical amplifiers and the Friis formula, we should, where possible, place the optical part with the highest MTF first in the signal chain. Unfortunately, swapping parts along an optical signal chain is a rather more difficult task than it is with amplifier chains.
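
As a sketch of that multiplication (the Gaussian lens roll-off and the TDI smear term below are placeholder models chosen purely for illustration):

import numpy as np

# Total system MTF as the product of the individual, normalized MTFs.
f = np.linspace(0.0, 200.0, 201)            # spatial frequency [cycles/mm]
L = 5e-3                                    # pixel aperture [mm], as before

mtf_pixel = np.abs(np.sinc(f * L))          # aperture MTF derived above
mtf_lens = np.exp(-(f / 300.0) ** 2)        # placeholder lens roll-off
mtf_tdi = np.abs(np.sinc(f * 1e-3))         # placeholder TDI misalignment smear

mtf_total = mtf_pixel * mtf_lens * mtf_tdi  # dimensionless, so we just multiply
print(mtf_total[100])                       # system MTF at 100 cycles/mm, ~0.56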


Date: Sun Aug 11 10:34:27 CET 2016
