
**OxHACK 2014**

Last weekend I went to my first hackathon - OxHACK. I can honestly say that I have not been surrounded by so many computer scientists in a long time. I spent some 30-ish pleasant hacking hours in the company of three kind students.

For the given 24 hours of coding time we decided to create a tool which searches through academic papers in *.pdf format and extracts their introduction and conclusion sections. After extracting this data as ASCII text, some natural language processing algorithms are applied to compress the sections into just a few sentences, giving a smart summary of the work. We used an existing PDF-to-text converter written in python ("texttopdf"), some shell/sed processing (regexp deletion) to clean up the converted ASCII text, and the python Alchemy library for the natural language section compression. The engine was nicely wrapped into a webpage using Flask, and we also managed to use Mendeley's API to fetch pdf papers for processing from a Mendeley account.
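To give a flavour of the pipeline, here is a minimal python sketch of the cleanup and summarisation steps, assuming the PDF has already been converted to plain text; the regexes and the naive frequency-based sentence scoring below stand in for the sed scripts and the Alchemy calls we actually used:

```python
import re
from collections import Counter

def clean(text):
    # strip artefacts left over from the PDF-to-text conversion
    text = re.sub(r"-\n", "", text)        # re-join hyphenated words
    text = re.sub(r"\n\d+\n", "\n", text)  # drop lone page numbers
    return re.sub(r"\s+", " ", text).strip()

def summarise(text, n_sentences=2):
    # naive extractive summary: score sentences by total word frequency
    sentences = re.split(r"(?<=[.!?])\s+", text)
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    scored = sorted(sentences,
                    key=lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())),
                    reverse=True)
    chosen = set(scored[:n_sentences])
    # keep the chosen sentences in their original order
    return " ".join(s for s in sentences if s in chosen)
```

This is obviously far cruder than a proper NLP summariser, but it captures the shape of the engine: convert, clean, score, keep the top sentences.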

That is a short description of the project. It should soon be hosted on easyskim.co.uk, and we also plan on developing the engine further in our spare time, if any...

Pictured from left to right: Josh, Rebecca, me and Keller.

Here is a picture of the team; we also made it into the top 10 projects at this hackathon! This is the place to say it once again: it was a pleasure to work with you guys!

**TDI CCD and TDI CMOS Signal-to-Noise Ratio and Dynamic Range**

I was skimming through this paper from IEEE Transactions on VLSI Systems. In Section II A, the authors give a brief introduction to TDI CMOS image sensors and the basic SNR and DR dependence on the number of TDI stages $N$. What struck me was their claim that the dynamic range of a CMOS TDI sensor decreases with the number of stages. While this holds for a CCD TDI image sensor, things stand completely differently for a CMOS imager. This incorrect statement is also my main motivation for this post.

**1. SNR and Dynamic Range (DR) for a TDI CCD Image Sensor**

In a CCD TDI imager, the pixel has to have a large full well capacity in order to hold the final accumulated output signal. Therefore, the CCD's full well has to be maximized so as to increase the dynamic range and avoid saturation/blooming problems.

Intuitively, the signal-to-noise ratio would be: $$SNR = 20log\Bigg(\frac{\sqrt{N}(i_{ph}t_{s})}{\sigma_{total}}\Bigg)$$ and the dynamic range of a CCD TDI sensor would be: $$ DR = 20log\Bigg(\frac{q_{fwmax}-(N i_{dc}t_{s})}{\sqrt{N}\sigma_{total}}\Bigg) $$ The DR is therefore limited by the charge handling capacity of the output CCD channel, which means that with a CCD we would like to have as large a full well as possible. One might observe that in the case of CCDs the dynamic range is indeed reduced as the number of stages increases, due to the noise added at each stage. This, however, is not the case with a CMOS image sensor, as we shall see shortly.

**2. SNR and Dynamic Range (DR) for a TDI CMOS Image Sensor**

In a TDI CMOS sensor, unlike in a CCD, the pixel only has to be designed to handle the full well capacity needed for the expected integration time of a single line.

As the integration time is low, the FW capacity can be low and the pixel can be constructed with a high conversion gain. This also reduces noise (maximizes signal swing) and in some sense relaxes the readout noise requirement, bar the fact that TDI in CMOS imposes tough (very low) noise requirements on the readout, as the readout noise adds with the square root of the number of time delay integration stages.

The signal-to-noise ratio as well as the dynamic range for a CMOS TDI sensor would therefore be: $$ SNR = 20log\Bigg(\frac{\sqrt{N}(i_{ph}t_{s})}{\sigma_{total}}\Bigg) $$ $$ DR = 20log\Bigg(\frac{N q_{fwmax}-(N i_{dc}t_{s})}{\sqrt{N}\sigma_{total}}\Bigg) $$ The total full well capacity equals the sum of the separate pixel full wells, $FW_{tot} = \sum\limits_{i=1}^{N} FW_{pix_i}$, so the total full well is practically limited only by the accumulators, which can be implemented in either the analog or the digital domain. With digital accumulators the total FW has, in general, no hard limit.
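To see the difference numerically, here is a quick sketch of the two DR formulas above with made-up electron counts; the values of $q_{fwmax}$, $i_{dc}t_{s}$ and $\sigma_{total}$ are illustrative assumptions, not numbers from the paper:

```python
import math

def dr_db(q_fw_total, n, i_dc_ts, sigma_total):
    # DR = 20*log10( (FW_total - N*i_dc*t_s) / (sqrt(N)*sigma_total) )
    return 20 * math.log10((q_fw_total - n * i_dc_ts) / (math.sqrt(n) * sigma_total))

q_fw, i_dc_ts, sigma = 20000.0, 10.0, 5.0   # electrons, assumed values

for n in (1, 16, 64):
    ccd  = dr_db(q_fw, n, i_dc_ts, sigma)      # fixed output-channel full well
    cmos = dr_db(n * q_fw, n, i_dc_ts, sigma)  # full well grows with the stage count
    print(n, round(ccd, 1), round(cmos, 1))
```

With these numbers the CCD DR falls from about 72 dB at a single stage to about 54 dB at 64 stages, while the CMOS DR with scaled accumulators climbs past 90 dB, which is exactly the point of this post.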

**MTF and TDI CIS**

I have been recently reading about TDI imagers and their fundamental limitations. A TDI (time delay integration) image sensor effectively performs multiple exposures of the same moving object and accumulates them later on. The aim is to increase the time available for integration of the same object spot and effectively boost the sensor's sensitivity and/or frame rate. Such sensors are typically realized in a large aspect ratio format, normally as line scanners. More about TDIs here.

One very specific and well-known issue with TDI imagers is their poor contrast performance. It comes from the fact that when a moving object is captured by static, orthogonally placed pixels, as e.g. in a rolling shutter CMOS image sensor, one cannot capture the same object spots with the same pixels. Here is a diagram of a four-line rolling shutter sensor.

In other words, when the rolling shutter is triggered (e.g. left to right), the effective sampling aperture of the sensor depends on the sampling period of the adjacent pixels and the pixel line time. The sampling aperture therefore affects the dynamic modulation transfer function of the image sensor. Lepage et al. have an excellent publication in IEEE Transactions on Electron Devices on this problem.

I played numerically with the formulas to see how the number of stages in a TDI sensor affects its dynamic MTF. The modulation transfer function can be computed by performing a 1D Fourier transform of the sensor's spread function, in this case due to the finite discrete sampling aperture: $$MTF_{discrete} = \frac{\sin(\frac{1}{2}f_{nyq}\pi\frac{t_{int}}{t_{line}})}{\frac{1}{2}f_{nyq}\pi\frac{t_{int}}{t_{line}}}$$ One should also note that the sensor's total MTF is further affected by the pixel aperture, crosstalk and alignment, and is the product of all these factors. Below is a plot of the dynamic MTF versus the normalised spatial frequency for different effective sampling apertures.

Note the aliasing peaks beyond the Nyquist frequency, indicated with a vertical blue line. We can see that at its best (for a standard orthogonal rolling shutter scanner), with a single accumulation the MTF at $f_{n}/2$ is 0.64. As the total MTF of the imager also depends on the pixel's MTF, one can achieve a better total MTF by tweaking the pixel aperture design, e.g. by adding some light shields, etc... This however degrades the pixel QE and therefore costs SNR.
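The 0.64 figure is easy to reproduce; here is a small sketch of the aperture MTF formula above, with the spatial frequency normalised to the Nyquist frequency and the $t_{int}/t_{line}$ ratio as a parameter:

```python
import math

def mtf_aperture(f_over_fnyq, t_int_over_t_line=1.0):
    # sin(x)/x with x = (pi/2) * (f / f_nyq) * (t_int / t_line)
    x = 0.5 * math.pi * f_over_fnyq * t_int_over_t_line
    return 1.0 if x == 0 else math.sin(x) / x

# single accumulation (t_int = t_line) at the Nyquist frequency
print(round(mtf_aperture(1.0), 2))   # -> 0.64, i.e. 2/pi
```

Increasing the effective aperture (larger $t_{int}/t_{line}$) pushes the first MTF null down in frequency, which is the contrast penalty the plot shows.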

**Bode plots with an oscilloscope**

Our group needed a good microphone for our weekly conference calls, as part of us are in Portugal and the rest in Glasgow. The >100 GBP camera microphone which we had did not provide satisfactory results at all. This is why I decided to build a microphone preamplifier myself, which would supposedly perform better and replace the camera mic. The design and soldering process, however, provoked an idea for a more esoteric measurement setup.

I saw the idea originally implemented by Dave Jones on his EEVBlog. I had a look at his video again, but a 25-minute ranting video seemed way too long for me to watch, so I decided to squeeze the general idea into five minutes. Here is a basic explanation and demonstration of how to do a Bode plot with a scope.

The circuit I used for the test is a non-inverting amplifier, realized with an opamp (TL072) with frequency-dependent feedback, effectively forming a high-pass filter. Practically, we need some sort of DC-reject filter so as to avoid opamp saturation. Here is my circuit:

And two screenshots of a linear and a log sweep:

There is some noise in the system, and performing averaging to filter it out is not an option here, as one can never achieve the same phase on every sweep/acquisition. In my case, turning on averaging just distorted the picture even more.

It would be fun to find out how to do a phase plot on the scope, so that a full picture of the transfer characteristics of our circuit can be acquired.
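For reference, the magnitude and phase one should expect from such a first-order high-pass stage are easy to compute; the R, C and passband gain below are assumed example values, not the ones from my board:

```python
import cmath
import math

def highpass_response(f, r=10e3, c=100e-9, a0=10.0):
    # H(jw) = a0 * jwRC / (1 + jwRC): first-order high pass with flat gain a0
    jw = 1j * 2 * math.pi * f
    h = a0 * (jw * r * c) / (1 + jw * r * c)
    return 20 * math.log10(abs(h)), math.degrees(cmath.phase(h))

fc = 1 / (2 * math.pi * 10e3 * 100e-9)   # corner frequency, about 159 Hz
mag, ph = highpass_response(fc)
print(round(mag, 1), round(ph, 1))       # ~3 dB below a0, +45 degrees
```

At the corner the magnitude sits 3 dB below the passband gain with a +45 degree phase lead, which is what a phase plot on the scope should reproduce.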

**Simple motion detection with OpenCV**

As a continuation of my previous post, here is a simple algorithm for motion detection in a live video stream using OpenCV. It basically follows a few simple steps:

1. Subtract frames and generate a binary image

```c
/* Subtract the two frames and threshold the result into a binary image */
cvAbsDiff( frameTime1, frameTime2, frameForeground );
cvShowImage( "AbsDiff", frameForeground );          /* AbsDiff window */
cvThreshold( frameForeground, frameForeground,
             20,                 /* threshold */
             255,                /* saturate up to 255 */
             CV_THRESH_BINARY ); /* alternatives: CV_THRESH_BINARY_INV,
                                    CV_THRESH_TRUNC, CV_THRESH_TOZERO,
                                    CV_THRESH_TOZERO_INV */
cvShowImage( "AbsDiffThreshold", frameForeground ); /* AbsDiffThreshold window */
```

The threshold, as one may guess, can be used as a primitive noise suppression parameter.

2. Run through the binary image and accumulate events.

```c
/* Accumulate all events (white pixels) in the binary difference image */
int row, col;
uchar sig1, sig2;
unsigned long int rowsum[frameForeground->height], totalsum = 0;

for ( row = 0; row < frameForeground->height; row++ )
{
    rowsum[row] = 0;
    for ( col = 0; col < frameForeground->width; col++ )
    {
        /* two interleaved bytes per column in this image layout */
        sig1 = CV_IMAGE_ELEM( frameForeground, uchar, row, col * 2 );
        sig2 = CV_IMAGE_ELEM( frameForeground, uchar, row, col * 2 + 1 );
        rowsum[row] += (sig1 + sig2);
    }
    totalsum += rowsum[row];
}
printf( "Totalsum: %20lu \n", totalsum );

if ( totalsum >= 80000 ) /* motion detection threshold */
{
    /* draw a green rectangle on the original frame */
    cvRectangle( image, cvPoint(10, 10), cvPoint(310, 230),
                 cvScalar(0, 255, 0, 0), 1, 8, 0 );
}
cvShowImage( "Camera", image ); /* original image with/without the rectangle */
```

After all events are accumulated, a comparison with a motion detection coefficient is made. Once the total event count exceeds the detection coefficient, a green rectangle is embedded on the original frame.

Here is my dead simple example, which should work straight out of the box.
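For readers who just want to play with the logic, here are the same two steps condensed into a dependency-free python sketch on toy 8-bit frames (lists of lists); the thresholds are the ones from the snippets above:

```python
def detect_motion(frame1, frame2, pix_thresh=20, motion_thresh=80000):
    # step 1: absolute difference + binary threshold
    binary = [[255 if abs(a - b) > pix_thresh else 0
               for a, b in zip(r1, r2)]
              for r1, r2 in zip(frame1, frame2)]
    # step 2: accumulate all events and compare with the detection coefficient
    totalsum = sum(sum(row) for row in binary)
    return totalsum >= motion_thresh

# two 320x240 frames: identical, and with a brightened 20-line patch
static = [[10] * 320 for _ in range(240)]
moved = [[10] * 320 for _ in range(240)]
for row in moved[100:120]:
    row[:] = [200] * 320

print(detect_motion(static, static), detect_motion(static, moved))  # False True
```

Everything OpenCV-specific (capture, display, the rectangle overlay) is left out here; only the detection decision itself is reproduced.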