
An animation of a two-stage weighted CDAC

Lately the long-forgotten GIF format has been resurrected by social media, and the popular website 9gag in particular. Well, this afternoon I thought: hmm, there are so many idiotic GIFs online, why not make something electronics-related? Voilà, here it is:

A two-stage weighted capacitor network, often used for D/A and A/D conversion.

My initial intention was to try and draw moving electrons (charge) on the GIF, but that somehow meant a lot more work than I initially thought. The animation here linearly increments the bit switches, a bit boring I admit. But hey, I don't need to draw electrons moving; the principle is quite simple (and clever). With the current schematic/drawing the idea is very simple (unlike some more sophisticated charge-redistribution DACs). We basically form a capacitive divider. If we have a look at case 1 (LSB switch connected to Vref), then we have:

Capacitive division formed between the LSB, MSB part and the split capacitor.

We can then simply calculate the output voltage of the DAC with only the LSB switch connected to $V_{ref}$ as $$V_{out} = 2V_{ref}\frac{\frac{C}{8}}{2C}$$ This split-capacitance technique basically allows for a reduction of the total capacitor size of the whole DAC, as the "total weighting factor" of e.g. a traditional CDAC is here split into two parts, which, if you do the simple maths, reduces cap size/area. Here is a link to the original paper from 1979.
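As a quick numeric sanity check of the divider above (a hypothetical Python sketch, not part of the original post), with everything normalized to the unit capacitor and $V_{ref} = 1$ V:

```python
# Hypothetical check of the LSB-only case of the split-capacitor DAC:
# Vout = 2*Vref * (C/8) / (2C), with C and Vref normalized to 1.
V_ref = 1.0  # assumed reference voltage (normalized)
C = 1.0      # unit capacitance (normalized)

V_out = 2 * V_ref * (C / 8) / (2 * C)
print(V_out)  # 0.125, i.e. Vref/8
```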

Sadly with this my sunny day off is over!

Date:Sun Jun 06 18:37:00 CEST 2014


Some thoughts on 1/f noise.

Now that it's spring here in Norway, the nights are getting shorter and shorter, which on the other hand means that one wakes up from time to time at 03:00-04:00 in the early morning due to the rising sun and, in particular, the birds singing around the forest.

One morning last week my sleep was disturbed by a few ravens and crows measuring powers in the very prestigious annual contest "Ugly crow", organized by the local animal union committee, headed by the chief moose Mr. Elg. I noticed that while the ravens were taking part in the contest, all the other (smaller) birds continued to "applaud" the whole time the ravens were fighting. While half-asleep, the term 1/f noise struck my mind.

Of course it is widely known that many natural phenomena follow a 1/f distribution, or in simple words: the higher the power, the less frequent the event, and vice versa. Half-asleep, I did try to think about and correlate the birds' songs and the species themselves. A few days later I decided to record some of the bird songs, perform a DFT on various recorded samples, and then average the frequency plots to see whether there is any such 1/f dependency. The results I show below might not be a full success due to various non-idealities and limited sample sets, but at least they hint at an already known fact about 1/f and possibly show that a number of things can go wrong even with a time-frequency transformation as simple as the DFT I used.

There are a number of papers about 1/f in human cognition, if interested, I suggest looking at:

"1/f noise in music and speech", Richard Voss, John Clarke, Nature vol. 258, November 27, 1975

"1/f noise in human cognition", D. Gilden, T. Thornton, Science vol. 267, March 24, 1995

"1/f noise a pedagogical review", Eduardo Milotti

To provide a picture of the occurrence of 1/f in many "systems", I dare to provide a reprint from "1/f noise", Lawrence M. Ward and Priscilla E. Greenwood (2007), Scholarpedia.

Examples of 1/f noises occurring in many systems. Source: Scholarpedia

This figure is staggering!

Let's start by hearing the bird sample I used.

You might notice that the electronic hum from my computer's microphone preamp is of the same order of magnitude as the birds' songs. This makes these samples very difficult to analyse, as we are interested only in the bird content and any other noise (electronic, numerical, etc.) should be low. Samples with a somewhat higher dynamic range would have given a better start.

There are a number of ways to perform this measurement, and some might argue: is it the various bird species' songs combined, from a listener's point of hearing, that should all count towards the 1/f measurement, or is it a single bird that should be isolated for the measurement? In the current case I effectively isolate a set of birds (the bigger species), as I filter out the high frequencies and focus my DFT only on the range up to 60 Hz. Here is a block diagram of the signal chain for my analysis:

An overview of the audio signal chain.

After reading the input sample in WAV format, a 10th-order low-pass Butterworth filter with a cutoff frequency of 10 kHz is applied. Then, after squaring the samples, the signal is fed through a second filter of the same type with a cutoff of 60 Hz. After this extraction (with some non-idealities from the filters), the sample information up to 60 Hz is represented in the f-domain by a DFT.
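For illustration, here is a minimal Python sketch of that signal chain. The one-pole low-pass filters are simplified stand-ins for the 10th-order Butterworth filters actually used, so this is only a rough caricature of the Octave analysis:

```python
import math

def one_pole_lowpass(x, fc, fs):
    """Crude one-pole IIR low-pass; a stand-in for the 10th-order
    Butterworth filters of the real analysis (an assumption made
    to keep this sketch dependency-free)."""
    a = math.exp(-2.0 * math.pi * fc / fs)
    y, state = [], 0.0
    for sample in x:
        state = (1.0 - a) * sample + a * state
        y.append(state)
    return y

fs = 44100.0
x = [math.sin(2 * math.pi * 440.0 * n / fs) for n in range(4096)]  # test tone

# The chain from the post: LPF @ 10 kHz -> squaring -> LPF @ 60 Hz -> DFT
stage1 = one_pole_lowpass(x, 10e3, fs)
squared = [v * v for v in stage1]
envelope = one_pole_lowpass(squared, 60.0, fs)
print(envelope[-1])  # roughly 0.5: the power of the full-scale tone
```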

Now the question arises: how large a window should one have to get accurate enough DFT information for very low frequencies, 0.1 Hz - 10 Hz? And possibly another question: how long a sample should one have in order to obtain good (sufficiently oversampled) 1/f plots? We know that the frequency resolution depends on the relationship between the input signal sampling rate and the DFT window length. In the current case we have a sampling frequency of 44.1 kHz, so if we collect 1024 samples (a pretty standard number) for the DFT we will have a frequency bin resolution of: $$\Delta f = \frac{f_s}{N} = \frac{44100}{1024} \approx 43\,\mathrm{Hz}$$

This resolution is clearly not enough, so to get a 0.1 Hz resolution we need about: $$N = \frac{f_s}{\Delta f} = \frac{44100}{0.1} = 441\,000\ \mathrm{samples}$$

We can see that for such low frequencies we need a significantly sized DFT window. This in turn implies that our "bird song" sample must also be quite long to get meaningful averaged 1/f plots. For instance, 441 ksamples correspond to about 10 seconds of sample time.
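The window-size arithmetic above can be reproduced in a few lines (a trivial Python sketch of the same numbers):

```python
fs = 44100.0         # sampling rate of the recordings

# Bin resolution of a standard 1024-point DFT
df_1024 = fs / 1024
print(df_1024)       # ~43.07 Hz per bin, far too coarse

# Window length needed for a 0.1 Hz bin resolution
n_needed = fs / 0.1
print(n_needed)      # 441000 samples, about 10 s of audio

# With the rule-of-thumb 5x margin
print(5 * n_needed)  # 2.205 Msamples
```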

Now, these birds have a tendency to make quite significant pauses between their squawks, sometimes even over a few minutes. If we also follow the basic engineering rule of thumb that, in order to get reasonable data, the window should be about 5 times the minimum size, we get 2.205 Msamples. Taking such a huge sampling window would greatly degrade the temporal resolution of the analysis. We still don't care about frequencies beyond 40-60 Hz, but still. For example, if a crow has squawked 15 times during the first 25 seconds of our sample window and 20 times during the next 25 seconds, we would still see peaks for both frequencies. The temporal resolution becomes even worse for smaller birds, which squawk even more frequently.

All these simple facts make this 1/f analysis quite subjective with the methods used: strictly speaking, only a few minutes of samples with low dynamic range, and a trade-off between temporal and frequency resolution. Nevertheless, here are some plots of a few samples, not only "bird" content.

Some bird samples.

Some bird 1/f approximations. alpha ~= 1.3

A news emission of the Bulgarian National Radio on 17 May 2014

A news emission 1/f approximations. alpha ~= 1.9

Pink Floyd's Comfortably Numb guitar solo from the 1994 P.U.L.S.E. (The Division Bell tour) concert in Earls Court, London.

Pink Floyd's Comfortably Numb guitar solo 1/f approximations. alpha ~= 1.5

Call it spectral leakage, a poor temporal/frequency resolution trade-off, non-integer sampling, the poor SNR of the audio samples, or filter distortion due to passband ripple: the plots somehow do not look very clean. One thing is certain: the dependency is $1/f^\alpha$, and $\alpha$ measures roughly 1.3 for the birds' songs, 1.9 for the news emission and 1.5 for the guitar solo.
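For the curious, an exponent like the $\alpha$ above can be extracted with a least-squares line fit on the log-log spectrum. A minimal sketch (my own, not the original Octave code):

```python
import math

def fit_alpha(freqs, power):
    """Least-squares slope of log10(P) versus log10(f);
    the 1/f^alpha exponent is minus that slope."""
    lx = [math.log10(f) for f in freqs]
    ly = [math.log10(p) for p in power]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / \
            sum((x - mx) ** 2 for x in lx)
    return -slope

# Synthetic, noiseless 1/f^1.3 spectrum between 0.1 and 60 Hz
freqs = [0.1 + 0.1 * k for k in range(600)]
power = [f ** -1.3 for f in freqs]
print(fit_alpha(freqs, power))  # recovers 1.3 exactly for clean data
```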

Even injecting a 50 Hz signal into the samples shows signs of distortion from the ripple in the Butterworth filters, plus non-integer sampling.

50 Hz sine wave fed through the 10th- and 5th-order Butterworth filters

You can find the Octave scripts here.

All of the above was a nice exercise showing that commonly applied analyses and measurements require compromises which are ultimately up to the engineer's judgment. There is no perfectly right or sharply wrong approach in this analog world. As for 1/f, a question arises: is this "law" also applicable to human stereotypes?

Date:Sun May 25 18:56:00 CEST 2014


Teaching analog design in an esoteric fashion.

Occasionally I browse through the pages of some of the Bulgarian academic and research centres focusing on the field of IC design. This week I have been enjoying the pages of Cyril Mechkov, a teacher of analog circuit theory at the Technical University of Sofia. I am more than impressed by his methods of explaining circuits: avoiding the derivation of complex transfer functions and formulas, and instead guiding the students with intuitive explanations and examples related to everyday life.

Apart from Mechkov's circuit-fantasia website, he has also uploaded all his work to Wikibooks, in a book called Circuit Idea. A great idea has struck him: involving the students taking his courses in actively developing the book, as a micro-assignment. I feel this great work somehow needs more attention, which is partly why I am writing about him. Here is a simple illustrative example of his way of explaining things:

A phenomenon as simple as the voltage drop across a resistor is explained by Mechkov in the following way. Imagine a large water tank connected to smaller vessels of the same height. The water tank is full and the far end of the tap is closed. Mechkov's hand-drawn diagram:

Now, if one opens the far end of the pipe, water starts flowing and the pressures decrease gradually, according to basic hydraulic principles.

A very simple analogy can be made with a resistor and the voltage drop across it. Two analogies with voltage drop follow:

And the other way around:

The wikibook is enriched with figures in the same intuitive fashion, accompanied by solid explanations and finally mathematical formulae (offtopic: oh, this fancy way of writing such a simple word), covering the basic circuits in depth.

Come to think of it, I did not find in his records an explanation of the Miller effect. Well, here is my attempt at drawing an intuitive figure for the Miller effect:

At first sight this looks like a rather funny way of explaining it. Our poor single manikin is pulling down, whilst a bunch of other strong guys counteract our single boy. So, the higher the gain (A), or the transconductance (gm) in the case here, the stronger the guys would be, and thus the larger the effective Miller capacitance. Well, one should represent Cgd in another way to get a better picture, but still.
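For reference, the textbook result the figure alludes to: by Miller's theorem, a capacitance $C_{gd}$ bridging the input and the inverting output of a stage with gain $-A$ appears at the input magnified,

```latex
C_{\mathrm{in}} = C_{gd}\,(1 + A), \qquad A \approx g_m R_{\mathrm{out}}
```

so the stronger the "guys" (the gain), the larger the capacitance our manikin appears to be fighting.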

Ah well, if not for educational purposes, this might make a good nerdy t-shirt:

Happy last six hours of the weekend :)

Date:Sun May 11 18:05:00 CEST 2014


Computing and the human brain / Neuromorphic Image Sensors.

Lately a colleague of mine has been working on his master's thesis, which involves the design of a vision sensor inspired by principles found in the human brain. The idea behind his, and in general neuromorphic, vision sensors / neuromorphic computing is brilliant, and at the same time extremely challenging to fully understand and reproduce with existing silicon VLSI technologies.

Vision sensors emulating the human retina.

Inspired by the principle, I'll try to cover the basic idea behind human-retina-emulating sensors in a nutshell. I will also try to give a brief history of computing inspired by the human brain.

OK, so what vision sensors do we use now, in 2014, as a tool for transforming light into digital images? According to various data sources, the dominant technology nowadays is CMOS imagers.

Source: iSupply

Slipping off-track with the chart above, we can stress that probably 99% of all mass-produced machine vision sensors are based on raw image data extraction and processing. While there has been very significant progress in image recognition, and a number of very successful machine vision algorithms have been invented (3D object, scene, texture recognition, etc.), there is still a huge gap between the performance of the aforementioned algorithms and that of even the most primitive biological species, such as insects. One of the key differences is the way the data processing and analysis is done.

An ordinary machine vision system today would capture and process the full number of pixels and frames which a raw-data vision sensor provides. While capturing full-frame data relaxes the complexity of the vision analysis algorithms used, it imposes a tremendous computational and data-transfer bottleneck on the analytical device, in other words the computer. The key feature by which a biological retina stands out from "ordinary" raw-data vision sensors lies in the information capture and transfer between the imager and the processing device. That is, a biological retina triggers on, sends and processes only newly arrived light information, which avoids the computational and data-transfer bottleneck further down the chain. In a nutshell, a bio-inspired vision sensor sends information only for newly triggered pixel events.
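To make the contrast concrete, here is a hypothetical sketch of event-based readout; the function name, event format and threshold are illustrative assumptions, not taken from any real sensor:

```python
# Hypothetical sketch: an event-based, retina-like sensor reports only
# pixels whose intensity changed beyond a threshold, instead of shipping
# every pixel of every frame.
def events(prev_frame, new_frame, threshold=10):
    """Return (x, y, polarity) tuples for pixels that changed enough."""
    out = []
    for y, (row_p, row_n) in enumerate(zip(prev_frame, new_frame)):
        for x, (p, n) in enumerate(zip(row_p, row_n)):
            if abs(n - p) >= threshold:
                out.append((x, y, 1 if n > p else -1))
    return out

prev = [[100, 100], [100, 100]]
new  = [[100, 130], [ 60, 100]]  # only two pixels changed
print(events(prev, new))         # [(1, 0, 1), (0, 1, -1)]
```

Only two events leave the sensor here, instead of four full pixel values per frame; on a megapixel imager watching a mostly static scene, that difference is the whole point.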

Going back to my colleague's vision sensor and the very basic principle of operation of integrate-and-fire neuron pixels: instead of reading out an absolute voltage level, bio-inspired integrate-and-fire neuron pixels generate a trigger event upon a change of light intensity. Here is a very primitive example of such an event-generating (spiking) circuit.

Axon-hillock circuit as described by C.A. Mead in Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley, 1989.

The input current coming from the photodiode pixel is integrated on a capacitor Cm; the integrated voltage on the top plate of Cm increases until it reaches the threshold level of the CMOS inverter-based buffer. When the buffer switches, the output voltage changes to Vdd, switching on the reset transistor. The positive-feedback capacitor Cf forms a capacitive divider with Cm and acts as a pulse sharpener. By controlling the bias voltage on the reset transistor branch one can control the sensitivity of the integrator and the duration and number of the spikes. Here is a link to Mead's original publication.
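A discrete-time caricature of this integrate-and-fire behaviour (normalized units; ideal integration and instantaneous reset assumed, and Mead's positive feedback through Cf is not modeled):

```python
# Toy integrate-and-fire pixel: integrate I*dt/Cm on the cap, spike and
# reset when the buffer threshold is crossed. All units are normalized
# and the parameter values are illustrative assumptions.
def integrate_and_fire(photo_current, cm=1.0, v_threshold=100.0, dt=1.0):
    """Return the time steps at which the pixel spikes."""
    v, spikes = 0.0, []
    for step, i_in in enumerate(photo_current):
        v += i_in * dt / cm        # integration on Cm
        if v >= v_threshold:
            spikes.append(step)
            v = 0.0                # reset transistor discharges Cm
    return spikes

# A constant photocurrent yields a regular spike train whose rate
# tracks the light intensity.
spikes = integrate_and_fire([1.0] * 1000)
print(len(spikes))  # 10 spikes, one every 100 steps
```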

OK, so what do we do with all these spiking pixels? The second part of the puzzle lies in a tracking system that senses and decides which pixels generate spikes, and which did so first. There seem to be a number of different architectures, but all of them are based on winner-take-all (WTA) circuits as well as Address Event Representation (AER) protocols.
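A toy illustration of the idea (a purely hypothetical event format, not a real AER protocol implementation): each spike travels on a shared bus as a (timestamp, address) pair, and a winner-take-all stage reduces to picking the most active address:

```python
# Hypothetical AER-style event stream: (timestamp, pixel_address) pairs
# on a shared bus. A winner-take-all stage picks the busiest pixel.
from collections import Counter

bus = [(0.001, 5), (0.002, 5), (0.003, 2), (0.004, 5), (0.005, 7)]

counts = Counter(addr for _, addr in bus)
winner, n_events = counts.most_common(1)[0]
print(winner, n_events)  # pixel 5 wins with 3 events
```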

Having determined the spatial location of the events, the newly generated scene information can be further supplied to a machine vision processing system. A comprehensive paper by Giacomo Indiveri et al. is Neuromorphic silicon neuron circuits.

Here are some pictures of my visit to Lukasz's lab and his neuromorphic vision sensor.

The camera setup.
Front-side lens.
Overview of the testbench. The scope was hooked-up to a testbus measuring a test-pixel's output.
Coherent red LEDs used as a light source.
A live view, the output from the imager is noisy due to wrong pixel bias voltages.
The testbench from a slightly different angle.

A brief history of neuromorphic computing.

Possibly one of the first elaborate publications on bio-inspired computing traces back to 1958 and John von Neumann's last book, The Computer and the Brain, where he draws distinctions and analogies between existing computing machines and the living human brain. A general conclusion is that the brain functions partly in the analog and partly in the digital domain.

Later, Carver Mead published the first ever book on neuromorphic computing, Analog VLSI and Neural Systems. It gives a good overview of AI principles applied in analog VLSI systems.

Speaking of bio-inspired vision sensors, the first publication and bio-inspired vision sensor were reported by Misha Mahowald, a student of Mead; her 1988 publication A silicon model of early visual processing describes an analog model of the first stages of retinal processing.

Well, with this my inspiring Saturday afternoon finishes. Hmmm... will all major electronic systems be bio-inspired one day?

Date:Tue May 03 17:05:00 CEST 2014


The Cryotron.

The cryotron tube utilizes a brilliant concept which was somehow forgotten over the past thirty years, maybe for understandable reasons. I have lately been reading about RSFQ circuits, and by accident I ran across a 1956 paper by D. A. Buck entitled "The Cryotron - A Superconductive Computer Component". What an esoteric idea, one would say; however, I find it fascinating and decided to share this rare paper.

This appears to be one of the first publications elaborating on the practical use of cryotron tubes. The fundamental concept behind the operation of the cryotron tube is explained by the Meissner effect: in a nutshell, in both type-I and type-II superconductors, an externally applied magnetic field changes the superconductor's critical temperature.

This is the effect utilized in cryotron tubes: by applying an external magnetic field, by means of pushing current through a simple coil wrapped around the superconductor, we can change its resistive state, either superconductive or not. The diagram shown in Buck's paper summarizes the basic operating regions of this element. What is unique here is the achievable switching speed. According to Buck's paper and various online sources, switching speeds of the order of pico- and femtoseconds can easily be achieved.
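The switching condition can be caricatured with the standard parabolic approximation for the critical field, $H_c(T) = H_c(0)\left(1 - (T/T_c)^2\right)$; the tantalum-like numbers below are illustrative assumptions, not values from Buck's paper:

```python
# Caricature of cryotron switching using the textbook parabolic
# approximation Hc(T) = Hc0 * (1 - (T/Tc)^2). Tc and Hc0 below are
# illustrative, tantalum-like assumptions (kelvin, oersted).
def is_superconducting(temp_k, applied_field, tc=4.48, hc0=830.0):
    """True if the gate wire stays superconducting at this T and field."""
    if temp_k >= tc:
        return False
    hc = hc0 * (1.0 - (temp_k / tc) ** 2)
    return applied_field < hc

# At an operating point just below Tc, a modest control-coil field
# is enough to quench the gate into its resistive state:
print(is_superconducting(4.2, 0.0))    # True  (no control current)
print(is_superconducting(4.2, 200.0))  # False (coil field exceeds Hc)
```

This is exactly why the cryotron works as a switch: biased close to Tc, the remaining critical field is small, so a small control current flips the gate between its superconductive and resistive states.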

Buck's paper focuses on digital circuit design with cryotrons. In a similar fashion to the flip-flop shown in his paper (above), he managed to build a full arithmetic unit based on cryotron logic circuits. Further reading in the full paper.

Date:Tue Apr 22 23:31:00 CEST 2014
