
Computing and the human brain / Neuromorphic Image Sensors.

Lately a colleague of mine has been working on his master's thesis, which involves the design of a vision sensor inspired by principles found in the human brain. The idea behind his sensor, and behind neuromorphic vision sensors and neuromorphic computing in general, is brilliant, and at the same time extremely challenging to fully understand and reproduce with existing silicon VLSI technologies.

Vision sensors emulating the human retina.

Inspired by that principle, I'll try to cover the basic idea behind retina-emulating sensors in a nutshell. I will also give a brief history of computing inspired by the human brain.

OK, so what vision sensors do we use now, in 2014, as a tool for transforming light into digital images? According to various data sources, the dominant technology nowadays is the CMOS imager.

Source: iSupply

Slipping off-track from the chart above, we can note that probably 99% of all mass-produced machine vision sensors are based on raw image data extraction and processing. While there has been very significant progress in image recognition, and a number of very successful machine vision algorithms have been invented (3D object, scene and texture recognition, etc.), there is still a huge gap between the performance of these algorithms and that of even the most primitive biological species, such as insects. One of the key differences is the way the data processing and analysis is done.

An ordinary machine vision system today captures and processes the full number of pixels and frames which a raw-data vision sensor provides. While capturing full-frame data relaxes the complexity of the vision analysis algorithms, it imposes a tremendous computational and data-transfer bottleneck on the analytical device, in other words the computer. The key "feature" which makes the biological retina stand out from "ordinary" raw-data vision sensors hides in how information is captured and transferred between the imager and the processing device: a biological retina triggers, sends and processes only newly arriving light information. This avoids the computational and data-transfer bottleneck. In a nutshell, a bio-inspired vision sensor sends information only for newly triggered pixel events.
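To make the frame-based versus event-driven difference concrete, here is a toy sketch of my own (not the sensor's actual protocol, and the pixel values are made up): a frame-based readout would transmit all four pixels of every frame, while an event-driven readout transmits only the pixels whose value changed.

```shell
#!/bin/bash
# Two consecutive "frames" of a 4-pixel imager (arbitrary intensity units).
prev=(10 10 10 10)   # previous frame
curr=(10 14 10 9)    # current frame
events=""
for i in "${!curr[@]}"; do
  # Only a change in intensity produces an event; static pixels stay silent.
  if [ "${curr[$i]}" -ne "${prev[$i]}" ]; then
    events="${events}${i}:${curr[$i]} "           # record address:value
    echo "event: pixel $i changed to ${curr[$i]}"
  fi
done
```

Here only two events leave the "sensor" instead of four full pixel values; in a real scene, where most pixels do not change between frames, the savings are far larger.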

Going back to my colleague's vision sensor and the very basic principle of operation of integrate-and-fire neuron pixels: instead of reading out an absolute voltage level, a bio-inspired integrate-and-fire neuron pixel generates a trigger event upon a change in light intensity. Here is a very primitive example of such an event-generating (spiking) circuit.

Axon-hillock circuit as described by C.A. Mead in Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley, 1989.

The input current coming from the photodiode pixel is integrated on a capacitor Cm; the voltage on the top plate of Cm increases until it reaches the threshold level of the CMOS inverter-based buffer. When the buffer switches, the output voltage changes to Vdd, switching on the reset transistor. The positive-feedback capacitor Cf forms a capacitive divider with Cm and acts as a pulse sharpener. By controlling the bias voltage on the reset transistor branch, one can control the sensitivity of the integrator and the duration and number of the spikes. Here is a link to Mead's original publication.
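The integrate-and-fire behaviour can be sketched numerically. This is my own toy model in arbitrary integer units, not Mead's circuit equations: a constant "photocurrent" charges Cm each time step; when the integrated value crosses the inverter threshold, the pixel spikes and the reset transistor discharges Cm back to zero.

```shell
#!/bin/bash
vm=0            # voltage integrated on Cm (arbitrary units)
threshold=10    # inverter switching threshold
current=3       # photodiode current per time step (brighter = larger)
spikes=0
for t in $(seq 1 20); do
  vm=$((vm + current))              # photocurrent integrates onto Cm
  if [ "$vm" -ge "$threshold" ]; then
    echo "spike at t=$t"            # buffer flips, output goes to Vdd
    spikes=$((spikes + 1))
    vm=0                            # reset transistor discharges Cm
  fi
done
echo "total spikes: $spikes"
```

Note that a larger photocurrent makes Cm reach the threshold sooner, so light intensity is encoded in the spike frequency rather than in an absolute voltage level.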

OK, so what do we do with all these spiking pixels? The second part of the puzzle hides in a tracking system that senses and decides which pixels generate spikes, and which did so first. There seem to be a number of different architectures, but all of them are based on winner-take-all (WTA) circuits together with Address Event Representation (AER) protocols.
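The core idea of AER is that spikes are asynchronous, so instead of scanning the array, each spike is broadcast on a shared bus as the address of the pixel that fired. A hypothetical sketch of such an address encoding (the row-major packing and the 8-pixel row width are my own assumptions, not a specific chip's format):

```shell
#!/bin/bash
WIDTH=8   # assumed row width of the pixel array

# Pack pixel coordinates (x, y) into a single bus address, and unpack again.
encode()   { echo $(( $2 * WIDTH + $1 )); }   # usage: encode x y
decode_x() { echo $(( $1 % WIDTH )); }
decode_y() { echo $(( $1 / WIDTH )); }

addr=$(encode 3 2)
echo "pixel (3,2) fires -> address word $addr -> decoded back to ($(decode_x $addr),$(decode_y $addr))"
```

The receiver time-stamps each address as it arrives, so the event stream carries both where and when a spike occurred.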

Having determined the spatial location of the events, the newly generated scene information can be further supplied to a machine vision processing system. A comprehensive paper on the topic is Neuromorphic silicon neuron circuits by Giacomo Indiveri et al.

Here are some pictures of my visit to Lukasz's lab and his neuromorphic vision sensor.

The camera setup.
Front-side lens.
Overview of the testbench. The scope was hooked-up to a testbus measuring a test-pixel's output.
Coherent red LEDs used as a light source.
A live view, the output from the imager is noisy due to wrong pixel bias voltages.
The testbench from a slightly different angle.

A brief history of neuromorphic computing.

Possibly one of the first elaborate publications on bio-inspired computing traces back to 1958 and John von Neumann's last book, entitled The Computer and the Brain, in which he draws analogies and distinctions between existing computing machines and the living human brain. A general conclusion is that the brain functions partly in the analog and partly in the digital domain.

Later, Carver Mead published the first ever book on neuromorphic computing, Analog VLSI and Neural Systems. It gives a good account of AI principles applied in analog VLSI systems.

Speaking of bio-inspired vision sensors, the first such sensor was reported by Misha Mahowald, a student of Mead; her 1988 publication A silicon model of early visual processing describes an analog model of the first stages of retinal processing.

Well, with this my inspiring Saturday afternoon finishes. Hmmm... will all major electronic systems be bio-inspired one day?

Date:Tue May 03 17:05:00 CEST 2014

The Cryotron.

The cryotron tube utilizes a brilliant concept which was somehow forgotten over the past thirty years, maybe for understandable reasons. I have lately been reading about RSFQ circuits, and by accident I ran across a 1956 paper by D. A. Buck entitled "The Cryotron - A Superconductive Computer Component". What an esoteric idea, one would say; however, I find it fascinating and decided to share this rare paper.

This appears to be one of the first publications elaborating on the practical use of cryotron tubes. The fundamental concept behind the operation of the cryotron is explained by the Meissner effect: in a nutshell, in both type I and type II superconductors, the strength of an externally applied magnetic field changes the superconductor's critical temperature.

This is the effect utilized in cryotron tubes: by applying an external magnetic field, by means of pushing current through a simple coil wrapped around the superconductor, we can change its state between superconducting and resistive. The diagram shown in Buck's paper summarizes the basic operating regions of this element. What is unique here is the achievable switching speed: according to Buck's paper and various online sources, switching speeds on the order of pico- and femtoseconds can be achieved.
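The operating-region diagram essentially plots the critical-field curve of the superconducting gate wire. For a type I superconductor, a standard textbook approximation (a general empirical relation, not taken from Buck's paper) is:

```latex
H_c(T) \approx H_c(0)\left[1 - \left(\frac{T}{T_c}\right)^{2}\right]
```

When the field produced by the control coil exceeds H_c(T) at the operating temperature T, the gate wire is driven normal (resistive); below that field it remains superconducting, which is exactly the two-state behaviour the cryotron exploits.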

Buck's paper focuses on digital circuit design with cryotrons. In a similar fashion to the flip-flop shown in his paper (above), he managed to build a full arithmetic unit based on cryotron logic circuits. Further reading in the full paper.

Date:Tue Apr 22 23:31:00 CEST 2014

Proper? way of drawing circuit diagrams

This topic has probably been hot for as long as conscious circuit design has existed. I personally have had various discussions with colleagues and friends, and I have changed my opinion about circuit drawings a number of times throughout the past ten years. Randomly browsing online, I stumbled upon this:

IEC-61082-1 INTERNATIONAL STANDARD - Preparation of documents used in electrotechnology

Such a wonderfully useless document, which also costs 310 CHF, today equivalent to ~254 EUR. Whaaaat? I admit, the 310 CHF is probably the fact that makes it totally useless. This is a document which is supposed to be a primer, an example of how one should document circuit designs so that they are readable by others. I liken this document to, say, an English-English dictionary: how can you learn and follow a language when its resources are closed?

If we disregard my anger that this rag costs a fortune, let's have a brief look at it, or at least focus on how, in accordance with IEC-61082-1, one should draw nets and junctions. Luckily, someone has uploaded a 2002 draft version of this document to the semi-disgusting pirate website Sribrbrbrrdddd.

Let's zoom into the junction and wire-crossing problem, which probably determines the "prettiness" of a circuit. Or at least, if one follows a single junction and wire-crossing rule in one's schematics, "everything" tends to go well.

Excerpt from IEC-61082-1 (draft version 2002), source: Scribd

Reading further in the document, we see that wire crossings should be done at a 90-degree angle, and that the hopping-over style is considered wrong according to this standard. In many aspects I can see why it is considered bad practice: if one has many wire crossings, the hopping-over style tends to mess up the diagram. My guess about where the hopping-over style comes from is that (maybe) back in the old days, when one had to use pens (possibly fountain pens too) and ink, it was quite easy to draw a joint unintentionally just by slowing down while drawing a line, or by shaking your hand in general; you get what I mean. So, based on the IEC-61082-1 draft, here's a summary of the junction and wire crossings:

A simplified summary of the IEC-61082-1 (2002 draft) rules for junctions and wire crossings

In practice, so far I have seen an infinite number of tastes when it comes to schematic diagrams. Probably the most important rule to keep in mind is not really to follow the standards, but to be consistent in how you draw. If we set aside junctions and wires, there come the symbols, and these may vary a lot between EDA tools and various foundry PDKs. E.g., why should the bulk of a MOSFET go out in the middle of the "transistor channel", with no drain/source markings whatsoever? There IS a difference between drains and sources; that's why people have given them different names!

And at last, the famous "in accordance with standard" circuit diagram from xkcd.

Electric eels save the day, source: xkcd

I wish you a happy Saturday of circuit designing!

Date:Sun May 31 10:40:00 CEST 2014

Ironies of automation.

I have lately been spending quite some time trying to improve my page-generation script, as well as fiddling around with some helper scripts at work that are supposed to make our life easier. By coincidence, last week I was visiting some friends in Sweden, and from topic to topic, quite late in the evening, I was recommended a paper which turned out to be extremely entertaining to read.

It focuses on the ways in which automation tries to cure problems with human machine operators, and on the total man-hour labour efficiency once you draw the line and account for the time and complexity required to create those automation systems in the first place.

Date:Tue Apr 01 20:42:00 CEST 2014

A continuation of the content management script.

I have been fiddling a bit with my page content management script. Now, besides the pages, I am also auto-generating the page bars. As another field test, I will share the skeleton of my generator as well.

We first have the basic stuff:

 ############## EDIT THIS ###########################
OUTPUT="/media/05022eeb-bcac-4484-8eb0-1b41d4eae750/site-tex-res/site-sync/dilemaltd.com/public_html/deyan-levski/test/"
FILE="index.htm"

DIRECTORIES="\
/media/05022eeb-bcac-4484-8eb0-1b41d4eae750/site-tex-res/site-sync/dilemaltd.com/public_html/deyan-levski/test/post/ \
"
FILENAMES='post*.htm'

#################################################### 
I.e., I look through the "post" folder and the HTML files in it, and then build up the pages by concatenating a base skeleton and the posts, e.g.:

 DATE=`date`
 echo $BASE >> $OUTPUT$FILE
 #declare -i cnt
 #declare -i p
 cnt=0
 dobase=0
 p=0
 for f in $FILENAMES; do
 for i in `find $DIRECTORIES -type f \( -iname "*$f*" ! -iname "*~" ! -iname ".*" \) | sort -V -r`; do
     #flinedate=$(head -n 1 $i) # Auto file date fetch stuff, that is currently not implemented.
     #echo $date

     cnt=`expr $cnt + 1`
     if [ "$cnt" -lt 5 ] && [ "$p" -eq 0 ]
     then
         CATSTR=$OUTPUT$FILE
         cat $i >> $CATSTR
     elif [ "$cnt" -lt 5 ] && [ "$p" -ne 0 ]
     then
         CATSTR=$OUTPUT$p$FILE
         cat $i >> $CATSTR
     else
         cnt=0
         p=`expr $p + 1`
         CATSTR=$OUTPUT$p$FILE
         dobase=0
         if [ "$dobase" -eq 0 ]
         then
             echo $BASE >> $CATSTR
             dobase=1
         fi
         cat $i >> $CATSTR
     fi
 done
 done

Then generate the pagebar:

 z=0
 for k in `seq 0 $p`; do # Generate bottom page bar
     CATSTR=$OUTPUT$z$FILE
     MAININDEX=$OUTPUT$FILE
     if [ $z -ne 0 ]
     then
         olz=0
         echo "Page: " >> $CATSTR
         for r in `seq 0 $p`; do
             olz=`expr $olz + 1`
             if [ "$olz" -eq 1 ]
             then
                 echo "0 " >> $CATSTR
             else
                 olzt=`expr $olz - 1`
                 echo "$olzt " >> $CATSTR
             fi
         done
         echo "" >> $CATSTR
         echo "Last edited: $DATE by $USER" >> $CATSTR
         echo "\n" >> $CATSTR
     else
         olz=0
         echo "Page: " >> $MAININDEX
         for r in `seq 0 $p`; do
             olz=`expr $olz + 1`
             if [ "$olz" -eq 1 ]
             then
                 echo "0 " >> $MAININDEX
             else
                 olzt=`expr $olz - 1`
                 echo "$olzt " >> $MAININDEX
             fi
         done
         echo "" >> $MAININDEX
         echo "Last edited: $DATE by $USER" >> $MAININDEX
         echo "\n" >> $MAININDEX
     fi
     z=`expr $z + 1`
 done

The latter also completes the end of the HTML and adds a Last-edited tag. We can then take all our files and upload them to the server.

 # Connect to FTP and upload htmls.
ftp -n -v www.dilemaltd.com << EOT
ascii
user xxxx
prompt off
cd public_html/deyan-levski/test/
mput *.htm
cd post
mput post/*.htm
quit
EOT


In an ideal world I could extend this in the same fashion and turn it into a generic piece of code. If I do so, I will for sure write a proper post about it.

Date:Sun Mar 30 15:47:00 CEST 2014