

Computing and the human brain / Neuromorphic Image Sensors.

Lately a colleague of mine has been working on his master's thesis, which involves the design of a vision sensor inspired by principles found in the human brain. The idea behind his sensor, and behind neuromorphic vision sensors and neuromorphic computing in general, is brilliant, and at the same time extremely challenging to fully understand and reproduce with existing silicon VLSI technologies.

Vision sensors emulating the human retina.

Inspired by the principle, I'll try to cover the basic idea behind human retina emulation sensors in a nutshell. I will also try to give a brief history of computing inspired by the human brain.

OK, so what vision sensors do we use now, in 2014, as a tool for transforming light into digital images? According to various data sources, CMOS imagers are the dominant technology nowadays.

Chart: image sensor market data (source: iSuppli).

Slipping off-track from the chart above, we can note that probably 99% of all mass-produced machine vision sensors are based on raw image data extraction and processing. While there has been very significant progress in image recognition, and a number of very successful machine vision algorithms have been invented (3D object, scene, and texture recognition, etc.), there is still a huge gap between the performance of these algorithms and that of even the most primitive biological species, such as insects. One of the key differences is the way the data processing and analysis is done.

An ordinary machine vision system today captures and processes the full set of pixels and frames that a raw-data vision sensor provides. While capturing full-frame data relaxes the complexity of the vision analysis algorithms, it imposes a tremendous computational and data-transfer bottleneck on the analytical device, in other words the computer. The key feature by which a biological retina stands out from "ordinary" raw-data vision sensors lies in how information is captured and transferred between the imager and the processing device: a biological retina triggers, sends, and processes only newly arriving light information. This avoids the computational and data-transfer bottleneck. In a nutshell, a bio-inspired vision sensor sends information only for newly triggered pixel events.
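To make the contrast concrete, here is a minimal Python sketch comparing the data volume of a frame-based readout with an event-based one for a mostly static scene. The function names, the threshold, and the test scene are my own illustrative choices, not taken from any real sensor:

```python
import numpy as np

def frame_based_readout(frames):
    """Frame-based: ship every pixel of every frame, changed or not."""
    return [f.copy() for f in frames]

def event_based_readout(frames, threshold=10):
    """Event-based: ship only (t, x, y, polarity) tuples for pixels whose
    intensity changed by more than `threshold` since the last frame."""
    events = []
    prev = frames[0].astype(np.int32)
    for t, frame in enumerate(frames[1:], start=1):
        diff = frame.astype(np.int32) - prev
        ys, xs = np.nonzero(np.abs(diff) > threshold)
        events += [(t, x, y, 1 if diff[y, x] > 0 else -1)
                   for y, x in zip(ys, xs)]
        prev = frame.astype(np.int32)
    return events

# A mostly static 320x240 scene where only a small patch brightens.
frames = [np.full((240, 320), 128, dtype=np.uint8) for _ in range(10)]
for t, f in enumerate(frames):
    f[100:110, 150:160] = 128 + 12 * t  # patch brightens by 12 per frame

pixels = sum(f.size for f in frame_based_readout(frames))
events = event_based_readout(frames)
print(f"frame-based: {pixels} pixel values; event-based: {len(events)} events")
```

For this toy scene the event stream is nearly three orders of magnitude smaller than the raw frames, which is exactly the bottleneck relief described above.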

Going back to my colleague's vision sensor, let's look at the very basic principle of operation of integrate-and-fire neuron pixels. Instead of reading out an absolute voltage level, bio-inspired integrate-and-fire neuron pixels generate a trigger event on a change of light intensity. Here is a very primitive example of such an event-generating (spiking) circuit.

Axon-hillock circuit as described by C.A. Mead in Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley, 1989.

The input current coming from the photodiode is integrated on a capacitor Cm; the voltage on the top plate of Cm increases until it reaches the threshold level of the CMOS inverter-based buffer. When the buffer switches, the output voltage changes to Vdd, switching on the reset transistor. The positive-feedback capacitor Cf forms a capacitive divider with Cm and acts as a pulse sharpener. By controlling the bias voltage on the reset transistor branch, one can control the sensitivity of the integrator as well as the duration and number of the spikes. Here is a link to Mead's original publication.
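As a purely behavioral sketch of this cycle, the integrate-and-fire behavior can be modeled in a few lines of Python. The capacitance, threshold, and photocurrents below are invented round numbers, not values from Mead's circuit, and the positive feedback through Cf is abstracted into an instantaneous reset:

```python
def integrate_and_fire(photocurrent_a, c_m=10e-15, v_thresh=0.7,
                       dt=1e-6, t_end=10e-3):
    """Behavioral model of an integrate-and-fire pixel.
    Integrates a constant photocurrent (amps) on the membrane
    capacitor Cm; when the voltage crosses the inverter threshold
    the pixel 'spikes' and the reset transistor discharges Cm.
    All component values are illustrative."""
    v_m, t, spike_times = 0.0, 0.0, []
    while t < t_end:
        v_m += photocurrent_a * dt / c_m   # dV = I * dt / C
        if v_m >= v_thresh:                # inverter buffer trips
            spike_times.append(t)
            v_m = 0.0                      # reset: Cm discharged
        t += dt
    return spike_times

# Brighter light -> larger photocurrent -> higher spike rate.
print(len(integrate_and_fire(10e-12)), "spikes at 10 pA")
print(len(integrate_and_fire(50e-12)), "spikes at 50 pA")
```

Note how the light intensity is encoded in the spike rate rather than in an absolute voltage, which is the essence of the readout scheme.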

OK, so what do we do with all these spiking pixels? The second part of the puzzle lies in a tracking system that senses which pixels generate spikes, and which fired first, and takes decisions accordingly. There seem to be a number of different architectures, but all of them are based on winner-take-all (WTA) circuits as well as Address Event Representation (AER) protocols.
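To illustrate the AER side, here is a toy Python sketch of an address-event word: each spike is transmitted as the address of the pixel that fired, plus a timestamp and polarity, so the bus carries a time-ordered event stream rather than frames. The bit-field layout below is entirely made up for illustration; real AER buses differ:

```python
from dataclasses import dataclass

@dataclass
class AddressEvent:
    """A toy address-event: which pixel fired, when, and the sign of
    the intensity change. Field widths are illustrative."""
    timestamp_us: int  # 16 bits
    x: int             # 7-bit column address
    y: int             # 7-bit row address
    polarity: int      # +1 brighter, -1 darker

    def encode(self) -> int:
        """Pack into one bus word: [timestamp | x | y | polarity]."""
        pol = 1 if self.polarity > 0 else 0
        return ((self.timestamp_us & 0xFFFF) << 15
                | (self.x & 0x7F) << 8
                | (self.y & 0x7F) << 1
                | pol)

def decode(word: int) -> AddressEvent:
    """Unpack a bus word back into an address-event."""
    return AddressEvent(timestamp_us=(word >> 15) & 0xFFFF,
                        x=(word >> 8) & 0x7F,
                        y=(word >> 1) & 0x7F,
                        polarity=1 if word & 1 else -1)

evt = AddressEvent(timestamp_us=1234, x=42, y=17, polarity=-1)
assert decode(evt.encode()) == evt  # round-trip sanity check
```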

Having determined the spatial location of the events, the newly generated scene information can be further supplied to a machine vision processing system. A comprehensive paper by Giacomo Indiveri et al. is Neuromorphic silicon neuron circuits.

Here are some pictures from my visit to Lukasz's lab and of his neuromorphic vision sensor.

The camera setup.
Front-side lens.
Overview of the testbench. The scope was hooked up to a test bus measuring a test pixel's output.
Coherent red LEDs used as a light source.
A live view; the output from the imager is noisy due to incorrect pixel bias voltages.
The testbench from a slightly different angle.

A brief history of neuromorphic computing.

Possibly one of the first elaborate publications on bio-inspired computing traces back to 1958 and John von Neumann's last book, The Computer and the Brain, in which he draws analogies between the computing machines of the day and the living human brain. A general conclusion is that the brain functions partly in the analog and partly in the digital domain.

Later, Carver Mead published the first ever book on neuromorphic computing, Analog VLSI and Neural Systems. It gives a good account of neural principles applied in analog VLSI systems.

Speaking of bio-inspired vision sensors, the first such sensor was reported by Misha Mahowald, a student of Mead; her 1988 publication A silicon model of early visual processing describes an analog model of the first stages of retinal processing.

Well, with this my inspiring Saturday afternoon finishes. Hmmm... will all major electronic systems be bio-inspired one day?

Date: Tue May 03 17:05:00 CEST 2014
