Now that it's spring here in Norway, the nights are getting shorter and shorter, which on the flip side means that one wakes up from time to time at 3-4 a.m. due to the rising sun and, in particular, the birds singing around the forest.
One morning last week my sleep was disturbed by a few ravens and crows measuring their strength in the very prestigious annual contest "Ugly craw", organized by the local animal union committee and headed by the chief moose, Mr. Elg. I noticed that while the ravens were fighting, all the other (smaller) birds kept "applauding" the whole time. Half-asleep, the term 1/f noise struck my mind.
Of course it is widely known that many natural phenomena follow a 1/f distribution, or in simple words: the higher the power of an event, the less frequently it occurs, and vice versa. Half-asleep, I tried to correlate this with the birds' songs, or with the species themselves. A few days later I decided to record some of the bird songs, perform a DFT on various recorded samples, and then average the frequency plots to see whether any such 1/f dependency shows up. The results below might not be a full success due to various non-idealities and the limited sample set, but they at least hint at an already known fact about 1/f, and they show that a number of things can go wrong even with a transformation as simple as the DFT I used.
There are a number of papers about 1/f in human cognition; if interested, I suggest looking at:
"1/f noise in music and speech", Richard Voss, John Clarke, Nature vol. 258, November 27, 1975
"1/f noise in human cognition", D. Gilden, T. Thornton, Science vol. 267, March 24, 1995
"1/f noise: a pedagogical review", Eduardo Milotti
To give a picture of the occurrence of 1/f in many "systems", I dare to reproduce a figure from "1/f noise", Lawrence M. Ward and Priscilla E. Greenwood (2007), Scholarpedia.
This figure is staggering!
Let's start by hearing the bird sample I used.
You might notice that the electronic hum from my computer's microphone preamp is of the same order of magnitude as the birds' songs. This makes the samples very difficult to analyse, as we are interested only in the bird content; any other noise (electronic, numerical, etc.) should be low. Samples with a somewhat higher dynamic range would have given a better start.
There are a number of ways to perform this measurement, and one might argue about what should count for the 1/f measurement: the various bird species' songs combined, from a listener's point of hearing, or a single bird isolated for the measurement? In the current case I effectively isolate a set of (bigger) bird species, as I filter out the high frequencies and focus my DFT only on the range up to 60 Hz. Here is a block diagram of the signal chain for my analysis:
After reading the input sample in WAV format, a 10th-order low-pass Butterworth filter with a cutoff frequency of 10 kHz is applied. After squaring the samples, the signal is fed through a second filter of the same type with a cutoff of 60 Hz. After this envelope extraction (with some non-idealities from the filters), the sample information up to 60 Hz is represented in the frequency domain by a DFT.
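For completeness, here is how that chain might look in code. I did the original analysis in Octave; the sketch below is an equivalent in Python/SciPy, with the filter orders and cutoffs taken from the text (the function name and structure are my own):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def envelope_spectrum(x, fs):
    """Signal chain from the text: 10 kHz LPF -> squaring -> 60 Hz LPF -> DFT."""
    # second-order sections avoid the numerical trouble a 10th-order
    # transfer function runs into at such a low normalised cutoff
    sos1 = butter(10, 10e3, btype="low", fs=fs, output="sos")
    sos2 = butter(10, 60, btype="low", fs=fs, output="sos")
    y = sosfilt(sos1, np.asarray(x, dtype=float))
    y = y ** 2                      # squaring demodulates the envelope
    env = sosfilt(sos2, y)
    spec = np.abs(np.fft.rfft(env))             # magnitude spectrum of envelope
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    return freqs, spec
```

Feeding it one of the 44.1 kHz WAV samples (after folding stereo to mono) gives the frequency/magnitude pairs used for the plots below.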
Now the question arises: how large a window does one need to get accurate enough DFT information for very low frequencies, 0.1 Hz - 10 Hz? And another question: how long must the sample be to obtain good (sufficiently oversampled) 1/f plots? We know that the frequency resolution depends on the ratio of the input sampling rate to the DFT window length. In the current case we have a sampling frequency of 44.1 kHz, so if we collect 1024 samples (a pretty standard number) for the DFT, we get a frequency bin resolution of Δf = fs/N = 44100/1024 ≈ 43 Hz.
This is clearly not enough, so to get a 0.1 Hz resolution we need about N = fs/Δf = 44100/0.1 = 441,000 samples.
We can see that for such low frequencies we need a significant DFT window size. This in turn means that our "bird song" sample must also be quite long to get meaningful averaged 1/f plots. For instance, the 441 ksamples correspond to about 10 seconds of sample time.
Now, these birds have a tendency to make quite significant pauses between their squawks, sometimes even over a few minutes. If we also follow the basic engineering rule of thumb that, to get reasonable data, the window should be about 5 times the minimum size, we arrive at 2.205 Msamples. Such a huge sampling window would greatly degrade the temporal resolution of the analysis. We still don't care about frequencies beyond 40-60 Hz, but still: if a crow squawks 15 times during the first 25 seconds of our window and 20 times during the next 25 seconds, we would see peaks for both rates. The temporal resolution becomes even worse for smaller birds, which squawk even more frequently.
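The window bookkeeping above is just a couple of divisions; a quick sanity check (Python here, though the original scripts were in Octave):

```python
fs = 44100            # sampling rate of the recordings, Hz
n = 1024
delta_f = fs / n      # bin resolution of a 1024-point DFT
print(delta_f)        # ~43.07 Hz: far too coarse for the 0.1-10 Hz range

target = 0.1                 # wanted resolution, Hz
n_needed = fs / target       # ~441,000 samples, about 10 s of audio
print(5 * n_needed)          # rule-of-thumb window: ~2.205 Msamples, ~50 s
```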
All these simple facts make this 1/f analysis quite subjective with the methods used: strictly speaking, only a few minutes of samples with low dynamic range, and a trade-off between temporal and frequency resolution. Nevertheless, here are some plots of a few samples, not only "bird" content.
Some bird samples.
Pink Floyd's Comfortably Numb guitar solo from the 1994 P.U.L.S.E. (The Division Bell tour) concert in Earls Court, London.
Call it spectral leakage, a poor temporal/frequency resolution trade-off, non-integer sampling, the poor SNR of the audio samples, or filter distortion due to passband ripple: the plots somehow do not look very clean. One thing is certain, the dependency is 1/f^alpha, with alpha measuring roughly ~1.3 for the birds' songs, ~1.9 for the news broadcast and ~1.5 for the guitar solo.
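For reference, the exponent alpha can be estimated with an ordinary least-squares line fit on the log-log spectrum. A minimal sketch, where the fitting band and the synthetic test spectrum are my own choices, not the measured data:

```python
import numpy as np

def fit_alpha(freqs, power, fmin=0.1, fmax=40.0):
    """Fit power ~ 1/f^alpha by linear regression in log-log coordinates."""
    mask = (freqs >= fmin) & (freqs <= fmax)
    slope, _ = np.polyfit(np.log10(freqs[mask]), np.log10(power[mask]), 1)
    return -slope          # power is proportional to f^(-alpha), so alpha = -slope

# synthetic check: an exact 1/f^1.5 spectrum should recover alpha = 1.5
f = np.linspace(0.1, 40, 1000)
print(fit_alpha(f, f ** -1.5))   # ~1.5
```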
Even injecting a 50 Hz signal into the samples shows signs of distortion from the ripple in the Butterworth filters, plus the non-integer sampling.
You can find the Octave scripts here.
All of the above was a nice exercise showing that commonly applied analyses and measurements require compromises which are ultimately up to the engineer's judgment. There is no perfectly right or sharply wrong approach in this analog world. As for 1/f, a question arises: does this "law" also apply to human stereotypes?
Occasionally I browse through the pages of some of the Bulgarian academic and research centres focusing on the field of IC design. This week I have been enjoying the pages of Cyril Mechkov, a teacher of analog circuit theory at the Technical University of Sofia. I am more than impressed by his methods of explaining circuits: avoiding the derivation of complex transfer functions and formulas and instead guiding the students with intuitive explanations and examples related to everyday life.
Apart from Mechkov's circuit-fantasia website, he has also uploaded all his work to Wikibooks as a book called Circuit Idea. A great idea has struck him: involving the students taking his courses in the book's development as a micro assignment. I feel this great work deserves more attention, which is partly why I am writing about him. Here is a simple illustrative example of his way of explaining things:
A phenomenon as simple as the voltage drop across a resistor is explained by Mechkov in the following way. Imagine a large water tank connected to smaller vessels of the same height. The water tank is full and the tap at the end is closed. Mechkov's hand-drawn diagram:
Now, if one opens the far end of the pipe, water starts flowing, and the pressures decrease gradually along the pipe according to basic hydraulic principles.
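The electrical counterpart of the falling water levels is easy to put in numbers: a source driving equal series resistors drops an equal I·R across each one, so the node potentials fall linearly along the chain, just like the water levels along Mechkov's pipe. A tiny sketch with made-up values:

```python
# a 10 V source across four equal series resistors: the node potentials
# fall in equal steps, like the water levels along the pipe
v_supply = 10.0
resistors = [1e3, 1e3, 1e3, 1e3]        # ohms, illustrative values
current = v_supply / sum(resistors)     # the same current flows everywhere

node_v = [v_supply]
for r in resistors:
    node_v.append(node_v[-1] - current * r)   # I*R drop across each resistor
print(node_v)   # ~[10.0, 7.5, 5.0, 2.5, 0.0]
```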
A very simple analogy can be made with a resistor and the voltage drop across it. Two such analogies follow:
And the other way around:
The wikibook is enriched with figures in the same intuitive fashion, accompanied by solid explanations and finally mathematical formulae (offtopic: oh, this fancy way of writing such a simple word) covering the basic circuits in depth.
Come to think of it, I did not find in his records an explanation of the Miller effect. Well, here is my attempt at drawing an intuitive figure for the Miller effect:
At first sight this looks like a rather funny way of explaining it: our poor single manikin pulling down, whilst a bunch of other strong guys counteract our single boy. So, the higher the gain (A), or the transconductance (gm) in this case, the stronger the guys would be, and thus the larger the Miller capacitance, C_M = (1 + A)·Cgd. Well, one should represent Cgd in another way to get a better picture, but still.
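Putting the tug-of-war in numbers, with purely illustrative component values (none of these come from a real design):

```python
# Miller multiplication: a capacitor Cgd bridging an inverting gain stage
# appears at the input magnified by one plus the gain, C_in = Cgd * (1 + A)
gm = 5e-3        # transconductance, 5 mS (illustrative value)
r_load = 10e3    # load resistance, 10 kOhm (illustrative)
cgd = 2e-15      # gate-drain capacitance, 2 fF (illustrative)

gain = gm * r_load                 # A = gm * R_load = 50
c_miller = cgd * (1 + gain)        # the "many strong guys" pulling back
print(gain, c_miller)              # 50.0, 1.02e-13 F (102 fF)
```

So a 2 fF bridging cap behaves like a 102 fF cap at the input: that is how strong the gang of guys is.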
Ah well, if not for educational purposes, this might make a good nerdy t-shirt:
Happy last six hours of the weekend :)
Lately a colleague of mine has been working on his master's thesis, which involves the design of a vision sensor inspired by principles occurring in the human brain. The idea behind his and, in general, all neuromorphic vision sensors / neuromorphic computing is brilliant, and at the same time extremely challenging to fully understand and reproduce with existing silicon VLSI technologies.
Vision sensors emulating the human retina.
Inspired by the principle, I'll try to cover the basic idea behind human-retina-emulating sensors in a nutshell. I will also try to give a brief history of computing inspired by the human brain.
OK, so what vision sensors do we use now, in 2014, as a tool for transforming light into digital images? According to various data sources, CMOS imagers are the dominant technology nowadays.
Slipping off-track from the chart above, we can note that probably 99% of all mass-produced machine vision sensors are based on raw image data extraction and processing. While there has been very significant progress in image recognition, and a number of very successful machine vision algorithms have been invented (3D object, scene, texture recognition, etc.), there is still a huge gap between the performance of these algorithms and that of even the most primitive biological species, such as insects. One of the key differences is the way the data processing and analysis is done.
An ordinary machine vision system today captures and processes the full number of pixels and frames that a raw-data vision sensor provides. While capturing full-frame data relaxes the complexity of the vision analysis algorithms, it imposes a tremendous computational and data-transfer bottleneck on the analytical device, in other words the computer. The key feature by which the biological retina stands out from "ordinary" raw-data vision sensors lies in the information capture and transfer between the imager and the processing device: a biological retina triggers, sends and processes only newly arrived light information. This avoids the computational and data-transfer bottleneck. In a nutshell, a bio-inspired vision sensor sends information only for newly triggered pixel events.
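The "send only what changed" idea can be sketched in a few lines: compare the new frame against the previous one and emit an event only for pixels whose intensity changed beyond some threshold. The threshold and the frame contents below are my own illustration:

```python
import numpy as np

def events_from_frames(prev, curr, threshold=10):
    """Emit (row, col, polarity) events only where intensity changed."""
    diff = curr.astype(int) - prev.astype(int)
    rows, cols = np.nonzero(np.abs(diff) > threshold)
    return [(int(r), int(c), 1 if diff[r, c] > 0 else -1)
            for r, c in zip(rows, cols)]

prev = np.zeros((4, 4), dtype=np.uint8)   # previous frame, all dark
curr = prev.copy()
curr[1, 2] = 200                          # a single pixel brightens
print(events_from_frames(prev, curr))     # [(1, 2, 1)]: only that pixel reports
```

A static scene produces no events at all, which is exactly where the bandwidth saving over full-frame readout comes from.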
Going back to my colleague's vision sensor and the very basic principle of operation of integrate-and-fire neuron pixels: instead of reading out an absolute voltage level, bio-inspired integrate-and-fire neuron pixels generate a trigger event upon a change in light intensity. Here is a very primitive example of such an event-generating (spiking) circuit.
The input current from the photodiode pixel is integrated on a capacitor Cm; the voltage on the top plate of Cm increases until it reaches the threshold level of the CMOS inverter-based buffer. When the buffer switches, the output voltage changes to Vdd, switching on the reset transistor. The positive-feedback capacitor Cf forms a capacitive divider with Cm and acts as a pulse shaper. By controlling the bias voltage on the reset transistor branch one can control the sensitivity of the integrator and the duration and number of the spikes. Here is a link to Mead's original publication.
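A behavioural sketch of the integrate/threshold/reset dynamics (this is not Mead's circuit itself, just its first-order behaviour; all parameter values here are my own illustration):

```python
def integrate_and_fire(photocurrent, cm=1e-12, v_th=0.7, dt=1e-6):
    """Integrate photocurrent on Cm; emit a spike and reset at the threshold."""
    v, spikes = 0.0, []
    for k, i_ph in enumerate(photocurrent):
        v += i_ph * dt / cm            # dV = I*dt/C on the integration cap
        if v >= v_th:                  # inverter threshold crossed
            spikes.append(k * dt)      # record the spike time
            v = 0.0                    # reset transistor discharges Cm
    return spikes

# a steady 1 nA photocurrent for 3 ms: the spike rate is proportional
# to the light intensity, so a brighter pixel fires more often
spikes = integrate_and_fire([1e-9] * 3000)
print(len(spikes))   # 4 spikes in the 3 ms window
```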
OK, so what do we do with all these spiking pixels? The second part of the puzzle lies in a tracking system that senses which pixels generate spikes, and which fired first, and takes decisions accordingly. There seem to be a number of different architectures, but all of them are based on winner-take-all (WTA) circuits as well as Address Event Representation (AER) protocols.
Having determined the spatial location of the events, the newly generated scene information can be supplied to a machine vision processing system. A comprehensive paper by Giacomo Indiveri et al. is Neuromorphic silicon neuron circuits.
Here are some pictures of my visit to Lukasz's lab and his neuromorphic vision sensor.
A brief history of neuromorphic computing.
Possibly one of the first elaborate publications on bio-inspired computing traces back to 1958 and John von Neumann's last book, The Computer and the Brain, where he draws distinctions and analogies between existing computing machines and the living human brain. A general conclusion is that the brain functions partly in the analog and partly in the digital domain.
Speaking of bio-inspired vision sensors, the first such sensor was reported by Misha Mahowald, a student of Mead; her 1988 publication A silicon model of early visual processing describes an analog model of the first stages of retinal processing.
Well, with this my inspiring Saturday afternoon finishes. Hmmm... will all major electronic systems be bio-inspired one day?
The cryotron tube utilizes a brilliant concept which was somehow forgotten over the past thirty years, maybe for understandable reasons. I have lately been reading about RSFQ circuits, and by accident I ran across a 1956 paper by D. A. Buck entitled "The Cryotron - A Superconductive Computer Component". What an esoteric idea, one might say; however, I find it fascinating and decided to share this rare paper.
This appears to be one of the first publications elaborating on the practical use of cryotron tubes. The fundamental concept behind the operation of the cryotron is explained by the Meissner effect: in a nutshell, in both type I and type II superconductors, the strength of an externally applied magnetic field changes the superconductor's critical temperature.
This is the effect utilized in cryotron tubes: by applying an external magnetic field, by means of pushing current through a simple coil wrapped around the superconductor, we can change its resistive state, either superconductive or not. The diagram in Buck's paper summarizes the basic operating regions of this element. What is unique here is the achievable switching speed: according to Buck's paper and various online sources, switching speeds on the order of pico- and femtoseconds can be achieved.
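The control mechanism can be made quantitative with the standard empirical parabola for the critical field, Bc(T) = Bc(0)·(1 - (T/Tc)^2). The default numbers below are approximate textbook values for tantalum, the gate material in Buck's cryotron, used only to illustrate the operating diagram in his paper:

```python
def critical_field(t, tc=4.47, bc0=0.083):
    """Empirical parabola Bc(T) = Bc(0) * (1 - (T/Tc)^2).

    Defaults are approximate values for a tantalum gate
    (Tc ~ 4.47 K, Bc(0) ~ 0.083 T)."""
    return bc0 * (1 - (t / tc) ** 2) if t < tc else 0.0

# operating at 4.2 K (liquid helium), close to Tc, only a small field is
# needed to drive the tantalum gate out of the superconducting state
print(critical_field(4.2))    # ~0.0097 T: a modest control-coil current suffices
```

Running the gate just below its critical temperature is exactly why a small control current through the coil is enough to switch it.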
Buck's paper focuses on digital circuit design with cryotrons. In a similar fashion to the flip-flop shown in his paper (above), he managed to build a full arithmetic unit based on cryotron logic circuits. Further reading in the full paper.
This topic has probably been hot for as long as conscious circuit design has existed. I personally have had various discussions with colleagues and friends, and I have changed my opinion about circuit drawings a number of times over the past ten years. Randomly browsing online, I stumbled upon this:
IEC-61082-1 INTERNATIONAL STANDARD - Preparation of documents used in electrotechnology
Such a wonderfully useless document, which also costs 310 CHF, today the equivalent of ~254 EUR. Whaaaat? I admit the 310 CHF is probably what makes it totally useless. This is a document which is supposed to be a primer, an example of how one should document circuit designs so that they are readable by others. I liken this document to an English-English (or any other language) dictionary: how can you learn and follow a language when its resources are closed?
If we disregard my anger that this rag costs a fortune, let's have a brief look at it, or at least focus only on how, in accordance with IEC-61082-1, one should draw nets and junctions. Luckily someone has uploaded a 2002 draft version of this document to the semi-disgustingly-pirate website Sribrbrbrrdddd.
Let's zoom into the junction and wire-crossing problem, which probably determines the "prettiness" of a circuit. Or at least, if one follows a single junction and wire-crossing rule in his schematics, "everything" tends to go well.
Reading further in the document, we see that wire crossings should be done at a 90-degree angle, and the hopping-over style is considered wrong according to this standard. In many aspects I can see why it is considered bad practice: if one has many wire crossings, the hopping-over style tends to mess up the diagram. A guess about where the hopping-over style comes from: (maybe) back in the old days, when one had to draw with pens (possibly fountain pens too) and ink, it was easy to draw a joint unintentionally just by slowing down while drawing a line, or in general shaking your hand; you get what I mean. So, based on the IEC-61082-1 draft, here is a summary of the junction and wire crossings:
In practice, I have so far seen an infinite number of tastes when it comes to schematic diagrams. Probably the most important rule is not really following the standards, but being consistent in how you draw. Beyond junctions and wires there come the symbols, and these may vary a lot between EDA tools and foundry PDKs. E.g. how come the bulk of a MOSFET should come out in the middle of the "transistor channel", with no drain/source markings whatsoever? There IS a difference between drains and sources; that's why people have given them different names!
And at last, the famous "in accordance with standard" circuit diagram from xkcd.
Wishing you a happy Saturday of circuit designing!