
Baudot/Murray/ITA2 codes

This post is proof that losing internet access for some time can in fact boost your productivity. Or at least it made me go back to an old and well-forgotten project of mine. Back in ?2005? I was very excited about acquiring an MW or HW band transceiver and trying to get in contact with some radio amateurs. Unfortunately I only ended up experimenting with my home-made pirate FM 88-108 MHz transmitter, hooking it up to my PC's sound card and using some radio amateur PSK bursting software to transmit text to another PC, whose sound card was connected to an FM receiver. It actually worked pretty well, transferring data at about 300~600 baud.

As I have not had internet for a while now, I decided to spend some time on a home-made FSK encoding program. I know there is plenty of choice when it comes to radio amateur software, but this helped me rub off some of my rust in C and have some fun too.

Baudot/Murray/ITA2 codes, also commonly referred to as five-unit codes, are practically the second evolution of Morse code and were extensively used back in the day in telegraphy systems (also known as TELEX), machine programming/execution punched cards and many other ingenious, nowadays well-forgotten systems. Basically, a five-bit code is used for alphabet encoding. Apart from the standard (capital letters only) alphabet characters, these systems also include some special symbols such as carriage return, line feed, blank etc...

This page is a good source giving a brief overview of the ITA2 system; however, for convenience I am listing the ITA2 (International Telegraph Alphabet No. 2) encoding table I used here:

Character 5-bit encoding Character 5-bit encoding
A 11000 B 10011
C 01110 D 10010
E 10000 F 10110
G 01011 H 00101
I 01100 J 11010
K 11110 L 01001
M 00111 N 00110
O 00011 P 01101
Q 11101 R 01010
S 10100 T 00001
U 11100 V 01111
W 11001 X 10111
Y 10101 Z 10001
CR 00010 SPACE 00100
Parts of the ITA2 encoding standard.

In order to be able to distinguish between the character bit-bursts, teletype systems use start and stop bit sequences. I found various sources online which suggest different bit sequences; the ones I have used are based on the information provided by this website. As I am not going to contact anyone, I didn't really care so much about that. The following image shows a single character burst including start and stop bits. In my program the start bit is a "zero" and the stop bit a "one".
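In C, the table lookup and the start/stop framing can be sketched roughly like this. This is only a partial table, and the function names are my own for illustration, not taken from the program posted below:

```c
#include <stdint.h>

/* A few entries of the ITA2 table above, codes written MSB-first.
   A real encoder would cover the full A-Z, CR and SPACE set. */
static uint8_t ita2_code(char c)
{
    switch (c) {
    case 'A': return 0x18; /* 11000 */
    case 'B': return 0x13; /* 10011 */
    case 'E': return 0x10; /* 10000 */
    case 'T': return 0x01; /* 00001 */
    case ' ': return 0x04; /* 00100 */
    default:  return 0x00; /* blank for anything unmapped */
    }
}

/* Wrap one 5-bit code in a start bit (0) and a stop bit (1), giving
   the 7-bit burst sent for a single character. Bits are stored in
   transmission order: start, data MSB-first, stop. */
static void emit_burst(char c, int bits[7])
{
    uint8_t code = ita2_code(c);
    bits[0] = 0;                         /* start bit */
    for (int i = 0; i < 5; i++)
        bits[1 + i] = (code >> (4 - i)) & 1;
    bits[6] = 1;                         /* stop bit */
}
```

For 'A' (11000) this produces the transmitted sequence 0 11000 1, matching the burst picture below.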

Baudot RTTY burst example, as shown in a number of radio-amateur websites.

The most common transmission scheme for RTTY (Radio Teletype) in the HF amateur bands is FSK. This is practically what all the amateur encoding programs do: read text and "play" FSK bursts encoded in Baudot, when speaking about RTTY of course :). This is what my rusty program also does. Hear some burst examples:

Transmitting the string "I WILL BE BACK HOME SOON PLEASE LEAVE SOME WATER MELONS PEACHES APPRICOTS PLUMS AND GRAPES FOR ME TOO" using 10ms bit bursts with non-integer related shift frequencies (notice the hum/harmonics)
Transmitting the same string but with integer-related shift keying frequencies.
Transmitting the same string but at 100ms burst time.

You might hear an odd low-frequency hum in one of the very short burst time samples. This is apparently caused by incorrect stitching of the dual-tone sinewaves and can be fixed by proper tuning of the burst time and a choice of frequencies such that no "odd stitching effect" appears anymore. Here is the "d-effect" I am referring to:

Sinewave "stitching d-effect" causing discontinuity and therefore harmonics and hum.

This can be fixed by choosing shift keying frequencies, together with the burst length, such that their ratio is an integer, for example, $$\frac{f_{FSK_{1}}}{f_{FSK_{2}}} = \mathrm{int}$$

Stitching problem solved by using integer proportion shift keying sines, note there is still an issue.

There is still an issue with my algorithm: I need to delete the last sample of the FSK sine before stitching the new sine, as now there are two repeating samples where ideally there should be one.

I am posting the source as well as the compiled program here; however, I should warn you that the code is not very tidy nor efficient. I am basically generating a raw PCM file, which I then open and listen to with Audacity. I found this to be the simplest and most painless format to use.

When importing the PCM you should use signed 16-bit, mono, little-endian settings. You can define the sampling frequency in the program; the default used in the precompiled program is 44.1 kHz, but you can choose your own. You will also see some self-explanatory definitions for ZERO_FREQUENCY, ONE_FREQUENCY, SAMPLING_FREQUENCY, BURST_LENGTH etc... The program reads from a text file named xmit.txt and treats carriage returns as blank space. The file length is also controlled by the MAXTEXTLEN definition.
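The import settings follow directly from how the samples are written out. A minimal sketch of emitting signed 16-bit little-endian mono raw PCM (the function name is hypothetical, not from ttygen):

```c
#include <stdio.h>
#include <stdint.h>

/* Write samples as signed 16-bit little-endian mono raw PCM -- exactly
   the settings to select when importing the file into Audacity. */
static int write_pcm(const char *path, const int16_t *samples, size_t n)
{
    FILE *f = fopen(path, "wb");
    if (!f)
        return -1;
    for (size_t i = 0; i < n; i++) {
        /* emit the low byte first to force little-endian order,
           independent of the host machine's endianness */
        fputc(samples[i] & 0xff, f);
        fputc((samples[i] >> 8) & 0xff, f);
    }
    fclose(f);
    return 0;
}
```

Writing byte-by-byte like this avoids any dependence on the endianness of the machine generating the file.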

Link to ttygen and the source code.

Now a question arises: why is it always easier to build a transmitter than a receiver? In pretty much every aspect of electronics, when you deal with transmitters/receivers it is almost always the case that the transmitter is easier to build. Now I have a new project idea: let's write some code which "receives" and decodes these generated transmission sequences ...

Maybe the next time my internet connection drops.

Date:Sun Jul 05 23:02:00 CEST 2014


An animation of a two-stage weighted CDAC

Lately the well-forgotten GIF image format has been resurrected by social media, and the popular website 9gag in particular. Well, this afternoon I thought, hmmm, there are so many idiotic gifs online, why not make something electronics-related. Voila, here it is:

A two-stage weighted cap network often used for D/A and A/D conversion.

My initial intention was to try and draw moving electrons (charge) on the gif, but it somehow meant a lot more work than I initially thought. The animation here linearly increments the bit switches, a bit boring I admit. But hey, I don't need to draw moving electrons; the principle is quite simple (and intelligent). With the current schematic/drawing the idea is very simple (unlike some more sophisticated charge redistribution DACs). We basically form a capacitive divider; if we have a look at case 1 (LSB switch connected to Vref), then we have:

Capacitive division formed between the LSB, MSB part and the split capacitor.

We can then simply calculate the output voltage of the DAC with only the LSB switch connected to $V_{ref}$ as $$V_{out} = 2V_{ref}\frac{\frac{C}{8}}{2C}$$ This split-capacitance technique basically allows for a total capacitor size reduction of the whole DAC, as the "total weighting factor" of e.g. a traditional CDAC is here split into two parts, which, if you do the simple maths, reduces cap size/area. Here is a link to the original paper from 1979.
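Working out the fraction gives:

$$V_{out} = 2V_{ref}\,\frac{C/8}{2C} = 2V_{ref}\cdot\frac{1}{16} = \frac{V_{ref}}{8}$$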

Sadly with this my sunny day off is over!

Date:Sun Jun 06 18:37:00 CEST 2014


Some thoughts on 1/f noise.

Now that it's spring here in Norway, the nights are getting shorter and shorter, which in turn means that one wakes up from time to time at 03-04:00 a.m. due to the rising sun and, in particular, the birds singing around the forest.

One of the mornings last week my sleep was disturbed by a few ravens and crows measuring powers in the very prestigious annual contest "Ugly crow", organized by the local animal union committee, headed by the main moose Mr. Elg. I noticed that while the ravens were taking part in the contest, all the other (smaller) birds continued to "applaud" the whole time the ravens were fighting. While I was half-asleep, the term 1/f noise struck my mind.

Of course it is widely known that many natural phenomena follow a 1/f distribution; in simple words, the higher the power, the less frequent the event, and vice-versa. Half-asleep, I did try to think and correlate the birds' songs or the species themselves. One of the following days I decided to record some of the bird songs, perform a DFT on various recorded samples and then average the f-plots to see whether there is any such 1/f dependency. The results I show below might not be a full success due to various non-idealities and limited sample sets, but at least they hint at an already known fact about 1/f and possibly show that a number of things can go wrong even with a time-frequency transformation as simple as the DFT I used.

There are a number of papers about 1/f in human cognition, if interested, I suggest looking at:

"1/f noise in music and speech", Richard Voss, John Clarke, Nature vol. 258, November 27, 1975

"1/f noise in human cognition", D. Gilden, T. Thornton, Science vol. 267, March 24, 1995

"1/f noise: a pedagogical review", Eduardo Milotti

To provide a picture of the 1/f occurrence in many "systems" I dare to provide a reprint from "1/f noise", Lawrence M. Ward and Priscilla E. Greenwood (2007), Scholarpedia

Examples of 1/f noises occurring in many systems, source: Scholarpedia

This figure is staggering!

Let's start by hearing the bird sample I used.

You might notice that the electronic hum from my computer's microphone preamp is of the same order of magnitude as the birds' songs. This makes these samples very difficult to analyse, as we are interested only in the bird content and any other noise (electronic, numerical etc...) should be low. Samples with a somewhat higher dynamic range would have given a better start.

There are a number of ways to perform this measurement, and some might argue: is it the various bird species' songs combined, from a listener's point of hearing, that should be counted for the 1/f measurement, or is it a single bird that should be isolated? In the current case I effectively isolate a set of (bigger) bird species, as I filter out high frequencies and focus my DFT only on the range up to 60Hz. Here is a block diagram of the signal chain for my analysis:

An overview of the audio signal chain.

After reading the input sample in wav format, a 10th-order low-pass Butterworth filter with a cutoff frequency of 10kHz is applied. Then, after squaring the samples, the signal is fed through a second filter of the same type with a cutoff of 60Hz. After this extraction (with some non-idealities from the filters), the continuous-time sample information up to 60Hz is represented in the f-domain by a DFT.

Now the question arises: how large a window does one need to get accurate enough DFT information for very low frequencies, 0.1Hz - 10Hz? And another question: how long a sample does one need in order to obtain good (sufficiently oversampled) 1/f plots? We know that the frequency resolution depends on the relationship between the input signal sampling rate and the DFT window length. In the current case we have a sampling frequency of 44.1kHz, so if we collect 1024 samples (a pretty standard number) for the DFT we will have a frequency bin resolution of: $$\Delta f = \frac{f_{s}}{N} = \frac{44100}{1024} \approx 43\,\mathrm{Hz}$$

This is clearly not enough, so to get a 0.1Hz resolution we need about: $$N = \frac{f_{s}}{\Delta f} = \frac{44100}{0.1} = 441000\ \mathrm{samples}$$

We can see that for such low frequencies we need a significant DFT window size. This in turn means that our "bird song" sample must also be quite long to get meaningful averaged 1/f plots. For instance, the 441k samples correspond to about 10 seconds of sample time.
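The two window-size estimates above boil down to a single division each; a trivial helper (names are mine) for playing with other sampling rates and target resolutions:

```c
/* Frequency bin resolution of an N-point DFT sampled at fs (Hz), and
   the DFT length needed to reach a target resolution df (Hz).
   With fs = 44100 and N = 1024 the bin width is ~43.07 Hz; reaching
   0.1 Hz needs 44100 / 0.1 = 441000 samples, i.e. 10 s of audio. */
static double dft_resolution(double fs, int n)
{
    return fs / n;
}

static double dft_length(double fs, double df)
{
    return fs / df;
}
```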

Now, these birds have a tendency to make quite significant pauses between their squawks, sometimes even over a few minutes. If we also follow the basic engineering rule of thumb that, to get any reasonable data, the window should be about 5 times the minimum window criterion, we get 2.205 Msamples. Taking such a huge sampling window would greatly degrade the temporal resolution of the analysis. We still don't care about frequencies beyond 40-60 Hz, but still. For example, if a crow has squawked 15 times during 25 seconds and 20 times during the next 25 seconds (of our sample window), we would still see peaks for both frequencies. The temporal resolution becomes even worse for smaller birds, which squawk even more frequently.

All these simple facts make this 1/f analysis quite subjective with the methods used: strictly speaking, samples of only a few minutes with low dynamic range, and a forced trade-off between temporal and frequency resolution. Nevertheless, here are some plots of a few samples, not only "bird" content.

Some bird samples.

Some bird 1/f approximations. alpha ~= 1.3

A news emission of the Bulgarian National Radio on 17 May 2014

A news emission 1/f approximations. alpha ~= 1.9

Pink Floyd's Comfortably Numb guitar solo from the 1994 P.U.L.S.E. (The Division Bell tour) concert in Earls Court, London.

Pink Floyd's Comfortably Numb guitar solo 1/f approximations. alpha ~= 1.5

Call it spectral leakage, a poor temporal/frequency resolution trade-off, non-integer sampling, the poor SNR of the audio samples, or filter distortion due to passband ripple: the plots somehow do not look very clean. One thing is certain, though: the dependency is 1/f^alpha, with alpha measuring roughly ~1.3 for the birds' songs, ~1.9 for the news emission and ~1.5 for the guitar solo.

Even injecting a 50Hz signal into the samples shows signs of distortion from the ripple of the Butterworth filters, plus non-integer sampling.

50 Hz sinewave fed through the 10th- and 5th-order Butterworth filters

You can find the Octave scripts here.

All of the above was/is a nice exercise showing that often-applied analyses and measurements require compromises which are up to the engineer's cognition alone. There is no very right or sharply wrong approach in this analog world. As for 1/f, a question arises: is this "law" also applicable to human stereotypes?

Date:Sun May 25 18:56:00 CEST 2014


Teaching analog design in an esoteric fashion.

Occasionally I browse through the pages of some of the Bulgarian academic and research centres focusing on the field of IC design. This week I have been enjoying the pages of Cyril Mechkov, a teacher of analog circuit theory at the Technical University of Sofia. I am more than impressed by his methods of explaining circuits: avoiding the derivation of complex transfer functions and formulas, and instead guiding the students with intuitive explanations and examples related to everyday life.

Apart from Mechkov's circuit-fantasia website, he has also uploaded all his work to wikibooks, in a book called Circuit Idea. A great idea struck him: involving the students taking his courses as active participants in the book's development, as a micro assignment. I feel this great work somehow needs more attention, and this is partly why I am writing about him. Here is a simple illustrative example of his thoughts and ways of explaining things:

A phenomenon as simple as the voltage drop across a resistor is explained by Mechkov in the following way. Imagine a large water tank connected to smaller vessels of the same height. The water tank is full and the end of the tap is closed. Mechkov's hand-drawn diagram:

The local pressures along a tapped pipe are equal to the input pressure, source: Cyril Mechkov, Circuit Idea

Now, if one opens the far end of the pipe, water will start flowing, and the local pressures will decrease gradually according to basic hydraulic principles.

The local pressures along a tapped pipe decrease gradually, source: Cyril Mechkov, Circuit Idea

A very simple analogy could be made with a resistor and the voltage drop over it. Two analogies with voltage drop follow:

No voltage drop if no current flows, source: Cyril Mechkov, Circuit Idea

And the other way around:

If current is drawn, then the voltage drops linearly, source: Cyril Mechkov, Circuit Idea

The wikibook is enriched with figures in the same intuitive fashion, accompanied by solid explanations and finally mathematical formulae (offtopic: oh, this fancy way of writing such a simple word) covering the basic circuits in depth.

Come to think of it, I did not find in his records an explanation of the Miller effect. Well, here is my attempt at drawing an intuitive figure about the Miller effect:

My attempt to explain the Miller effect with manikins pulling a rope through a system of reels. A poor phone camera picture plus an attempt at colour threshold filtering.

At first sight this looks like a rather funny way of explaining it. Our poor single manikin pulls down, whilst a bunch of other strong guys counteract our single boy. So, the higher the gain (A), or transconductance (gm) in the case here, the stronger the guys, and thus the larger the effective Miller capacitance, $$C_{M} = C_{gd}(1 + A)$$. Well, one should represent Cgd in another way to get a better picture, but still.

Ah well, if not for educational purposes, this might make a good nerdy t-shirt:

The t-shirt fashion trends next year.

Happy last six hours of the weekend :)

Date:Sun May 11 18:05:00 CEST 2014


Computing and the human brain / Neuromorphic Image Sensors.

Lately a colleague of mine has been working on his master's thesis, which involves the design of a vision sensor inspired by principles found in the human brain. The idea behind his and, in general, neuromorphic vision sensors / neuromorphic computing is brilliant, and at the same time extremely challenging to fully understand and reproduce with existing silicon VLSI technologies.

Vision sensors emulating the human retina.

Inspired by the principle, I'll try to cover the basic idea behind human-retina-emulating sensors in a nutshell. I will also try to give a brief history of computing inspired by the human brain.

OK, so what vision sensors do we use now, in 2014, as a tool for transforming light into digital images? According to various data sources, the dominant technology nowadays is CMOS imagers.

Source: iSupply

Slipping off-track with the chart above, we can note that probably 99% of all mass-produced machine vision sensors are based on raw image data extraction and processing. While there has been very significant progress in image recognition and a number of very successful machine vision algorithms have been invented (3D object, scene, texture recognition etc...), there is still a huge gap between the performance of the aforementioned algorithms and that of even the most primitive biological species, such as insects. One of the key differences is the way the data processing and analysis is done.

An ordinary machine vision system today captures and processes the full number of pixels and frames which a raw-data vision sensor provides. While capturing full-frame data relaxes the complexity of the vision analysis algorithms used, it imposes a tremendous computational and data-transfer bottleneck on the analytical device, in other words the computer. The key "feature" which sets the biological retina apart from "ordinary" raw-data vision sensors hides in the information capture and transfer between the imager and the processing device. That is, a biological retina triggers, sends and processes only newly entered light information, which avoids the computational and data-transfer bottleneck further down the chain. In a nutshell, a bio-inspired vision sensor sends information only for newly triggered pixel events.

Going back to my colleague's vision sensor and the very basic principle of operation of integrate-and-fire neuron pixels: instead of reading out an absolute voltage level, bio-inspired integrate-and-fire neuron pixels generate a trigger event on a change of light intensity. Here is a very primitive example of such an event-generating (spiking) circuit.

Axon-hillock circuit as described by C.A. Mead in Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley, 1989.

The input current coming from the photodiode pixel is integrated on a capacitor Cm; the voltage on the top plate of Cm increases until it reaches the threshold level of the CMOS inverter-based buffer. When the buffer switches, the output voltage changes to Vdd, switching on the reset transistor. The positive feedback capacitor Cf forms a capacitive divider with Cm and acts as a pulse sharpener. By controlling the bias voltage of the reset transistor branch, one can control the sensitivity of the integrator as well as the duration and number of the spikes. Here is a link to Mead's original publication.
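To get a feel for the behaviour, here is a toy behavioural (not transistor-level) model of such a pixel. All component values are illustrative placeholders of my own choosing, not taken from Mead's circuit:

```c
/* Behavioural model of the axon-hillock pixel: a constant photocurrent
   integrates on Cm until the buffer threshold is crossed, one spike is
   counted and the membrane voltage resets. */
#define CM   1e-15   /* membrane capacitance, F (1 fF, assumed) */
#define VTH  0.9     /* inverter switching threshold, V (assumed) */
#define DT   1e-6    /* simulation time step, s */

/* Returns the number of spikes fired for a constant photocurrent
   i_photo (A) over t_total seconds. */
static int integrate_and_fire(double i_photo, double t_total)
{
    double vm = 0.0;
    int spikes = 0;
    long steps = (long)(t_total / DT);

    for (long k = 0; k < steps; k++) {
        vm += i_photo * DT / CM;   /* dV = I*dt/C on the cap */
        if (vm >= VTH) {
            spikes++;              /* buffer flips: one output spike */
            vm = 0.0;              /* reset transistor discharges Cm */
        }
    }
    return spikes;
}
```

A brighter pixel sees a larger photocurrent and therefore fires at a higher rate, which is exactly the intensity-to-spike-frequency encoding described above.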

OK, so what do we do with all these spiking pixels? The second part of the puzzle hides in a tracking system that senses and decides which pixels generate spikes and which did so first. There seem to be a number of architectures, but all of them are based on winner-take-all (WTA) circuits as well as Address Event Representation (AER) protocols.

Having determined the spatial location of the events, the newly generated scene information can be further supplied to a machine vision processing system. A comprehensive paper by Giacomo Indiveri et al. is Neuromorphic silicon neuron circuits.

Here are some pictures of my visit to Lukasz's lab and his neuromorphic vision sensor.

The camera setup.
Front-side lens.
Overview of the testbench. The scope was hooked-up to a testbus measuring a test-pixel's output.
Coherent red LEDs used as a light source.
A live view, the output from the imager is noisy due to wrong pixel bias voltages.
The testbench from a slightly different angle.

Some history de-brief on neuromorphic computing.

Possibly one of the first elaborate publications on bio-inspired computing traces back to 1958 and John von Neumann's last book, The Computer and the Brain, in which he draws analogies between existing computing machines and the living human brain. A general conclusion is that the brain functions partly in the analog and partly in the digital domain.

Later, Carver Mead published the first ever book on neuromorphic computing, Analog VLSI and Neural Systems. It gives a good representation of AI principles applied in analog VLSI systems.

Speaking of bio-inspired vision sensors, the first one was reported by Misha Mahowald, a student of Mead; her 1988 publication A silicon model of early visual processing describes an analog model of the first stages of retinal processing.

Well, with this my inspiring Saturday afternoon finishes, hmmm... will all major electronic systems be bio-inspired one day???

Date:Tue May 03 17:05:00 CEST 2014