
Reading raw PCM data files into an array in ANSI C

I am working on the Rx part of the RTTY decoder. The first stage is to find a way to read the raw PCM file encoded by the Tx in my previous post and dump it into an array. The natural act was to look online for already existing code for reading little-endian 16-bit raw PCM files; to my surprise, a lot is written about PCM out there, but strangely nothing in pure ANSI C that is small and self-contained enough to be copied and used directly. An equivalent piece of ANSI C code probably exists somewhere online, but I will print my implementation here anyway, hoping that someone benefits from it. It is also a good way of keeping it here for my own reference.

// Reads a raw PCM file and prints the samples as signed integers on the standard output.
// Initial version A, Deyan Levski, 15.07.2014

#include <stdio.h>
#include <stdlib.h>

#define SAMPLE_RATE 44100 // Hz
#define BITS_PER_SAMPLE 16 // bits
#define N_SAMPLES 5000 // number of samples

// Re-interprets an unsigned bit pattern as a signed two's complement value.
int convertBitSize(unsigned int in, int bps)
{
	const unsigned int max = (1 << (bps - 1)) - 1;
	return in > max ? (int)(in - ((max + 1) << 1)) : (int)in;
}

int readPCM(int *data, unsigned int samp_rate, unsigned int bits_per_samp, unsigned int num_samp)
{
	FILE *fp;
	unsigned char buf;
	unsigned int i, j;

	(void)samp_rate; // kept for symmetry with the #define set, unused here

	fp = fopen("ttyburst.pcm", "rb"); // "rb": binary mode, also needed on Windows
	if (fp == NULL) {
		fprintf(stderr, "Cannot open ttyburst.pcm\n");
		return -1;
	}

	for (i = 0; i < num_samp; ++i) {

		unsigned int tmp = 0;

		// Little endian: least significant byte comes first.
		for (j = 0; j != bits_per_samp; j += 8) {
			if (fread(&buf, 1, 1, fp) != 1) {
				fclose(fp);
				return -1; // premature end of file
			}
			tmp += (unsigned int)buf << j;
		}
		data[i] = convertBitSize(tmp, bits_per_samp);
	}

	fclose(fp);
	return 0;
}

int main(void)
{
	int *data = (int *) malloc(N_SAMPLES * sizeof(int));
	unsigned int i;

	if (data == NULL)
		return 1;

	if (readPCM(data, SAMPLE_RATE, BITS_PER_SAMPLE, N_SAMPLES) != 0) {
		free(data);
		return 1;
	}

	for (i = 0; i < N_SAMPLES; ++i)
		printf("%d\n", data[i]);

	free(data);
	return 0;
}

Compiles without errors with gcc using the standard ANSI C option set. When run, the executable looks for ttyburst.pcm in the current working directory and prints all samples on the standard output. The raw PCM is assumed to be little-endian, signed 16-bit; the sampling rate and number of samples are set with the #define statements. Other formats can be supported with minor code modifications.

Date:Tue Jul 15 22:19:53 CEST 2014


Baudot/Murray/ITA2 codes

This post is proof that losing internet for some time can in fact boost your productivity. Or at least it made me go back to an old and well-forgotten project of mine. Back around 2005 I was very excited about acquiring an MW or HF band transceiver and trying to get in contact with some radio amateurs. Unfortunately I only ended up experimenting with my home-made pirate FM 88-108 MHz transmitter, hooking it up to my PC's sound card and using some radio amateur PSK bursting software to transmit text to another PC, whose sound card was connected to an FM receiver. It did actually work pretty well, transferring data at about 300~600 baud.

As I have had no internet for a while now, I decided to spend some time on a home-made FSK encoding program. I know there is plenty of choice when it comes to radio amateur software, but this helped me rub off some of my rust in C and have some fun too.

Baudot/Murray/ITA2 codes, also commonly referred to as five-unit codes, are practically the second evolution of the Morse code. They were extensively used back in the old days in telegraphy systems (also known as TELEX), punched cards for machine programming/execution, and many other ingenious, nowadays well-forgotten systems. Basically, a five-bit code is used for alphabet encoding. Apart from the standard (capital letters only) alphabet characters, these systems also include some special symbols such as carriage return, line feed, blank etc...

This page is a good source giving a brief overview of the ITA2 system, however for convenience I am listing the ITA2 (International Telegraph Alphabet No. 2) encoding table I used here:

Character 5-bit encoding Character 5-bit encoding
A 11000 B 10011
C 01110 D 10010
E 10000 F 10110
G 01011 H 00101
I 01100 J 11010
K 11110 L 01001
M 00111 N 00110
O 00011 P 01101
Q 11101 R 01010
S 10100 T 00001
U 11100 V 01111
W 11001 X 10111
Y 10101 Z 10001
CR 00010 SPACE 00100
Parts of the ITA2 encoding standard.

In order to be able to distinguish between the character bit-bursts, teletype systems use start and stop bit sequences. I found various sources online which suggest different bit sequences; the ones I have used are based on the information provided by this website. As I am not going to contact anyone, I didn't really care so much about that. The following image shows a single character burst including start and stop bits. In my program the start bit is a "zero" and the stop bit a "one".

Baudot RTTY burst example, as shown in a number of radio-amateur websites.

The most common transmission scheme for RTTY (Radio Teletype) in the HF amateur bands is FSK. This is practically what all the amateur encoding programs do: read text and "play" FSK bursts encoded in Baudot, when speaking about RTTY of course :). This is what my rusty program also does. Hear some burst examples:

Transmitting the string "I WILL BE BACK HOME SOON PLEASE LEAVE SOME WATER MELONS PEACHES APPRICOTS PLUMS AND GRAPES FOR ME TOO" using 10ms bit bursts with non-integer related shift frequencies (notice the hum/harmonics)
Transmitting the same string but with integer-related shift keying frequencies.
Transmitting the same string but at 100ms burst time.

You might hear an odd low-frequency hum in one of the very short burst time samples. This is apparently caused by incorrect stitching of the dual-tone sinewaves and can be fixed with proper tuning of the burst time and choice of frequencies such that no "stitching effect" appears anymore. Here is the effect I am referring to:

Sinewave "stitching effect" causing discontinuity and therefore harmonics and hum.

This can be fixed by choosing shift keying frequencies and burst lengths such that each burst contains an integer number of sine periods; for example, by making the frequency ratio an integer, $$\frac{f_{FSK_{1}}}{f_{FSK_{2}}} \in \mathbb{Z}$$

Stitching problem solved by using integer proportion shift keying sines, note there is still an issue.

There is still an issue with my algorithm: I need to delete the last sample of the FSK sine before stitching the new sine, as now there are two repeating samples which ideally should be one.

I am posting the source as well as the compiled program here, however I should warn you that the code is neither very tidy nor efficient. I am basically generating a PCM file, which I then open and listen to with Audacity. I found this to be the simplest and most painless format to use.

When importing the PCM you should use signed 16-bit mono, little-endian byte order settings. You can define the sampling frequency in the program; the default used in the precompiled program is 44.1 kHz, but you can choose your own. You will also see some self-explanatory definitions for ZERO_FREQUENCY, ONE_FREQUENCY, SAMPLING_FREQUENCY, BURST_LENGTH etc... The program reads from a text file named xmit.txt and treats carriage returns as blank space. The maximum text length is controlled by the MAXTEXTLEN definition.

Link to ttygen and the source code.

Now a question arises: why is it always easier to build a transmitter than a receiver? In pretty much every aspect of electronics, when you have to deal with transmitters/receivers it is almost always the case that the transmitter is easier to build. Now I have a new project idea: let's write some code which "receives" and decodes these generated transmission sequences ...

Maybe the next time my internet connection drops.

Date:Sun Jul 05 23:02:00 CEST 2014


An animation of a two-stage weighted CDAC

Lately the well-forgotten GIF image format has resurfaced with social media, and the popular website 9gag in particular. Well, this afternoon I thought, hmmm, there are so many idiotic gifs online, why not make something electronics-related. Voila, here it is:

A two-stage weighted cap network often used for D/A and A/D conversion.

My initial intention was to try and draw moving electrons (charge) on the gif, but it turned out to mean a lot more work than I initially thought. The animation here linearly increments the bit switches, a bit boring I admit. But hey, I don't need to draw electrons moving, the principle is quite simple (and intelligent). With the current schematic/drawing the idea is very simple (unlike some more sophisticated charge redistribution DACs). We basically form a capacitive division; if we have a look at case 1 (LSB switch connected to Vref), then we have:

Capacitive division formed between the LSB, MSB part and the split capacitor.

We can then simply calculate the output voltage of the DAC with only the LSB switch connected to $V_{ref}$ as $$V_{out} = 2V_{ref}\frac{\frac{C}{8}}{2C}$$ This split capacitance technique basically allows for a total capacitor size reduction of the whole DAC, as the "total weighting factor" of e.g. a traditional CDAC is here split into two parts, which, if you do the simple maths, reduces cap size/area. Here is a link to the original paper from 1979.

Sadly with this my sunny day off is over!

Date:Sun Jun 06 18:37:00 CEST 2014


Some thoughts on 1/f noise.

Now that it's spring here in Norway, the nights are getting shorter and shorter, which means that one wakes up from time to time at 03:00-04:00 a.m. due to the rising sun and, in particular, the birds singing around the forest.

One morning last week my sleep was disturbed by a few ravens and crows measuring powers in the very prestigious annual contest "Ugly crow", organized by the local animal union committee, headed by the main moose Mr. Elg. I noticed that while the ravens were taking part in the contest, all the other (smaller) birds continued to "applaud" the whole time the ravens were fighting. While half-asleep, the term 1/f noise struck my mind.

Of course it is widely known that many natural phenomena follow a 1/f distribution, or in simple words: the higher the power, the less frequent the event, and vice-versa. Half-asleep, I did try to think about and correlate the birds' songs or the species themselves. A few days later I decided to record some of the bird songs, perform a DFT on various recorded samples and then average the f-plots to see whether there is any such 1/f dependency. The results I show below might not be a full success due to various non-idealities and limited sample sets, but at least they hint at an already known fact about 1/f and possibly show how many things can go wrong even with a simple time-frequency transformation such as the DFT I used.

There are a number of papers about 1/f in human cognition; if interested, I suggest looking at:

"1/f noise in music and speech", Richard Voss, John Clarke, Nature vol. 258, November 27, 1975

"1/f noise in human cognition", D. Gilden, T. Thornton, Science vol. 267, March 24, 1995

"1/f noise a pedagogical review", Eduardo Milotti

To provide a picture of the 1/f occurrence in many "systems" I dare to reprint a figure from "1/f noise", Lawrence M. Ward and Priscilla E. Greenwood (2007), Scholarpedia

Examples of 1/f noises occurring in many systems, source:Scholarpedia

This figure is staggering!

Let's start by hearing the bird sample I used.

You might notice that the electronic hum from the microphone preamp of my computer is of the same order of magnitude as the birds' songs. This makes these samples very difficult to analyse, as we are interested only in the bird content and any other noise (electronic, numerical etc...) should be low. Samples with a somewhat higher dynamic range would have given a better start.

There are a number of ways to perform this measurement, and some might argue: is it the various bird species' songs combined together, from a listener's point of hearing, that should all be counted for the 1/f measurement, or is it a single bird that should be isolated for the measurement? In the current case I possibly isolate a set of birds (bigger species), as I try to filter out high frequencies and focus my DFT only on the range up to 60 Hz. Here is a block diagram of the signal chain for my analysis:

An overview of the audio signal chain.

After reading the input sample in wav format, a 10th-order low-pass Butterworth filter with a cutoff frequency of 10 kHz is applied. Then, after squaring the samples, the signal is fed through a second filter of the same type with a cutoff of 60 Hz. After this extraction (with some non-idealities from the filters), the continuous-time sample information up to 60 Hz is represented in the f-domain by a DFT.

Now the question arises: how large a window should one have to get accurate enough DFT information for very low frequencies, 0.1 Hz - 10 Hz? And another question: how long a sample should one have in order to obtain good (with enough oversampling) 1/f plots? We know that the frequency resolution depends on the relationship between the input signal sampling rate and the DFT window length. In the current case we have a sampling frequency of 44.1 kHz, so if we collect 1024 samples (a pretty standard number) for the DFT we will have a frequency bin resolution of: $$\Delta f = \frac{f_s}{N} = \frac{44100}{1024} \approx 43\ \mathrm{Hz}$$

This is clearly not enough, so to get a 0.1 Hz resolution we need about: $$N = \frac{f_s}{\Delta f} = \frac{44100}{0.1} = 441\ \mathrm{ksamples}$$

We can see that for such low frequencies we need a significant size of the DFT window. This on the other hand would impose that the length of our "bird song" sample must also be quite long to get some meaningful averaged 1/f plots. For instance the 441k samples relate to about 10 seconds of sample time.

Now these birds have a tendency to make quite significant pauses between their squawks, sometimes even over a few minutes. If we also follow the basic engineering rule of thumb that, in order to get any reasonable data, the size of the window should be about 5 times the minimum window criterion, we get 2.205 Msamples. Taking such a huge sampling window would greatly degrade the temporal resolution of the analysis. We still don't care about frequencies beyond 40-60 Hz, but still. For example, if a crow has squawked 15 times during the first 25 seconds of our sample window and 20 times during the next 25 seconds, we would still see peaks for both frequencies. The temporal resolution becomes even worse for smaller birds, which squawk even more frequently.

All these simple facts make this 1/f analysis quite subjective with the methods used: strictly speaking, only a few minutes of samples with low dynamic range, and a forced trade-off between temporal and frequency resolution. Nevertheless, here are some plots of a few samples, not only "bird" content.

Some bird samples.

Some bird 1/f approximations. alpha ~= 1.3

A news emission of the Bulgarian National Radio on 17 May 2014

A news emission 1/f approximations. alpha ~= 1.9

Pink Floyd's Comfortably Numb guitar solo from the 1994 P.U.L.S.E. (The Division Bell tour) concert in Earls Court, London.

Pink Floyd's Comfortably Numb guitar solo 1/f approximations. alpha ~= 1.5

Call it spectral leakage, poor temporal/frequency resolution tradeoff, non-integer sampling, poor SNR of the audio samples, or filter distortion due to passband ripple, the plots somehow do not look very clean. One thing is certain: the dependency is 1/f^alpha, with alpha measuring roughly ~1.3 for the birds' songs, ~1.9 for the news emission and ~1.5 for the guitar solo.

Even injecting a 50 Hz signal into the samples shows signs of some distortion from the ripple in the Butterworth filters, plus non-integer sampling.

50 Hz sinewave fed through the 10th- and 5th-order Butterworth filters

You can find the Octave scripts here.

All of the aforementioned was/is a nice exercise showing that often-applied analyses and measurements require compromises which are only up to the engineer's cognition. There is no absolutely right or sharply wrong approach in this analog world. As for the 1/f, a question arises: is this "law" also applicable to human stereotypes?

Date:Sun May 25 18:56:00 CEST 2014


Teaching analog design in an esoteric fashion.

Occasionally I browse through the pages of some of the Bulgarian academic and research centres focusing on the field of IC design. This week I have been enjoying the pages of Cyril Mechkov, a teacher of analog circuit theory at the Technical University of Sofia. I am more than impressed by his methods of explaining circuits: avoiding the derivation of complex transfer functions and formulas, and instead guiding the students with intuitive explanations and examples related to everyday life.

Apart from Mechkov's circuit-fantasia website, he has also uploaded all his work to wikibooks, in a book called Circuit Idea. A great fantasy has struck him: trying to involve the students taking his courses to actively participate in the book's development, having this as a micro assignment. I feel this great work somehow needs more attention, and this is partly why I am writing about him. Here is a simple illustrative example of his thoughts and ways of explaining things:

As simple a phenomenon as the voltage drop across a resistor is explained by Mechkov in the following way. Imagine a large water tank that is connected to smaller vessels of the same height. The water tank is full and the end of the tap is closed. Mechkov's hand-drawn diagram:

The local pressures along a tapped pipe are equal to the input pressure, source: Cyril Mechkov, Circuit Idea

Now, if one opens the far end of the pipe, water will start flowing and the pressures will decrease gradually according to basic hydraulic principles.

The local pressures along a tapped pipe decrease gradually, source: Cyril Mechkov, Circuit Idea

A very simple analogy can be made with a resistor and the voltage drop across it. Two analogies with voltage drop follow:

No voltage drop if no current flows, source: Cyril Mechkov, Circuit Idea

And the other way around:

If current is drawn, then the voltage drops linearly, source: Cyril Mechkov, Circuit Idea

The wikibook is enriched with figures in the same intuitive fashion, accompanied by solid explanations and finally mathematical formulae (offtopic: oh, this fancy way of writing such a simple word) covering the basic circuits in depth.

Come to think of it, I did not find in his records an explanation of the Miller effect. Well, here is my attempt at drawing an intuitive figure about the Miller effect:

My attempt to explain the Miller effect with manikins pulling a rope through a system of reels. Poor phone camera picture, plus an attempt to apply colour threshold filtering.

At first sight this looks like a rather funny way of explaining it. Our poor single manikin is pulling down, whilst a bunch of other strong guys are counteracting our single boy. So the higher the gain (A), or the transconductance (gm) in the case here, the stronger the guys would be, and thus the larger the effective Miller capacitance $C_{in} = C_{gd}(1 + A)$. Well, one should represent Cgd in another way to get a better picture, but still.
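For reference, the textbook derivation the manikins are acting out (a standard result, not taken from Mechkov's book): with an inverting gain $V_{out} = -A V_{in}$, the current through $C_{gd}$ is

$$ i = (V_{in} - V_{out})\,sC_{gd} = V_{in}(1 + A)\,sC_{gd} $$

so, seen from the input, the feedback capacitor appears multiplied by the gain:

$$ C_{in} = C_{gd}(1 + A) $$

which is exactly why a single small "capacitor" feels like a whole team of strong guys pulling back.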

Ah well, if not for educational purposes, this might make out a good nerdy t-shirt:

The t-shirt fashion trends next year.

Happy last six hours of the weekend :)

Date:Sun May 11 18:05:00 CEST 2014