


Beta expansion ADCs

Recently, I conducted a quick investigation into esoteric ADC architectures which have probably never left the test-chip arena, or if they have, we do not know much about it due to the usual corporate secrecy policies. I want to share my collection of links with you readers and try to popularize this otherwise very niche field.

One of the esoteric, or rather unexplored, areas of ADCs is beta-expansion number representation. Beta-expansion is practically a complicated way of saying non-integer radix number representation, with the radix usually falling between 1 and 2 when we talk about it in the context of ADCs. Other synonymous terms can also be found, such as "golden ratio encoder", "beta encoder", "non-integer encoder", "flaky quantizer", etc.

The beta encoding technique is mainly applicable to residue-based ADCs such as pipelined, cyclic and SAR ADCs. It may be that it can also be applied to some of the other existing ADC architectures; however, I am not aware how it could be used in an integrating or flash ADC, for example. To make this post more informative, let me give you a brief example of beta quantization in a pipelined ADC. Here is the transfer function of a 1.5 bit/stage MDAC:
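For concreteness, here is a minimal Python model of such a stage. It is only a sketch under the textbook assumptions (comparator thresholds at ±Vref/4, ideal DAC levels of ±Vref and 0, and the DAC term left untouched by mismatch); the gain argument will come in handy further down:

    def mdac_stage(vin, gain=2.0, vref=1.0):
        """One 1.5 bit/stage MDAC: coarse decision d in {-1, 0, +1} at thresholds of
        +/- Vref/4, then residue amplification; gain=2.0 is the ideal radix-2 case."""
        if vin > vref / 4:
            d = 1
        elif vin < -vref / 4:
            d = -1
        else:
            d = 0
        return d, gain * vin - d * vref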

You can see that there is already an added redundancy of 0.5 bits, which together with the classic back-end digital error correction algorithm works very efficiently at removing the offset errors of the comparators and the MDAC OTA, up to levels of 1/4 of the reference voltage. The figure above isolates only the offset errors; if we now look at the gain errors produced by the OTA and capacitors in the MDAC, we get a different form of transfer function distortion:

It is quite clear that the classic stage-redundancy-based digital error correction would not work in the case of residue gain errors, so we would get some rather nasty DNL code gaps. It is very important to mention here that gain errors in the MDAC stage result in radix deformations in the converted values. These deformations can tilt the effective radix either above or below 2, usually depending on capacitor matching and OTA gain.

So if we know that the radix is linearly shifted, we could technically try to correct it with a single multiplication. Here comes the beta-encoding part, which aims to modify the MDAC in advance such that it intentionally produces redundant code, i.e. has a radix lower than 2. Radixes higher than 2 result in binary code gaps which are non-correctable, as they have no redundancy. Designing for a lower-radix code means that we can use a constant-low-gain OTA and intentionally mismatched sampling capacitors in the MDAC. The uncorrected output of such a converter would then look more like the figure below:
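As a rough illustration of what produces those raw codes (my own toy model, not any particular published design), a cyclic conversion built from the stage model above with an intentionally low gain, say beta = 1.85, delivers the redundant decisions that the back-end later has to re-weight:

    def beta_convert(vin, beta=1.85, nbits=12, vref=1.0):
        """Cyclic conversion with a deliberately sub-2 stage gain (radix = beta).
        Returns the raw, uncorrected decisions d_1..d_N, each in {-1, 0, +1}."""
        bits, residue = [], vin
        for _ in range(nbits):
            d, residue = mdac_stage(residue, gain=beta, vref=vref)
            bits.append(d)
        return bits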

Thus, if we measure two points from the ADC's transfer function (A and B), we can determine the produced radix by using an iterative method. The radix error, based on our swept radix search parameter $\beta_{sw}$ and the two points (A and B) from the transfer curve of the ADC, would be: $$\sigma(\beta_{sw}) = \sum_{n=1}^{12} \left(D_{n_{A}} - D_{n_{B}}\right)\beta_{sw}^{-n}$$

That fancy equation tells us that the radix shift is linear and can be determined once we have the two points... The non-linearity shown in the graph below comes from $\beta_{sw}$, which is our estimate/guess. Beta codes show redundant patterns, which is why we see that non-linearity in the radix estimation error.
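A brute-force version of that search, together with the re-weighting back to an ordinary binary-weighted value, could look roughly like the sketch below. It assumes (this is my reading of the method, not code from any of the papers listed further down) that bits_a and bits_b are two redundant raw codes obtained for the same input level, for instance by forcing the first comparator decision both ways, so that $\sigma(\beta_{sw})$ crosses zero at the true radix:

    import numpy as np

    def radix_error(beta, bits_a, bits_b):
        """sigma(beta): difference of the two codes reconstructed with trial radix beta."""
        n = np.arange(1, len(bits_a) + 1)
        return float(np.sum((np.asarray(bits_a) - np.asarray(bits_b)) * beta ** (-n)))

    def estimate_radix(bits_a, bits_b, beta_lo=1.5, beta_hi=2.0, steps=20000):
        """Sweep the trial radix and keep the value that minimises |sigma(beta)|."""
        betas = np.linspace(beta_lo, beta_hi, steps)
        sigma = [abs(radix_error(b, bits_a, bits_b)) for b in betas]
        return betas[int(np.argmin(sigma))]

    def beta_decode(bits, beta):
        """Re-weight the raw decisions by beta**-n; with beta = 2 this collapses to the
        conventional digitally corrected radix-2 code."""
        return sum(d * beta ** -(n + 1) for n, d in enumerate(bits))

Within this toy model, running beta_decode with beta = 2 on ideal radix-2 decisions reproduces the familiar 1.5-bit digital error correction, which is a handy sanity check.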

Once we have estimated the radix, we can apply a conventional floating-point multiplication to bring the A/D-converted code back to radix-2. All of the above sounds like a great idea, but hmmm...

1) We got rid of the high-gain requirement for the OTA, but pushed the problem into designing a constant-low-gain OTA, which is nearly as non-trivial as designing a very high-gain OTA.

2) In the case of a pipelined ADC, capacitor mismatch between the stages is still an issue.

3) In the case of a cyclic ADC, the beta-encoding technique might actually work well, as we have a single capacitor pair for all conversion cycles.

4) It requires a costly floating-point multiplication, as well as an initial radix estimation and coefficient storage.

5) Point 4) makes the implementation in an ADC array nearly impossible, or not worth it, due to the huge number of coefficients and multiplications required.

All of the above facts make the classic beta-expansion technique look not terribly attractive at first sight; however, some of the properties of redundant codes make it worth a deeper look. There has been ongoing research on beta expansion for a while now, and I sincerely hope that it will continue in the future. Here is my list of collected links on the topic:

Papers:

Boris Murmann, On the Use of Redundancy in Successive Approximation A/D Converters

Daubechies et al., Beta Expansions: A New Approach to Digitally Corrected A/D Conversion

Suzuki et al., Robust Cyclic ADC Architecture Based on Beta-Expansion

San et al., Non-binary Pipeline Analog-to-Digital Converter Based on Beta-Expansion

Rachel Ward, On Robustness Properties of Beta Encoders and Golden Ratio Encoders

Biveroni et al., On Sequential Analog-to-Digital Conversion with Low-Precision Components

Daubechies et al., The Golden Ratio Encoder

Kohda et al., Negative Beta Encoder

Pages:

The "Fibonacci Code" viewed as a Barcode

The "Fibonacci Code" viewed as a Serial Data Code

Conversion between Fibonacci Code and Integers

Finally, something pretty neat and "chaotic":

Beta-expansion's Attractors Observed in A/D converters

Date: Sun Nov 22 17:53:23 CET 2015




A must-read book for every analog designer

for both practicing analog designers and those who are thinking of becoming one...

The Art and Science of Analog Circuit Design is a book edited by Jim Williams, an analog guru who spent a significant amount of his career teaching postgraduate students in analog electronics design at MIT; he was also a circuit designer with Linear Technology and National Semiconductor. A brilliant writer, who is unfortunately no longer with us.
      A few chapters of this book are a must-read for every analog engineer, as well as for everyone who is thinking of heading down the analog electronics design path. I would like to "advertise" some of his most brilliant works, based solely on my personal views.
      It is more than just another circuit design book; it is an adventure through the world of analog design, combining theory and real-world examples with the deep philosophy behind the design process. Apart from design aspects, this book will tell you why you should or should not attempt going over to the dark side of the moon. For those of you who have already landed there, by opening "Part 1: Learning How" you will find a mirror of yourself on every page.


This is a weird book. When I was asked to write it I refused, because I didn't believe anybody could, or should, try to explain how to do analog design.


      The weird collection of articles by Williams, however, by far does not finish here.
Analog Circuit Design: Art, Science and Personalities is the predecessor of the book mentioned first in this post, released a few years earlier. Even though both books share somewhat similar names, Art, Science and Personalities covers far deeper and more heavily philosophical analog design paradigms. Some of my personal favourite chapters include:

  Analogs, Yesterday, Today, and Tomorrow, or Metaphors of the Continuum

  Reflections of a Dinosaur

  The Zoo Circuit: History, Mistakes, and Some Monkeys Design a Circuit

  Propagation of the Race

As you might notice, all the articles included in both Williams-edited books are written by practicing analog gurus. But let's hop over to some of Williams's less philosophical works - yes, you guessed it right, application notes. Here is my selection:

  Switching Regulators for Poets - A Gentle Guide for the Trepidatious

  Bridge Circuits - Marrying Gain and Balance

  1ppm Settling Time Measurement for a Monolithic 18-Bit DAC - When Does the Last Angel Stop Dancing on a Speeding Pinhead?

  Slew Rate Verification for Wideband Amplifiers - The Taming of the Slew

One last link to share with you - check out doctoranalog's blog, entirely dedicated to Williams and the early silicon-era gurus.

So, to summarize, I am getting even more confused: does analog design then play a big part in the philosophy of science? Also, how about coining a new book genre - "philosophical analog science fiction"? :)

Date: Sun Oct 27 23:07:51 CET 2015




An IC Designer's Nightmare


Date: Sun Aug 21 22:18:11 CET 2015




Bit-Banged SPI for the MCP3008 ADC chip

I wanted to interface Microchip's 10-bit ADC chip (MCP3008) to the Olinuxino MICRO board, which uses the Allwinner A20 processor. Unfortunately, no kernel module supporting full-duplex SPI mode exists yet, or at least I was not able to find a working one. If we cannot write and read at the same time, we are limited to reading only 8 bits from the ADC, which is otherwise a 10-bit SAR converter.

Here's some code implementing a bit-banged SPI interface specifically tailored to read out the MCP3008. It is written in Python and uses the pyA20 GPIO library by Olimex.
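In outline, the readout boils down to clocking out a five-bit command (start bit, single-ended/differential flag, three channel-select bits) and clocking back one null bit followed by ten data bits, MSB first, in SPI mode 0. A minimal sketch along those lines, with placeholder pin assignments and the pyA20 gpio/port calls (gpio.init, gpio.setcfg, gpio.output, gpio.input) as I assume them, would be:

    from pyA20.gpio import gpio
    from pyA20.gpio import port

    # Placeholder pins -- pick whichever GPIOs you actually have wired up.
    CS, CLK, MOSI, MISO = port.PA10, port.PA11, port.PA12, port.PA13

    def init_spi_pins():
        gpio.init()
        for p in (CS, CLK, MOSI):
            gpio.setcfg(p, gpio.OUTPUT)
        gpio.setcfg(MISO, gpio.INPUT)
        gpio.output(CS, 1)   # idle: chip deselected, clock low (SPI mode 0)
        gpio.output(CLK, 0)

    def read_mcp3008(channel):
        """Single-ended conversion on channel 0..7, returns the 10-bit result."""
        gpio.output(CS, 0)
        cmd = 0b11000 | (channel & 0x07)      # start bit, SGL/DIFF = 1, D2..D0
        for i in range(4, -1, -1):            # MOSI is sampled on the rising edge
            gpio.output(MOSI, (cmd >> i) & 1)
            gpio.output(CLK, 1)
            gpio.output(CLK, 0)
        value = 0
        for _ in range(11):                   # one null bit, then 10 data bits
            gpio.output(CLK, 1)
            gpio.output(CLK, 0)               # data shifts out on the falling edge
            value = (value << 1) | gpio.input(MISO)
        gpio.output(CS, 1)
        return value & 0x3FF

Bit-banging from Python is of course slow, but for occasional readings it gets around the missing full-duplex kernel driver.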

Date: Sun Jul 25 22:27:15 CET 2015




People in science and engineering - Part 1

and the never ending rivalry


Lately, the topic of science versus engineering has become quite hot in our group. It is probably due to the fact that we always have lunch in the neighbouring buildings hosting the theoretical physics and mathematics departments. Being almost the only engineers having lunch in this canteen, otherwise packed with sciencey theoretical physics people, I started noticing differences between them and us. I am sharing some of my observations as an engineer. I do not aim to pour more gasoline onto the fire here, but instead to note some of the aliasing artifacts between "them" and "us" in a friendly manner.

It is a fact that people who are into pure theoretical sciences are usually very sharp and confident in their fields. This, however, lures them into thinking that they know everything. Well, they don't. It isn't possible to learn and know everything, of course, and in general, the more you know, the more you learn that you know nothing. And because theoretical/natural scientists often study and work on very niche problems, they kind of lose the whole picture. Don't get me wrong, I am not saying they are stupid, which is a typical illusion they get from speaking to "us". People within the pure natural sciences are very eager to have discussions within their comfort zone (whatever their field is); however, when we step over to other fields they often misbehave. Because it often happens that they know nothing about the other fields, their first reaction is frequently to become arrogant and try to defend themselves. My experience shows that their defense is usually expressed in insults and explanations of how easy it is to solve "this" or "that" engineering or whatever problem. Let me give you some brief examples:

Not long ago I had a discussion with a friend (a mathematician) with whom I somehow ignited a debate about the MP3 compression format. I politely asked her if she knew the basic principle behind MP3 encoding, wanting to share some thoughts on types of music and compression density. An immediate, quite confident reaction followed with the words "Yes, Fourier transform!", and with that the topic was exhausted. Well, not quite: she feels very comfortable with Fourier transforms and is probably quite good at them, however that by far does not cover the basic principle. When I tried giving a brief explanation, instead of listening carefully to my "intuitive engineering explanation" of the psychoacoustic model of the brain and frequency-domain truncation, she simply ignored me and did not listen to any of my words. Head up and pride rockets in the sky - it is the Fourier transform, too smart to listen to "intuitive stuff".

Similarly, back in the old days during my undergraduate studies, a bunch of chief science research assistants at the university were trying to build an optical pipe inspection instrument. This tool consisted of eight push-rod micrometers, arranged in a hexagonal shape, which were supposed to measure the pipe's geometrical shape by rubbing over it. In such industrial environments the pipe moves at about 1 m/s as it is formed, so it is very easy to guess what would happen if you have sensitive measurement tools sliding over the pipe at that speed. Not only did it not work, but the measurement and correction electronics were built like a circus joke. Because they were so bad at electronic systems design, all they had was an 8-bit microcontroller reading the 10-bit ADC and spitting out data over UART. I am totally not joking, they had one microcontroller per channel, so eight in total. All because they could not figure out how to read more than one ADC channel per MCU, so instead, yeah!, why not buy and put in eight. Supposedly the whole system had to control the pipe-forming rollers with feedback over UART? Yes! UART feedback with a pipe travelling at 1 m/s. Nevertheless, they were very proud of their work, which is okay to a certain extent; in the end they all had a physics / control theory background.

So why is this a never-ending rivalry? My answer here is ACADEMIA! It is in academic institutions that you mostly find the arrogant species in the theoretical sciences. In the real world, scientists do some engineering and engineers do some science; this, however, never applies in academia. The whole respect and ranking system in academia is based on the "who's smarter" model and the number of cool publications made in fancy pumped-up journals. However, the "who's smarter" debate is old and dumb and dies out the moment you leave academia and face real-world challenges. Those who claim to be true mathematicians, physicists, chemists, scientists and renounce the engineering profession are primarily pure academics; there are exceptions to this, of course. Sadly, a large portion of universities encourage separation between theoretical and applied sciences, which deforms the minds of graduates, and because of this we end up talking about rivalry and arrogance.

Enough talking about competition; there are more interesting facts to cover about people's stereotypes based on their fields of work, no matter whether those are pure or applied sciences or engineering. Stereotypes contain enough truth to be humorous but also quite objective. We all make some sort of preliminary judgements based on stereotypes, probably quite a bit more often than we are brave enough to admit. Stereotype models give us fast and efficient cognitive shortcuts and save us a lot of energy and time. I will soon try to elaborate on the stereotypes of people in science and engineering in Part 2.

Date: Sun Jul 05 15:36:14 CET 2015
