


Beta expansion ADCs

Recently I did a quick investigation of esoteric ADC architectures which have probably never left the test-chip arena, or if they have, we do not know much about it due to the usual corporate secrecy policies. I want to throw my collection of links at you readers and try to popularize this otherwise very niche field.

One of the esoteric, or rather unexplored, areas of ADCs is beta-expansion number representation. Beta expansion is practically a complex way of saying a non-integer radix, which in the context of ADCs usually falls between 1 and 2. Other synonymous terms can also be found, such as "golden ratio encoder", "beta encoder", "non-integer encoder", "flaky quantizer", etc.

The beta encoding technique is mainly applicable to residue-based ADCs such as pipelined, cyclic and SAR ADCs. It may also be applicable to some of the other existing ADC architectures; however, I am not aware how it could be used in an integrating or flash ADC, for example. To make this post more informative, let me give you a brief example of beta quantizers with a pipelined ADC. Here is the transfer function of a 1.5 bit/stage MDAC:

You can see that there is already an added redundancy of 0.5 bits, which together with the classic back-end digital error correction algorithm works very efficiently at getting rid of the offset errors in the comparators and the MDAC OTA, up to levels of 1/4 of the reference voltage.
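To make the redundancy mechanism concrete, here is a minimal Python sketch of a 1.5 bit/stage pipeline with the standard digit-overlap correction; the reference voltage, the 12-stage depth and the offset values are my own assumptions for illustration, not taken from any particular design.

```python
VREF = 1.0          # assumed reference voltage
N_STAGES = 12       # assumed pipeline depth

def stage_1p5(vin, vos=0.0):
    """One 1.5 bit/stage MDAC: thresholds at +/-Vref/4, signed digit
    d in {-1, 0, +1}, ideal interstage gain of 2. vos models a
    comparator offset that the redundancy has to absorb."""
    if vin < -VREF / 4 + vos:
        d = -1
    elif vin > VREF / 4 + vos:
        d = +1
    else:
        d = 0
    return d, 2.0 * vin - d * VREF

def convert(vin, vos=0.0):
    """Collect the signed digits and apply the back-end correction:
    weighting every digit by 2^-n and summing is exactly the
    overlap-and-add (RSD) digital error correction."""
    digits, res = [], vin
    for _ in range(N_STAGES):
        d, res = stage_1p5(res, vos)
        digits.append(d)
    return VREF * sum(d * 2.0 ** -(n + 1) for n, d in enumerate(digits))

# A comparator offset as large as Vref/10 barely moves the result,
# because the +/-Vref/4 decision margin keeps every residue in range.
for vin in (-0.7, -0.1, 0.3, 0.8):
    print(vin, convert(vin), convert(vin, vos=0.1))
```

The MDAC transfer function above isolates only the offset errors; now if we look at the gain errors produced by the OTA and the capacitors in the MDAC, we get a different form of transfer function distortion: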

It is quite clear that the classic stage-redundancy-based digital error correction would not work in the case of residue gain errors, so we would get some rather nasty DNL code gaps. It is very important to mention here that gain errors in the MDAC stage result in radix deformations of the converted values. These deformations could tilt the digital number towards a radix either higher or lower than 2, usually depending on capacitor matching and OTA gain.

So if we know that the radix is linearly shifted, we could technically try to correct it with a single multiplication. Here comes the beta-encoding part, which aims to modify the MDAC in advance so that it intentionally produces redundant code, i.e. has a radix lower than 2. Radixes higher than 2 result in binary code gaps which are non-correctable and have no redundancy. Designing for a lower-radix code means that we could use a constant-low-gain OTA and intentionally mismatched sampling capacitors in the MDAC. The uncorrected output of such a converter would then look more like in the figure below:
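In code, one can mimic this by generating digits with a stage whose effective radix is below 2 and then naively weighting them as if they were radix-2; the stage model, the radix value of 1.8 and the 14-stage depth below are my own assumptions for illustration.

```python
VREF = 1.0
N_STAGES = 14
BETA = 1.8   # assumed effective radix from a low-gain OTA / mismatched caps

def beta_stage(vin):
    """1.5 bit-style decision, but the interstage gain is beta < 2,
    so the produced digit stream is a (redundant) beta expansion."""
    if vin < -VREF / 4:
        d = -1
    elif vin > VREF / 4:
        d = +1
    else:
        d = 0
    return d, BETA * vin - d * (BETA / 2) * VREF

def beta_convert(vin):
    digits, res = [], vin
    for _ in range(N_STAGES):
        d, res = beta_stage(res)
        digits.append(d)
    return digits

def weigh(digits, radix):
    """Reconstruct assuming a given radix:
    vin ~ (Vref/2) * sum_n d_n * radix^-(n-1)."""
    return (VREF / 2) * sum(d * radix ** -n for n, d in enumerate(digits))

# Weighting the raw digits as if they were radix-2 gives the distorted,
# uncorrected transfer curve; weighting with the true radix does not.
for vin in (-0.8, -0.3, 0.2, 0.6):
    digits = beta_convert(vin)
    print(vin, weigh(digits, 2.0), weigh(digits, BETA))
```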

Thus, if we measure two points from the ADC's transfer function (A and B), we can determine the produced radix by using an iterative method. The radix error, based on our swept radix search parameter $\beta_{sw}$ and the two digit sequences (A and B) from the transfer curve of the ADC, would be: $$\sigma(\beta_{sw}) = \sum_{n=1}^{12} \left(D_{n_{A}} - D_{n_{B}}\right)\beta_{sw}^{-n}$$

That fancy equation tells us that, because the radix shift is linear, it can be determined from only two points. The non-linearity shown on the graph below comes from sweeping our estimate/guess $\beta_{sw}$; beta codes show redundant patterns, which is why we see that non-linearity in the radix estimation error.
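Below is a minimal sketch of such a radix search. The way I obtain the two measurements A and B here, by converting the same input twice with the first digit forced to two different values (which the redundancy allows), as well as the sweep range around the nominal design radix, are my own assumptions; the digit weights follow the stage model from the previous sketch.

```python
import numpy as np

VREF = 1.0
N_STAGES = 14
BETA_TRUE = 1.82   # the actual radix of the hardware, unknown to the algorithm

def beta_stage(vin, d_forced=None):
    """Beta-radix 1.5 bit-style stage; d_forced overrides the comparator
    so the same input can be converted along two redundant digit paths."""
    if d_forced is not None:
        d = d_forced
    elif vin < -VREF / 4:
        d = -1
    elif vin > VREF / 4:
        d = +1
    else:
        d = 0
    return d, BETA_TRUE * vin - d * (BETA_TRUE / 2) * VREF

def convert(vin, first_digit=None):
    digits, res = [], vin
    for n in range(N_STAGES):
        d, res = beta_stage(res, d_forced=first_digit if n == 0 else None)
        digits.append(d)
    return np.array(digits)

# Two redundant conversions (A and B) of the same input voltage.
d_A = convert(0.1, first_digit=0)
d_B = convert(0.1, first_digit=1)

# Sweep the radix guess around the nominal design value and keep the
# one that makes the two reconstructions agree (minimum |sigma|).
sweep = np.linspace(1.70, 1.95, 1001)
weights = lambda b: b ** -np.arange(N_STAGES)        # b^-(n-1), n = 1..N
sigma = [abs(np.dot(d_A - d_B, weights(b))) for b in sweep]
beta_est = sweep[int(np.argmin(sigma))]
print("estimated radix:", beta_est)                  # close to BETA_TRUE
```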

Once we have estimated the radix, we can apply conventional floating-point multiplication to bring the A/D converted code back to radix-2.
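The per-sample correction then boils down to a few multiply-accumulates, as in this small sketch which reuses the digit weighting and the estimated radix from the sketches above (the 12-bit offset-binary output format is my own assumption):

```python
VREF = 1.0

def to_radix2_code(digits, beta_est, nbits=12):
    """Weigh the raw beta digits with the estimated radix and round the
    result to an ordinary nbits-wide (offset-binary) radix-2 output code."""
    value = (VREF / 2) * sum(d * beta_est ** -n for n, d in enumerate(digits))
    lsb = 2 * VREF / 2 ** nbits                  # full scale spans -Vref..+Vref
    code = int(round((value + VREF) / lsb))
    return max(0, min(2 ** nbits - 1, code))

# e.g. with the digits and beta_est from the previous sketch:
# print(to_radix2_code(d_A, beta_est))
```

All of the above sounds like a great idea, but hmmm...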

1) We got rid of the high-gain requirement on the OTA, but pushed the problem into designing a constant-low-gain OTA, which is nearly as non-trivial as designing a very high-gain OTA.

2) In the case of a pipelined ADC, capacitor mismatch between the stages is still an issue.

3) In the case of a cyclic ADC the beta-encoding technique might actually work well, as we have a single capacitor pair for all conversions.

4) It requires a costly floating-point multiplication, as well as an initial radix estimation and coefficient storage.

5) Point 4) makes the implementation in an ADC array nearly impossible, or not worthwhile, due to the huge number of coefficients and multiplications required.

All of the above facts make the classic beta expansion technique not extremely attractive at first sight; however, some of the properties of redundant codes make them worth a deeper look. There has been ongoing research on beta expansion for a while now and I sincerely hope that it will continue in the future. Here is my list of collected links on the topic:

Papers:

Boris Murmann, On the use of redundancy in Successive Approximation A/D Converters

Daubechies et al., Beta Expansions: A new approach to digitally corrected A/D conversion

Suzuki et al., Robust Cyclic ADC Architecture Based on Beta-Expansion

San et al., Non-binary Pipeline Analog-to-Digital Converter Based on Beta-Expansion

Rachel Ward, On Robustness Properties of Beta Encoders and Golden Ratio Encoders

Biveroni et al., On Sequential Analog-To-Digital Conversion with Low-Precision Components

Daubechies et al., The Golden Ratio Encoder

Kohda et al., Negative Beta Encoder

Pages:

The "Fibonacci Code" viewed as a Barcode

The "Fibonacci Code" viewed as a Serial Data Code

Conversion between Fibonacci Code and Integers

Finally, something pretty neat and "chaotic":

Beta-expansion's Attractors Observed in A/D converters

Date: Sun Nov 22 17:53:23 CET 2015
