I have recently been looking at various ADC architectures employing coarse-fine interpolation methods using TDCs, so I thought I'd share a quick list with brief explanations of the most common TDC architectures.
1. Counter based TDC – Probably the simplest way to digitize a time interval is with a start-stop counter. Counter based TDCs use a reference clock oscillator and asynchronous binary counters gated by a start and a stop pulse. A simple timing diagram depicting the counter method's principle of operation is shown below:
The time resolution of counter based TDCs is determined by the oscillator frequency, and their quantization error is generally on the order of one clock period. Counter based TDCs offer a virtually unlimited dynamic range, since each additional bit doubles the counter's capacity; the price is a comparatively coarse quantization step. The start signal can be either synchronous or asynchronous with the clock. With asynchronous start and stop pulses the quantization error is essentially random, whereas if the start pulse is synchronous with the clock the quantization error is fully deterministic, excluding second-order effects from clock and start-pulse jitter. With current CMOS process nodes (45 nm or so), the highest achievable resolutions range from a few nanoseconds down to about 200 picoseconds. The counter based TDC's resolution is generally limited by the reference clock frequency stability and by the metastability and gate delay of the stop latches. To increase the counting speed, a so-called Double Data Rate (DDR) counting scheme can be employed, which uses a counter incrementing its value on both the rising and falling edges of the count clock.
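To make the quantization concrete, here is a tiny C sketch of the counter principle. The clock period, start phase and measured interval are purely illustrative assumptions: the counter simply tallies the clock edges falling between the start and stop pulses, so the result is accurate to within one clock period.

```c
#include <stdio.h>
#include <math.h>

/* Toy model of a counter-based TDC: the result is the number of clock
 * edges between start and stop, so the error is bounded by one T_clk. */
int main(void)
{
    const double t_clk = 5e-9;            /* 200 MHz reference clock (assumed) */
    const double t_interval = 123.7e-9;   /* interval to be measured (assumed) */
    const double t_start = 2.3e-9;        /* asynchronous start lands anywhere */

    /* clock edges sit at k*t_clk; count those inside (t_start, t_stop] */
    long counts = (long)(floor((t_start + t_interval) / t_clk)
                         - floor(t_start / t_clk));
    double t_meas = counts * t_clk;

    printf("counts = %ld, measured = %.1f ns, error = %.2f ns\n",
           counts, t_meas * 1e9, (t_meas - t_interval) * 1e9);
    return 0;
}
```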
2. Nutt interpolated TDC (Time Intervalometer) – initially proposed by Ronald Nutt [1], the Nutt architecture is based on measuring the quantization errors left by the counter method and using them to correct the coarse counter value. The sketch below depicts the basic principle of the Nutt method:
Typically a short-range TDC is used for fine measurement of the quantization error between the counter's clock and the stop pulse. The short-range TDC used as a fine quantizer can be of any type, as long as its input range covers the largest quantization error to be measured, which is the counter's clock period $T$. If a Double Data Rate (DDR) counter is used, the required input range is reduced to $T/2$. The overall precision of a TDC employing the counter scheme with Nutt interpolation is improved by a factor of $2^{M}$, where M is the resolution of the fine TDC in bits.
The challenges in the design of Nutt interpolator based TDCs are generally linked with the difficulty of matching the gain of the fine TDC with the clock period of the coarse counter. Both the INL and DNL of the fine interpolation TDC translate into static DNL errors at the combined output. Moreover, any noise in the fine TDC also translates into a DNL error in the final TDC value, which creates a non-deterministic DNL error that cannot be corrected for.
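A minimal sketch of how the coarse count and the fine residues are combined is shown below. The clock period, fine resolution and example codes are all assumed values, and the fine TDC is taken to measure the residue from each pulse to the following clock edge.

```c
#include <stdio.h>

/* Sketch of the Nutt combination: T = N*T_clk + t_res(start) - t_res(stop),
 * where each residue is digitized by an M-bit fine TDC spanning one T_clk. */
int main(void)
{
    const double T = 5.0;            /* coarse clock period in ns (assumed)   */
    const int    M = 5;              /* fine TDC resolution in bits (assumed) */
    const double lsb = T / (1 << M); /* fine LSB                              */

    long n_coarse   = 24;            /* clock edges counted start -> stop     */
    int  fine_start = 11;            /* residue code, start pulse -> clock    */
    int  fine_stop  = 3;             /* residue code, stop pulse  -> clock    */

    double t_meas = n_coarse * T + fine_start * lsb - fine_stop * lsb;
    printf("measured interval = %.3f ns (fine LSB = %.3f ns)\n", t_meas, lsb);
    return 0;
}
```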
3. Flash TDC – This is probably one of the simplest short-range TDC architectures. It uses a clock delay line and a set of flip flops controlled by the stop pulse for strobing the phase-delayed start pulse.
It can be employed in a standalone asynchronous scheme, where the start pulse is fed to the delay line and the stop pulse gates the flip flops. Alternatively it can be used as an interpolator for the counter scheme, in which case its start pulse is driven by the low-frequency count clock. In the latter configuration it is important that the sum of the delays in the delay line matches the clock period $T$, or $T/2$ in the case of a DDR counting scheme.
One typical way of synchronizing the delays is to deploy a phase locked loop, which keeps the last delayed clock in phase with the main count clock. The number of delays is usually chosen as a power of two, $2^{N}$. The value strobed into the flip flops is thermometer coded and is subsequently converted to binary and added to the final value. Alternatively, a scheme employing a ring oscillator that increments the binary counter can be used; in such schemes the complexity moves from the PLL design to the challenge of keeping the ring oscillator's gain constant.
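The thermometer-to-binary conversion mentioned above is simple enough to sketch in a few lines of C; the snippet below assumes a clean, bubble-free thermometer code strobed from an 8-stage line.

```c
#include <stdio.h>

/* Convert the thermometer code strobed into the flash TDC flip-flops into a
 * binary value by counting the ones. A real decoder would also need to deal
 * with bubble errors caused by metastability; this sketch ignores them. */
static unsigned therm_to_bin(const unsigned char *ff, unsigned n_stages)
{
    unsigned ones = 0;
    for (unsigned i = 0; i < n_stages; i++)
        ones += (ff[i] != 0);
    return ones;
}

int main(void)
{
    /* example snapshot of 8 stages: the start edge had passed 5 of them */
    unsigned char ff[8] = { 1, 1, 1, 1, 1, 0, 0, 0 };
    printf("flash code = %u\n", therm_to_bin(ff, 8));
    return 0;
}
```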
An important limitation of Flash TDCs is that their time resolution is still bounded by a single gate delay of the CMOS process.
4. Vernier TDC – Compared to the Flash TDC, the Vernier TDC aims to improve the converter resolution beyond the gate delay limit. A principle diagram of a classic Vernier architecture is shown below:
Two sets of delay chains with a phase difference are used, where the stop signal delays use a slightly smaller time delay. This causes the stop signal to gradually catch up with the start signal as it propagates through the delay line. The time resolution of Vernier TDCs is practically determined by the time difference between the delay lines, $t_{res}=\tau_{1}-\tau_{2}$. If no gain calibration of the digitized value is intended, the choice of time delay and number of delay stages in Vernier TDCs should again be carefully considered. The chosen delay for the start pulse divided by the delay time difference should equal the number of delay stages used, which, to retain binary coding, should be $2^{N}$:
$$\frac{t_{ds}}{\tau_{1}-\tau_{2}}=2^{N}$$
The Vernier TDC principle is inspired by the secondary scale used in calipers and micrometers for fine resolution measurements, invented by the French mathematician Pierre Vernier [2].
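Below is a small behavioural sketch of the catch-up mechanism described above; the delay values, input difference and stage count are assumptions chosen only to illustrate the $t_{res}=\tau_{1}-\tau_{2}$ resolution.

```c
#include <stdio.h>

/* Behavioural sketch of a Vernier TDC: the start edge propagates through
 * delays tau1, the stop edge through slightly faster delays tau2, and the
 * output code is the stage at which the stop edge catches the start edge. */
int main(void)
{
    const double tau1 = 100.0;         /* ps, start-line stage delay (assumed) */
    const double tau2 = 90.0;          /* ps, stop-line stage delay  (assumed) */
    const double lsb  = tau1 - tau2;   /* 10 ps resolution                     */
    const int    stages = 16;          /* 2^4 stages -> 4-bit code             */

    double lead = 37.0;                /* ps, stop initially lags start by this */
    int code = 0;

    for (int i = 0; i < stages && lead > 0.0; i++) {
        lead -= lsb;                   /* stop gains (tau1 - tau2) per stage   */
        code++;
    }
    printf("code = %d  (%.0f ps with a %.0f ps LSB)\n", code, code * lsb, lsb);
    return 0;
}
```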
5. Time-to-Voltage Converter + ADC – the TVC architecture converts a time interval into a voltage. It is difficult to achieve high time resolution with such schemes, as traditionally the only reasonable way of converting time into voltage is to use a current integrator, in which a capacitor is charged with a constant current for the duration of the measured time interval.
After the time measurement is complete, a traditional ADC is used to quantize the integrated voltage on the capacitor.
These architectures can be used in applications where mid- or long time measurement ranges are required. For practical reasons, TVC-type TDCs are not well suited as time interpolators combined with counter based TDCs. The noise and linearity difficulties of high-speed current integrators set a rather high lower bound on the time dynamic range of TVCs.
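A back-of-the-envelope sketch of the conversion chain is shown below; the current, capacitor and ADC figures are assumptions, used only to show how $V = I \cdot t / C$ maps the ADC LSB back to an effective time LSB and a full-scale time range.

```c
#include <stdio.h>
#include <math.h>

/* Time-to-voltage converter followed by an ADC: a capacitor C is charged
 * with a constant current I for the measured interval, then the voltage is
 * quantized. The ADC LSB maps back to an effective time LSB of Vlsb*C/I. */
int main(void)
{
    const double I     = 100e-6;   /* charging current, A (assumed) */
    const double C     = 10e-12;   /* integration cap, F  (assumed) */
    const double vref  = 1.0;      /* ADC full scale, V   (assumed) */
    const int    nbits = 10;

    double t_interval = 42e-9;                /* measured interval, s */
    double v_cap      = I * t_interval / C;   /* V = I*t/C            */
    int    code       = (int)floor(v_cap / vref * (1 << nbits));

    double t_lsb   = (vref / (1 << nbits)) * C / I;
    double t_range = vref * C / I;
    printf("Vcap = %.3f V, code = %d, time LSB = %.1f ps, range = %.1f ns\n",
           v_cap, code, t_lsb * 1e12, t_range * 1e9);
    return 0;
}
```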
6. Time Difference Amplifier based TDC – one concept for time difference amplification (TDA) in the digital domain was introduced by Abbas et al. [3]. An analog time stretcher allows for resolution improvement by amplifying the input time interval and subsequently converting it with a lower resolution TDC. The concept is similar to the analog voltage gain stages placed at the input of ADCs. A time amplifier based on the architecture originally reported by Abbas et al. [3] is shown below:
The circuit represents a winner-takes-all scheme, in which the gates are forced into a metastable state performing a mutual exclusion (MUTEX) operation, while the output inverters that follow apply regenerative gain to the MUTEX element. By driving two MUTEX elements in parallel with the start and stop signals, offset in time, and then edge-combining their outputs, one can amplify the time difference between the incoming edges. The difference in the output voltages of a bistable element in metastability is approximately $\Delta V = \theta \, \Delta t \, e^{t/\tau}$, where $\tau$ is the intrinsic gate time constant, $\theta$ the conversion factor from time to initial voltage change at the metastable nodes and $\Delta t$ is the incoming pulse time difference [4]. Note that the voltage at the metastable node thus diverges exponentially with time. If we intentionally offset the two MUTEX elements "forwards" and "backwards" in time and combine their edges, we obtain a linear time difference amplification from the logical edge combination. The output time difference in that case is $\Delta t_{out} = \tau \ln(T_{d} + \Delta t_{in}) - \tau \ln(T_{d} - \Delta t_{in})$, where $T_{d}$ is the intentional offset. Several variations of this circuit exist, most of which are based on the logarithmic voltage dependence of latches in the metastable state.
All time difference amplifier based TDC approaches pose challenges related to the linearity of the TDA as well as its usually limited amplification factor.
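As a quick numerical check of the transfer function above, here is a small C sketch that evaluates $\Delta t_{out}$ over a range of inputs; $\tau$ and $T_{d}$ are assumed values, and the small-signal gain near zero works out to roughly $2\tau/T_{d}$.

```c
#include <stdio.h>
#include <math.h>

/* Evaluate the TDA transfer function dt_out = tau*ln(Td+dt) - tau*ln(Td-dt)
 * for a few input differences and print the approximate small-signal gain. */
static double tda_out(double dt_in, double tau, double td)
{
    return tau * log(td + dt_in) - tau * log(td - dt_in);
}

int main(void)
{
    const double tau = 20.0;   /* ps, intrinsic latch time constant (assumed) */
    const double td  = 50.0;   /* ps, intentional MUTEX pair offset (assumed) */

    for (double dt = -40.0; dt <= 40.0; dt += 10.0)
        printf("dt_in = %6.1f ps -> dt_out = %7.2f ps\n", dt, tda_out(dt, tau, td));

    printf("small-signal gain near zero ~ 2*tau/Td = %.2f\n", 2.0 * tau / td);
    return 0;
}
```

Sweeping the input also shows how the gain compresses as $\Delta t_{in}$ approaches $T_{d}$, which is exactly the linearity limitation mentioned above.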
7. Successive approximation TDC – this architecture uses the binary search algorithm to resolve a time interval. A principle diagram of a 4-bit binary search TDC, as presented by Miyashita et al. [5], is shown below.
Imagine that the stop pulse slightly leads the start pulse, with a time difference of $\Delta = 1.5\tau$. Arbiter D3 then detects the lead and reconfigures the multiplexer delays such that they lag the start signal by $4\tau$, leading to a difference of $2.5\tau$. The resolved MSB in that case is the inverted value of D3, or '0'. Further on, arbiter D2 detects a lead of the start pulse over the stop by $2.5\tau$, in which case it reconfigures the multiplexer delays in stage two to lag the stop signal by $2\tau$. The MSB-1 value is the inverted value of the D2 arbiter, which is thus '1'. The MSB-2 value is, respectively, '0', as the stop signal now leads by $0.5\tau$. Finally, arbiter D0 deciphers a '1', as the start signal now leads by $0.5\tau$.
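To make the binary search mechanics easier to follow, here is a behavioural sketch of a generic time-domain successive approximation loop. The bit convention and delay steering are simplifying assumptions of mine, not the exact circuit of [5]: each arbiter decides which edge leads, one bit is resolved, and a binary-weighted delay is inserted into the leading path to drive the residue towards zero.

```c
#include <stdio.h>

/* Generic time-domain binary search (simplified assumption, not the exact
 * bit mapping of [5]): resolve 4 bits by repeatedly checking which edge
 * leads and steering binary-weighted delays (4t, 2t, 1t) into that path. */
int main(void)
{
    const double tau = 1.0;            /* unit delay                           */
    const double weights[] = { 4.0, 2.0, 1.0 };

    double dt = 1.5 * tau;             /* >0 means the stop pulse leads        */
    int code = 0;

    for (int b = 3; b >= 0; b--) {
        int start_leads = (dt < 0.0);             /* arbiter decision          */
        code |= start_leads << b;                 /* simplified bit convention */
        if (b > 0)                                /* steer the residue to zero */
            dt += start_leads ? weights[3 - b] : -weights[3 - b];
    }
    printf("code = %d, residual = %.2f tau\n", code, dt);
    return 0;
}
```

With the example input of $1.5\tau$, this toy loop settles with a residual of $0.5\tau$, i.e. within one LSB.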
This topology effectively utilizes a time-domain SAR scheme and has a time resolution of $\tau$. Open-loop binary search schemes such as the one proposed here [5] require good matching between the delay elements, which typically suffer from low PSRR. Nevertheless, this SAR scheme is relatively new to the TDC world and might provide promising food for future research. Compared to the Flash TDC, the SAR scheme offers a $2^{N}-N$ reduction in strobe latches.
8. Stochastic TDC – this family of TDCs uses the same core principle of operation as Flash TDCs, however, it is built with deliberate redundancy. Here is a sketch:
Because the Flash latches have an intrinsic threshold mismatch, they effectively introduce a natural Gaussian dither into the resolution of each Flash bit. If we read out all latches and average their values, we can extract digitized values with sub-quantization-step resolution. In order to take advantage of the oversampling, however, such TDCs require a large set of latches and delay elements, which comes at the cost of power. They are traditionally reserved for DLL-type applications and often require digital back-end calibration [6].
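Here is a tiny Monte Carlo sketch of the idea; the latch count, offset spread and input difference are assumed numbers. The fraction of latches that output '1' encodes the input through the Gaussian CDF of the mismatch, which a digital back-end can invert to recover the time difference with sub-gate-delay resolution.

```c
#include <stdio.h>
#include <stdlib.h>

/* Monte Carlo sketch of a stochastic TDC: many nominally identical arbiters
 * see the same input time difference, their Gaussian offsets dither the
 * decisions, and the average of the 0/1 outputs encodes the input. */

static double gauss(void)              /* ~N(0,1): sum of 12 uniform samples */
{
    double s = 0.0;
    for (int i = 0; i < 12; i++)
        s += (double)rand() / RAND_MAX;
    return s - 6.0;
}

int main(void)
{
    const int    n_latches = 1024;     /* number of redundant latches (assumed) */
    const double sigma_ps  = 30.0;     /* offset spread of one latch  (assumed) */
    const double dt_ps     = 7.0;      /* true input time difference  (assumed) */

    int ones = 0;
    for (int i = 0; i < n_latches; i++)
        if (dt_ps + sigma_ps * gauss() > 0.0)    /* latch resolves to '1' */
            ones++;

    printf("fraction of ones = %.3f (true dt = %.1f ps, sigma = %.1f ps)\n",
           (double)ones / n_latches, dt_ps, sigma_ps);
    return 0;
}
```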
References:
[1] Ronald Nutt, Digital Time Intervalometer, Review of Scientific Instruments, 39, 1342-1345 (1968)
Luminance is a measure of the light intensity per unit area of light travelling in a given direction. When light travels towards the lens of a camera, we can measure and calculate the so-called log-average (or relative) luminance by finding the geometric mean of the luminance values of all pixels. For black-and-white images the luminance is simply the pixel value. In a color image, however, the relative luminance is a weighted average of the values of the color channels.
An accurate enough empirical formula describing the relative luminance of a pixel, derived from the CIE XYZ colorimetric space for an RGB image, is: $$ Y_{RGB} = 0.2126 R + 0.7152 G + 0.0722 B $$ For color spaces such as RGB (the case of an uncompressed bitmap) that use the ITU-R BT.709 primaries, the relative luminance can be calculated with sufficient precision from these linear RGB components. For the YUV color space, we can still calculate the relative luminance from RGB as: $$ Y_{YUV} = 0.299 R + 0.587 G + 0.114 B $$
I am sharing a tiny C snippet that computes the relative luminance of a given bitmap image. It is built around three functions:
BITMAPINFOHEADER() — reads the bitmap file header, skips the meta/additional (compression) fields and uses the header to calculate the exact address at which the bitmap image data starts. Functions from the stdio library are used for bitmap file input, with fseek used to jump to the relevant header fields.
ANALYZE() — reads the bitmap image data, starting from the location determined by BITMAPINFOHEADER(), and dumps it into a one-dimensional char array.
MAIN() — uses the one-dimensional array produced by ANALYZE() to calculate the luminance of each pixel with the formula described earlier. The luminance values of all pixels are summed, the total number of pixels is counted, and the two are used for the final averaging of the pre-computed weighted values: $Y_{avg}=Y_{tot}/N_{pix}$. A standard stdio function (fprintf) generates the final pixel log, a *.csv-style file listing the color values of each pixel of the bitmap image, with the relative luminance of the whole input image written at its end.
It should compile on pretty much any vanilla gcc.
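Since the snippet itself is hosted separately, here is a minimal sketch along the same lines, assuming an uncompressed 24-bit BMP and the BT.709 weights; it collapses the header parsing, pixel extraction and averaging into a single main(), so the function split and names differ from my original code.

```c
/* Minimal sketch: average relative luminance of an uncompressed 24-bit BMP.
 * Header offsets follow the standard BITMAPFILEHEADER/BITMAPINFOHEADER
 * layout; names and structure here are illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

static uint32_t rd32(const uint8_t *p)            /* little-endian 32-bit read */
{
    return p[0] | (p[1] << 8) | (p[2] << 16) | ((uint32_t)p[3] << 24);
}

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s image.bmp\n", argv[0]); return 1; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    uint8_t hdr[54];
    if (fread(hdr, 1, 54, f) != 54 || hdr[0] != 'B' || hdr[1] != 'M') {
        fprintf(stderr, "not a BMP file\n"); return 1;
    }

    uint32_t data_offset = rd32(hdr + 10);        /* where the pixel data starts */
    int32_t  width       = (int32_t)rd32(hdr + 18);
    int32_t  height      = (int32_t)rd32(hdr + 22);
    uint16_t bpp         = (uint16_t)(hdr[28] | (hdr[29] << 8));
    if (bpp != 24) { fprintf(stderr, "only 24-bit BMPs handled here\n"); return 1; }
    if (height < 0) height = -height;             /* top-down bitmaps */

    long row_bytes = ((width * 3 + 3) / 4) * 4;   /* rows are padded to 4 bytes */
    uint8_t *row = malloc(row_bytes);
    if (!row) { fclose(f); return 1; }

    double y_tot = 0.0;
    fseek(f, (long)data_offset, SEEK_SET);
    for (int32_t r = 0; r < height; r++) {
        if (fread(row, 1, (size_t)row_bytes, f) != (size_t)row_bytes) break;
        for (int32_t c = 0; c < width; c++) {     /* BMP stores pixels as BGR */
            double b = row[c * 3 + 0], g = row[c * 3 + 1], rr = row[c * 3 + 2];
            y_tot += 0.2126 * rr + 0.7152 * g + 0.0722 * b;   /* BT.709 weights */
        }
    }

    printf("average relative luminance: %.2f\n", y_tot / ((double)width * height));
    free(row);
    fclose(f);
    return 0;
}
```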
a vision modulated by feedback from the surrounding world of research
The more you read about Bell Labs, the more you get the feeling that such places have vanished nowadays. But the world keeps rolling out new ideas, so where could the new nests of innovation be?
Between its formal foundation in 1925 and its decay in the 1990s, Bell Labs was the best-funded and most successful corporate research laboratory the world has ever seen. In its heyday, Bell Labs was a premier facility of its type which led to the discovery and development of a vast range of revolutionary technologies. Being funded by one of the strongest co-governmental subsidiaries, the American Telephone and Telegraph Company (AT&T), Bell Labs was primarily involved in research aimed at improving telephone services and communication. Some of these developments found use not only in communications, but also in other fields of science. Although it is pointless to list them all, some notable examples of technologies that found a "dual purpose" are: MASERs – amplifiers for long-distance links; transistors – a generic information technology component; LASERs – originating from work on long-distance fiber optic links and now serving many other purposes; radio telescopes (cosmic microwave background radiation) – born from studying noise in satellite communications; the foundation of information theory – all fields of science; TDMA, CDMA, quadrature AM – all basic foundations of communication theory; laser cooling; CCDs – sprang out of their attempts to create a solid-state memory; Unix, C, C++ – all used in information processing nowadays; and many, many more...
Bell Labs was a nest which developed brilliant minds, sustaining a tradition of innovation for over 60 years. Unfortunately, the state of the labs now is not what it used to be and currently, if not fully defunct, it crawls at a much slower pace. There are many theories and speculations as to why Bell Labs declined, but instead of digging into history demystifying what went wrong, let's focus our lens on the new age.
Heads up! Such nests are still around, it's just that most aren't as big and prolific as they used to be in those days. There have been a couple of shifts in the research world, the main one being that after the end of the Cold War primary research is no longer a tax write-off, which reduced the general monetary stimulus for such institutions. Many of those big conglomerates have since shut down or, at best, split up so that they work only on areas where their division makes money. It is hard for us to admit it, but a large portion of modern innovation, including primary research, happens in start-ups. In big companies you will still find intrapreneurs and individual hooligans who do what they want, but it's harder to find someone who will pay you to lurk around and do blue-sky R&D all day. Unfortunately, every now and then, closing R&D groups lets CEOs cut a large expense, raise the stock price and get the hell out before the company sinks because it has no R&D.
There are still quite a few governmentally funded labs and institutes, but because there is no strict control on spending and technology deployability, these places end up doing research which is way ahead of its time; so far ahead, in fact, that some of their claimed purposes have not yet even been written into modern science fiction books. Take the recent growth of interest in quantum computing and photon entanglement. These days it is not rare to see quantum field researchers hoping to get more taxpayer money so they could shoot a single photon into the sky and receive it with a satellite. Such examples of research may sound ridiculous with today's state of technology, but the truth is that Bell and any other successful institute have always spent budget on black hole research, and that's inevitable.
On the other hand, research in startups (and I am not talking about the numerous bonanza phone app firms) faces huge obstacles before it reaches a somewhat stable state which would allow for further growth. There are plenty of innovative ideas out there that need to be advanced in order to become a reality, but all that time spent hashing out business plans and "evidence-based elevator pitches" (yes, that's what they call them) is totally wasted, and your core idea may turn out to be infeasible because you are already late, or because you can't keep up with venture capitalists, or even worse, with startup incubator funding programmes. These obstacles make primary research in this new era somewhat difficult, or rather the word to use is "different". Researchers no longer have to deal only with technical issues; on top of that they need to cope with the dynamics of research whose destiny is highly dependent on quick, near-future results. That is to say, to be successful, modern-day research needs highly flexible and broadband personalities with a helicopter view of all aspects of their findings. To achieve this, many successful research groups aim to attract the best and brightest minds, with a diversity of perspectives, skills and personalities – including a mix of some who "think" and many who "do", but all with exceptional ability. Focusing on real-world problems, with the ideal that the bigger the mountain to climb, the better, works to some extent, but only if there is sufficient funding over a 5+ year period, as anything less than that builds up pressure. The measure of success has been the hardest thing to define, but it can primarily be based on the long-term impact of the disruptive innovations achieved.
But it is not only the dynamics of this new economic world that have led to the end of large research centres. The sciences themselves have become more diverse and share less in common. For example, although image sensors and microprocessors can still be grouped into the field of microelectronics and were invented hand-in-hand at big national laboratories, nowadays the two are dramatically different and share little in common.
Consequently, with all of that said, for better or worse, it is likely that most of us will one day end up working in a small group, isolated from our neighbours. Dreaming of huge research facilities never hurts, but researching what else is out there would not hurt you either.
Currently there are just a few chip dissection enthusiasts freely sharing their findings with others, but two or three years ago we didn't even have a single open chip database. To this day the two biggest chip die microphotograph databases are zeptobars.ru and siliconpr0n.org. I have been browsing around their chip databases these days, and I keep coming across lots of inventive design approaches. Sometimes you can clearly identify that some chips have gone through several really messy metal mask fixes, and astonishingly such designs have still been shipped to mass production the way they are.
As the guys from siliconpr0n.org provide their microphotographs under a Creative Commons license allowing commercial use (wow! even commercial! you guys rock!), I got tempted to use one of their chip photographs to create an infographic, denoting various details around the design. I chose a simple chip, a custom 65CX02 microcontroller used in a pocket game (I am not sure which one, Tetris maybe?) made in a 3-metal, 1-poly process. It is hard for me to identify the exact process node, however, judging by the bonding pad size my guess is that it is somewhere around 0.8 um CMOS, if not even larger.

My first intention was to stick with red labels only, denoting analog design related details, however, as this chip is almost "purely digital" there is not much to comment on, which led to empty gaps. Well, I filled in the gaps with blue labels denoting my assumptions about the basic sub-blocks of this microcontroller.
The detail I like most about this chip is the tapered star connection at the bottom, which ensures minimal noise on the oscillator circuitry; it is pretty much the only analog block in this microcontroller and uses the same digital supply as the rest of the core logic. Here's a zoom:

It is clear that whoever designed this knew what they were doing. Otherwise they could have easily merged the metal rails; it is easier to design it that way, so why bother splitting them? Unfortunately it is not a very common sight to see pure layout engineers implementing such tricks at every level of abstraction in a design. In the case of this chip, it is fairly clear that this was a decision taken at the top level.
On the left you can also see split power for the analog blocks. What is also interesting is that I can't identify passive ESD diodes for the power rails; what is visible, though, is a tiny active ESD protection at the top of the chip rail. It might be that parts of it are also circuits ensuring a correct power-on sequence, or it might be that there is power ESD protection but I am not seeing it under the metal padding.
I am leaving you now to explore more chips yourself in the databases.
Dear handful of science geeks! As the title suggests, I have finally decided to move to a separate domain. In fact, I should have done that six years ago when I started up this page as a "joke", which was supposed to be a "just temporary" solution to hosting "stuff" online.
I am very happy to introduce you to transistorized.net, although I am still not entirely delighted with the name I picked some time ago, again as a "joke". But let's not worry about names; as time goes by I'm sure we'll start liking it better.
And also, please, stop me if I ever decide to start fiddling with these shell scripts and bodge work again! I should've migrated to Wordpress a long time ago, but ahh, you youngsters nowadays... This actually gave me the inspiration for another purely philosophical post on the incremental improvements in EDWARDS... let's see...
P.S. EDWARD = Engineer Doing Wildly Awkward and Recurring Drudge