Luminance is a measure of the luminous intensity per unit area of light travelling in a given direction. When light travels towards the lens of a camera, we can measure and calculate the so-called log-average luminance by finding the geometric mean of the relative luminance values of all pixels. In a black-and-white image the luminance is the pixel value itself. In a color image, however, the relative luminance of a pixel is a weighted average of the values behind its color filters.
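For reference, the log-average can be written out explicitly as a geometric mean evaluated through logarithms (the small offset $\delta$ below is the usual guard against taking $\ln 0$ on pure-black pixels, my addition rather than something from the snippet): $$ \bar{Y} = \exp\left(\frac{1}{N}\sum_{i=1}^{N}\ln\left(\delta + Y_i\right)\right) $$ where $Y_i$ is the relative luminance of pixel $i$ and $N$ is the total number of pixels.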
An accurate enough empirical formula for the relative luminance of a pixel in colorimetric (XYZ-derived) color spaces such as standard RGB is: $$ Y_{RGB} = 0.2126 R + 0.7152 G + 0.0722 B $$ For color spaces that use the ITU-R BT.709 primaries, such as the RGB of an uncompressed bitmap, the relative luminance can be calculated with enough precision directly from these linear RGB components. For the YUV color space, we can still calculate the relative luminance from RGB, this time with the ITU-R BT.601 weights: $$ Y_{YUV} = 0.299 R + 0.587 G + 0.114 B $$
I am sharing a tiny C snippet that computes the relative luminance of a given bitmap image. It is built around three functions:
BITMAPINFOHEADER() — reads the bitmap file header, skips the meta/compression field and uses the header information to calculate the exact file offset at which the bitmap pixel data starts. Functions from the stdio library handle the bitmap file input, with fseek used to jump to the relevant header fields (see the condensed sketch after this list).
ANALYZE() — reads the bitmap image data, starting from the offset located by BITMAPINFOHEADER(), and dumps it into a one-dimensional char array.
MAIN() — uses the one-dimensional array created by the analyze function to calculate the luminance of each pixel with the formula described earlier. It sums the luminance over all pixels, counts the total number of pixels, and averages the pre-computed weighted values: $Y_{avg}=Y_{tot}/N_{pix}$. A standard function from stdio (fprintf) generates the final pixel log. The log file is in a *.csv style format and holds the color values of each pixel of the bitmap image; the end of the *.csv log contains the relative luminance value of the input image.
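Putting the three steps together, here is a minimal self-contained sketch of the flow described above. It is my own condensed illustration, not the original snippet: the helper name, the assumption of an uncompressed 24-bit bottom-up BMP with 4-byte row padding, and writing the CSV to stdout for brevity are all mine. The header offsets (pixel-data start at byte 10, width at 18, height at 22) follow the standard BMP layout.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Read a little-endian 32-bit value from the current file position. */
static uint32_t read_u32le(FILE *f)
{
    unsigned char b[4] = {0, 0, 0, 0};
    fread(b, 1, 4, f);
    return (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
           ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}

int main(int argc, char *argv[])
{
    if (argc < 2) { fprintf(stderr, "usage: %s image.bmp\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    /* Step 1: header parsing. bfOffBits (pixel data start) lives at byte 10,
     * biWidth at byte 18, biHeight at byte 22; the compression field at
     * byte 30 is simply skipped. */
    fseek(f, 10, SEEK_SET);
    uint32_t data_offset = read_u32le(f);
    fseek(f, 18, SEEK_SET);
    int32_t width  = (int32_t)read_u32le(f);
    int32_t height = (int32_t)read_u32le(f);

    /* Step 2: dump the pixel data into a one-dimensional array.
     * BMP rows are padded to 4-byte boundaries. */
    size_t row_bytes = ((size_t)width * 3 + 3) & ~(size_t)3;
    size_t data_size = row_bytes * (size_t)height;
    unsigned char *pixels = malloc(data_size);
    if (!pixels) { fclose(f); return 1; }
    fseek(f, (long)data_offset, SEEK_SET);
    fread(pixels, 1, data_size, f);
    fclose(f);

    /* Step 3: per-pixel BT.709 luminance, CSV log, final average. */
    double y_tot = 0.0;
    printf("x,y,R,G,B,Y\n");
    for (int32_t row = 0; row < height; row++) {
        const unsigned char *p = pixels + (size_t)row * row_bytes;
        for (int32_t col = 0; col < width; col++, p += 3) {
            unsigned char b = p[0], g = p[1], r = p[2]; /* BMP stores BGR */
            double y = 0.2126 * r + 0.7152 * g + 0.0722 * b;
            y_tot += y;
            printf("%ld,%ld,%d,%d,%d,%.4f\n", (long)col, (long)row, r, g, b, y);
        }
    }
    /* Last line of the log: Y_avg = Y_tot / N_pix. */
    printf("Y_avg,%.4f\n", y_tot / ((double)width * (double)height));
    free(pixels);
    return 0;
}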
It should compile on pretty much any vanilla gcc.
a vision modulated by feedback from the surrounding world of research
The more you read about Bell Labs, the more you get the feeling that such places have vanished nowadays. But the world keeps rolling out new ideas, so where could the new nests of innovation be?
Between its formal foundation in 1925 and its decay in the 1990s, Bell Labs was the best-funded and most successful corporate research laboratory the world has ever seen. In its heyday, Bell Labs was the premier facility of its type and led to the discovery and development of a vast range of revolutionary technologies. Funded by one of the strongest government-sanctioned monopolies — the American Telephone and Telegraph Company (AT&T) — Bell Labs was primarily involved in research towards the improvement of telephone services and communication. Some of these developments found use not only in communications, but also in other fields of science. Although it is impossible to list them all, some notable examples of technologies which found a "dual purpose" are: masers – amplifiers for long-distance links; transistors – the generic information technology component; lasers – originating from long-distance fiber optic links and now serving many other purposes; radio telescopes (and the cosmic microwave background radiation) – born from the study of noise in satellite communications; the foundations of information theory – used in all fields of science; TDMA, CDMA and quadrature AM – the basic foundations of communication theory; laser cooling; CCDs – sprung out of attempts to create a solid-state memory; Unix, C and C++ – all used in information processing nowadays; and many, many more...
Bell Labs was a nest which developed brilliant minds, sustaining a tradition of innovation for over 60 years. Unfortunately, the state of the labs now is not what it used to be and currently, if not fully defunct, it crawls at a much slower pace. There are many theories and speculations as to why Bell Labs declined, but instead of digging into history and demystifying what went wrong, let's focus our lens on the new age.
Heads up! Such nests are still around; it's just that most aren't as big and prolific as they used to be in those days. There have been a couple of shifts in the research world, the main one being that after the end of the Cold War primary research was no longer a tax write-off, which reduced the general monetary stimulus for such institutions. Many of those big conglomerates have since shut down or, at best, split up so that each division works only in areas where it makes money. It is hard for us to admit it, but a large portion of modern innovation, including primary research, happens in start-ups. In big companies you will still find intrapreneurs and individual hooligans who do what they want, but it's harder to find someone who will pay you to lurk around and do blue-sky R&D all day. Unfortunately, every now and then, closing R&D groups lets CEOs cut a large expense, raise the stock price and get the hell out before the company sinks because it has no R&D.
There are still quite a few governmentally funded labs and institutes, but because there is no strict control on spending and on the deployability of the technology, these places end up doing research that is way ahead of its time. So far ahead, in fact, that some of their claimed purposes have not even been written into modern science fiction books yet. Take the recent growth of interest in quantum computing and photon entanglement: these days it is not rare to see quantum field researchers hoping to get more taxpayer money so they can shoot a single photon into the sky and receive it with a satellite. Such examples of research may sound ridiculous given today's state of technology, but the truth is that Bell Labs and every other successful institute have always spent part of their budget on "black hole" research, and that's inevitable.
On the other hand, research in startups (and I am not talking about the numerous bonanza phone app firms) faces huge obstacles before it reaches a somewhat stable state which would allow for further growth. There are plenty of innovative ideas out there that need to be advanced in order to become a reality, but all that time spent hashing out business plans and "evidence-based elevator pitches" (yes, that's what they call them) is totally wasted, and your core idea may turn out to be infeasible because you are already late, or because you can't keep up with venture capitalists or, even worse, with startup incubator funding programmes. These obstacles make primary research in this new era somewhat difficult, or rather, the word to use is "different". Researchers no longer have to deal only with technical issues; on top of that they need to cope with the dynamics of research whose destiny is highly dependent on quick, near-future results. That is to say, to be successful, modern-day research needs highly flexible, broadband personalities with that helicopter view over all aspects of their findings. To achieve this, many successful research groups aim to attract the best and brightest minds, with a diversity of perspectives, skills and personalities – including a mix of some who "think" and many who "do", but all with exceptional ability. Focusing on real-world problems, with the ideal that the bigger the mountain to climb, the better, works to some extent, but only if there is sufficient funding for a 5+ year period; anything less than that builds up pressure. The measure of success has been the hardest to define, but it can primarily be based on the long-term impact of the disruptive innovations achieved.
But it is not only the dynamics of this new economic world that have led to the end of large research centres. The sciences themselves have become more diverse and now have less in common. For example, although image sensors and microprocessors could still both be grouped under the field of microelectronics, and were once invented hand-in-hand at big national laboratories, nowadays the two are dramatically different and share little in common.
Consequently — with all of that said — for better or worse, it is likely that most of us will one day end up working in a small group, isolated from our neighbours. Dreaming of huge research facilities never hurts, but researching what else is out there would not hurt you either.
Currently there are just a few chip dissection enthusiasts freely sharing their findings with others, but two or three years ago we didn't even have a single open chip database. To this day the two biggest chip die microphotograph databases are zeptobars.ru and siliconpr0n.org. I was browsing their chip databases the other day, and I kept seeing lots of inventive design approaches. Sometimes you can clearly identify that a chip has gone through several really messy metal mask fixes, and, astonishingly, such designs have still been shipped to mass production the way they are.
As the guys from siliconpr0n.org provide their microphotographs under a Creative Commons license allowing commercial use (wow! even commercial! you guys rock!), I got tempted to use one of their chip photographs to create an infographic denoting various details of the design. I chose a simple chip: a custom 65CX02 microcontroller used in a pocket game (it is not clear which one, Tetris maybe?), built in a 3-metal, 1-poly process. It is hard for me to identify the exact process node; however, judging by the bonding pad size, my guess is that it is somewhere around 0.8 µm CMOS, if not even larger.

My first intention was to stick with red labels only, denoting analog design related details; however, as this chip is almost "purely digital" there is not much to comment on, which led to empty gaps. Well, I filled in the gaps with blue labelling denoting my assumptions about the basic sub-blocks of this microcontroller.
The detail I like most about this chip is the tapered star connection at the bottom, which ensures minimal noise on the oscillator circuitry, pretty much the only analog block in this microcontroller and one that uses the same digital supply as the rest of the core logic. Here's a zoom:

Some thoughts arise here: it is clear that whoever designed this had an idea of what they were doing. Otherwise they could have easily merged the metal rails; it is easier to design it that way, so why bother splitting them? Unfortunately, it is not a very common sight to see pure layout engineers implement such tricks at every level of abstraction in a design. In the case of this chip, it is fairly clear that this was a decision taken at the top level.
On the left you can see that the power to the analog blocks is split as well. What is also interesting is that I can't identify passive ESD diodes for the power rails; what is visible, though, is a tiny active ESD protection at the top of the chip rail. Parts of it may also be circuits assuring a correct power-on sequence, or there may be power-rail ESD that I am simply not seeing under the metal padding.
I'll leave you now to explore more chips in the databases yourselves.
Dear handful of science geeks! As the title suggests, I have finally decided to move to a separate domain. In fact, I should have done that six years ago when I started this page as a "joke", which was supposed to be a "just temporary" solution for hosting "stuff" online.
I am very happy to introduce you to transistorized.net, although I am still not extremely delighted with the name I picked some time ago, again as a "joke". But let's not care about names; as time goes by, I'm sure we'll start liking it better.
And also, please, stop me if I ever decide to start fiddling with these shell scripts and bodge work again! I should've migrated to Wordpress a long time ago, but ahh, you youngsters nowadays... This actually gave me the inspiration for another purely philosophical post on the incremental improvements of EDWARDs... let's see...
P.S. EDWARD = Engineer Doing Wildly Awkward and Recurring Drudge
I have been honoured to be the thesis supervisor of one of our brightest fourth-year students, who did an excellent job helping me out with some pad and ESD protection designs last summer. He will be working on column-parallel ADCs, thus, I've decided to put up a quick introductory summer reading list on SAR ADCs.
One might say that we are flooded with information nowadays, and one may be totally right; however, the field of VLSI design is still kept in secrecy and large data converter systems are often considered a mystery by newcomers. What's most important with introductory books in niche fields is that they keep the details out of band, yet try to maintain a colorful backbone such that the reader doesn't get bored. Because niche field literature is usually developed by narrow-field specialists, it isn't rare that we see papers unsuitable for undergraduate education. In fact, the greatest knowledge gap in university education is between undergraduate and postgraduate studies, hence, to make the climbing slope milder, here are my suggestions for the area of image sensor converters.
Perhaps one should start by having a look at Walt Kester's introductory notes on successive approximation ADCs: "ADC Architectures II: Successive Approximation ADCs".
After examining the fundamentals, I would head the list with a 200+ page PhD thesis by Albert Chang from MIT: "Low-Power High-Performance SAR ADC with Redundancy and Digital Background Calibration". Albert offers excellent introductory chapters (A! LOT! of them) on the successive approximation algorithm and the digital arithmetic, all presented in the light of actual transistor-level schematics.
One of the earliest reported column-parallel SAR ADCs used in an image sensor can be found in the paper from Eric Fossum's group entitled "CMOS Active Pixel Sensor with On-Chip Successive Approximation Analog-To-Digital Converter", published in the IEEE Transactions on Electron Devices, Oct 1997. This paper will provide you with insight into actual implementation details and the specifics of the column-parallel capacitor layout of the bridge capacitor DACs.
A more modern and representative paper is "Low-Power CMOS Image Sensor Based on Column-Parallel Single-Slope/SAR Quantization Scheme" by Tang et al., in which they offer a classic two-step data conversion using a single-slope scheme for the first 3 MSBs and an 8-bit SAR scheme for the LSBs.
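To give a rough feel for what such a two-step scheme does, here is a toy behavioural model in C. This is purely my own illustration of the idea, not code from the paper: a ramp (the single-slope part) resolves the coarse segment, then a binary search (the SAR part) resolves the residue within it.

#include <stdio.h>

/* Quantize 'vin' (in [0,1)) to 'coarse_bits' MSBs via a single-slope
 * ramp, then resolve 'fine_bits' LSBs inside the found segment by
 * successive approximation. */
static unsigned two_step_quantize(double vin, int coarse_bits, int fine_bits)
{
    int coarse_levels = 1 << coarse_bits;
    double seg = 1.0 / coarse_levels;

    /* Phase 1: single-slope - step the ramp until it passes vin. */
    int msb = 0;
    while (msb < coarse_levels - 1 && (msb + 1) * seg <= vin)
        msb++;

    /* Phase 2: SAR - binary-search the residue inside the segment. */
    double lo = msb * seg;
    double step = seg / 2.0;
    unsigned lsb = 0;
    for (int b = fine_bits - 1; b >= 0; b--, step /= 2.0) {
        if (lo + step <= vin) {        /* comparator decision */
            lo += step;
            lsb |= 1u << b;
        }
    }
    return ((unsigned)msb << fine_bits) | lsb;
}

int main(void)
{
    /* Example: 3 coarse MSBs + 8 fine LSBs, as in the scheme above. */
    printf("code = %u\n", two_step_quantize(0.37, 3, 8));
    return 0;
}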
The reading material on this line may be too advanced for a thesis reading list, but I am listing it here as it offers an elegant scheme for coping with physical process defects and capacitor mismatch: "A Low-Power Pilot-DAC Based Column Parallel 8b SAR ADC With Forward Error Correction for CMOS Image Sensors" by Denis Chen from SSIS.
Some extra reading (which you may want to actually start with first) is an application note by Texas Instruments on Understanding Data Converters. There's tons of information on data converter fundamentals online, but I also find these notes from Boris Murmann of Stanford to be very clean: VLSI Data Conversion Circuits. And finally, a nice book written by some of my friends from Linköping: CMOS Data Converters for Communications by Mikael Gustavsson, J. Jacob Wikner and Nianxiong Tan.
There might be more and even better introductory reading material out there; however, I am sure the list above will source you with plenty of references. If you have any suggestions for more interactive literature, don't be shy to use the comments so I can add them.