A funny story. Today I happened to be around the lab I used to inhabit back in 2014-2015 with Oscar and the MCAD lab ratz. A smudgy pic, here:
Back in the day Oscar was actively involved in a low-cost X-ray camera project that was supposedly going to make the world a better place. One of the challenges was to find vendors of some alien-technology crystals willing to collaborate with us on the project. Being a very open and communicative guy, it wasn't long before he found a few companies ready to send their quotes. Here is the place to mention that even a tiny chipped piece of these scintillating materials costs a fortune, so no sober company would want to send out a free sample to a random guy at some university. Oscar kept bumping into a dead end of free-sample denials, which at some point led to frustration. It was time to change strategy.
And so Mr. Sievert was born. A professional in his 40s, Mr. Sievert was a successful manager at a prosperous company making cutting-edge miracle devices for the medical industry. Coincidentally, he happened to be doing this together with some folks from the local university. He was a confident man who knew what he wanted, and if his project was successful he was ready to order a stackful of crystals. Mr. Sievert shared an office with the MCAD lab ratz, and also happened to have the same telephone number as them. See where I'm going? Oscar forged some clever fiction, just as we've been taught in academia.
It wasn't long before our phone transformed into some kind of hotline receiving a few calls daily. Oscar now got showered with sample offers, most of them paid, but still, there was a free lunch from one company too, which eventually led to the termination of his quest for scintillators. Unfortunately, the project was doomed from the very beginning, as the core idea wasn't really feasible with the technology we wanted to use, which in itself is another comical story to cover some other time.
And so, the project was over. But hey, Mr. Sievert was not forgotten - at all. The phone kept ringing daily with people looking for an imaginary character who happened to share a name with the great Rolf Maximilian Sievert. It was funny at the beginning, as nobody in the lab except us knew what the Mr. Sievert story was about. In the summer of 2015 Oscar completely abandoned the X-ray project and moved over to an AI research lab, while I went to try my luck working in isolation on a tropical island. Mr. Sievert was long forgotten, or so we thought!
Two years later I came back to the same old lab. In the meantime the place got refurbished and people had changed, but the landline phone remained. And so, during a hot summer day of 2017, the phone rang. As usual I waited a bit and picked up the phone. Guess what?!
— Hello! This is Lisa calling from Ducky Duck Crystals Ltd. Can I speak to Mr. Sievert?
Attention! Strong cynicism ahead!
I'm at the department, drinking coffee in the common room while browsing the web.
All of a sudden my auditory sensors detect the phrase: "this could be a very strong paper"... The word combination grabs my attention and curiosity, so I continue to listen:
— we should stack those thin-film elements, then complicate the study a bit more by measuring the temperature
— indeed, this could be a very strong paper
— we'll easily have it accepted at the international banana conference this fall
— let's enhance it a bit by adding a processor to measure and compute the data on-chip, real time, that'd be just cool to have
— yes yes, this could be a very strong paper
— finally we'll finish writing that other proposal, I've been so frustrated with this recently
That was just a regular, ordinary postdoctoral researcher conversation on a Tuesday afternoon. Now my question to you, reader, is: what's wrong with this dialogue? I think the word stream in the yellow box above is wrong at so many different levels that I just don't know where to begin.
I want to remain positive and don't really want to become one of those regular angry nerds bashing everything and blaming everyone. But I just can't resist putting down a few thoughts about that crazy academic world.
Academics nowadays have become paper monsters: all they see is papers, they daydream of papers, their final work goals are solely focused on papers. Hence we hear thought pathways such as: "this could be a very strong paper". Just imagine if the phrase were formulated as "this could be very useful", or "if we're successful, this could make the world a better place". That'd be so cool to hear!
With all of that said, it's evident that a large chunk of academic research nowadays is corrupted. But why? I constantly seek meaningful explanations, but somehow I always get stuck. To be fair, looking at the past years and extrapolating back in time, it seems that, despite all of the junk work modern public schools and universities require just to stay afloat, universities have brought about some genuinely remarkable discoveries. Still, I think that these days research is better conducted in institutions other than universities, about which I scribbled some thoughts a while ago, but for now let's get back to the topic and our yellow-boxed dialogue.
I'm looking at some data presented by Kendall Powell in his Nature article "The future of the postdoc". Ah well, Kendall's article does not point to a single reference supporting his plots, but let's just assume the trend he states is about right. Have a look at the plot below. It shows some crude statistics on the number of postdocs as a function of time.
So what do we have here? A generous fourfold growth in active postdoctoral researcher positions in just 30 years. Now take into account that there are usually a few, say on average 5, PhD students per postdoc, and you'll quickly see that the number of PhDs has grown along something like a geometric progression over the years. I think it's no surprise we see such a high volume of poorly executed work. And I think that's not because the number of doctoral students has increased per se, but likely due to inefficient spending. The whole system of grant application and distribution has flaws which nowadays drive scholars to write science fiction in their applications to unprecedented levels.
I think here lies the answer to why that casual cafeteria dialogue sounds the way it does. There is simply too much funding and too many researchers trying to get hold of it. Think of it as having a full pack of candy. The more you have, the more everybody wants a piece, and the more you're willing to give away. The less candy you have, the more cautious you are with giving it away.
But yeah, who am I to throw such wisdom around, as I'm part of the same crowd, yet another guy from the pile of to-be doctors.
Last year on July 18th I successfully passed my confirmation of status and was officially entitled a DPhil candidate. During that hot summer day I had to set my own final submission date for the thesis. I set March 9th, thinking that there was plenty of time to write up and finalize the thesis. Having to set such strict goals is kinda odd, I know, but the Oxford system is a bit unusual in many ways.
Anyway, yesterday, March 7th (just two days before the deadline), I submitted a copy of my thesis to the Examination Schools. Now that the examiner arrangements and hassles with timings, schedules, etc. are over, all I have to do is wait until they read my work and agree on a date for the official viva.
It's a bit early to be celebrating anything, as it is not yet known whether I will actually be given a degree, but I thought I'd put this here to mark the milestone. Here are some selected pictures from yesterday. Before going for a pint with colleagues at the Turf Tavern, I had a decently long walk around University Parks and Worcester College, which is a very peaceful place in the heart of the city. Looking at the ducks and the calm wildlife I realized that, actually, all of that PhD stuff is irrelevant. Life goes on, and the squirrels are happy with what they have: trees, acorns and sun... we should be too.
A while ago I had a quick look at ionizing radiation detector types as I was exploring some opportunities for creating a miniature solid-state gamma detector using a standard CMOS process. Well, it turned out that achieving a wide energy detection range, as well as a reasonable energy resolution (in the case of an integrated spectrometer), with silicon as the primary detection material is not that straightforward. I somehow got dragged into other projects, and since I had some accumulated material I thought I'd export it here. I have added some extra comments within my slides to form a quick overview of ionizing radiation detection with solid-state instruments.
I highly recommend reading Radiation Detection and Measurement by Glenn Knoll, which is a fantastic book if you would like to dig deeper into the subject. I must state that some figures presented here are a verbatim copy from his book. Oops, though I don't think it's that big of a crime to paste a few figs here.
BASIC PARTICLES AND INTERACTIONS
Let's begin with a brief intro to the types of ionizing radiation. Typically one could categorize high-energy ionizing radiation into charged and uncharged.
Charged particles comprise electrons, positrons or nuclei. Electrons or positrons emitted during nuclear decay are known as beta radiation. Heavier charged particles, e.g. a helium nucleus of 2 protons and 2 neutrons, are known as alpha particles. Ionizing radiation could also be emitted in the form of electromagnetic radiation with very short wavelengths, known as gamma rays or photons. Yes, the double-slit experiment and the principles of quantum mechanics hold true even for very high-energy gamma rays, no surprise! Ionizing radiation could also be emitted as neutrons resulting from some form of nuclear fission or fusion.
All elements heavier than lead (element 82) have no stable isotopes and naturally decay, emitting ionizing radiation, although some elements and their isotopes are pronounced emitters of a specific radiation type (alpha, beta, gamma). Typically, when speaking of low-energy X-ray photons, one refers to photons with an energy range of 100 eV to 100 keV. Anything beyond 100 keV is usually considered to fall in the high-energy range and is more often referred to as gamma rather than X-rays. The nuclear science field usually uses eV as a unit to express energy instead of the photon wavelength, although both are fine if referring to electromagnetic rays only.
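As a quick back-of-the-envelope illustration of the eV/wavelength correspondence, here is a tiny Python snippet (the constants are standard CODATA values):

```python
# Relate photon energy to wavelength: E = h*c / lambda.
H = 6.62607015e-34    # Planck constant, J*s
C = 299792458.0       # speed of light, m/s
EV = 1.602176634e-19  # one electron-volt in joules

def photon_wavelength_nm(energy_ev):
    """Wavelength (nm) of a photon with the given energy (eV)."""
    return H * C / (energy_ev * EV) * 1e9

print(photon_wavelength_nm(100))    # ~12.4 nm (soft X-ray)
print(photon_wavelength_nm(100e3))  # ~0.0124 nm (the X-ray/gamma boundary)
```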
It is worth noting, informally, that one charged particle could cause secondary emission of another charged particle when colliding with atoms in a stopping medium. For example, when an alpha particle (a He nucleus) collides with an electron, its momentum could accelerate the electron and thus create an energetic secondary electron (a delta ray). Related mechanisms include Auger electron emission and annihilation radiation, which are fairly common. Various other decay-related radiation mechanisms exist which are not included in this review for simplicity. However, what should be noted is that all mechanisms could be characterized by measuring the irradiation magnitude in each individual energy band, which is known as gamma spectroscopy. Using these methods, individual particles and their types could be identified indirectly by looking at the spectral footprint.
Qualitatively speaking, individual particle types interact with matter differently. The figure above shows the specific energy loss in air for a few types of particles. The figure below shows the typical alpha particle penetration depth in air.
One could observe that low-energy alpha particles are easily stopped by air. Over the plotted range, the penetration depth grows steadily with energy, roughly following a power law (Geiger's rule gives R ∝ E^3/2 for alphas in air). Both plots were taken from Glenn Knoll's Radiation Detection and Measurement.
But let's also have a look at the theoretical penetration depth in silicon; the figure below is also adapted from Knoll.
What these plots tell us is that the radiation absorption rate of a material primarily depends on the atomic number (and density) of its elements, with the penetration depth increasing steadily with energy. Crudely speaking, the heavier the element, the higher the chance of a collision with the nucleus or the electron cloud.
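For a rough feel of the numbers, a commonly quoted empirical approximation (Geiger's rule, discussed in range-energy treatments such as Knoll's) estimates the alpha range in air; treat the coefficient below as a ballpark figure, not a precise constant:

```python
def alpha_range_air_cm(e_mev):
    """Empirical Geiger rule: R [cm] ~= 0.318 * E^1.5 (E in MeV),
    valid roughly for 4-7 MeV alphas in air at atmospheric pressure."""
    return 0.318 * e_mev ** 1.5

# A typical 5 MeV alpha (e.g. from Am-241) travels only a few cm in air:
print(alpha_range_air_cm(5.0))  # ~3.6 cm
```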
We will now focus on a high-level overview of some of the basic gamma-ray detection mechanisms in solids.
We could divide detector interactions into three sub-groups: photoelectric absorption, Compton scattering and pair production.
Photoelectron generation with gamma rays is equivalent to the photoelectric effect that occurs with visible light. In fact the process is the same, and with ionizing radiation it typically dominates at low energies. An electron is emitted when a gamma photon is absorbed by a bound electron, transferring essentially all of its energy.
Compton scattering is a process in which a partial energy transfer occurs and a secondary scattered photon is generated. The scattered photon could end up being absorbed elsewhere and contribute to photoelectron production, while the recoil electron from the collision contributes to further interactions.
Pair production is a process occurring at energies above 1.022 MeV (twice the electron rest energy). It occurs when a high-energy photon interacts with the field of a nucleus, producing an electron-positron pair.
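The boundaries between these regimes can be pinned down numerically. The Compton formula for the scattered-photon energy and the pair-production threshold are standard textbook results; a small sketch:

```python
import math

ME_C2_KEV = 511.0  # electron rest energy, keV (rounded)

def compton_scattered_kev(e_kev, theta_rad):
    """Scattered-photon energy: E' = E / (1 + (E/me*c^2) * (1 - cos(theta)))."""
    return e_kev / (1.0 + (e_kev / ME_C2_KEV) * (1.0 - math.cos(theta_rad)))

# A 662 keV photon (Cs-137) backscattered at 180 degrees:
backscatter = compton_scattered_kev(662.0, math.pi)
print(backscatter)          # ~184 keV scattered photon
print(662.0 - backscatter)  # ~478 keV Compton edge (max electron energy)

# Pair production needs at least two electron rest masses:
print(2 * ME_C2_KEV)        # 1022 keV = 1.022 MeV
```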
The typical dominance of the three gamma-ray interactions as a function of their energy is shown in the figure below.
DETECTION OF IONIZING RADIATION
In this section we explore the basic methods for ionizing radiation detection. We first focus on traditional (non-solid-state) methods and later put the focus on solid-state, junction-based detectors.
The simplest (and one of the oldest) methods for ionizing radiation detection is, as its name suggests, the ionization chamber. This is a device which detects ionization events by passing an electric current through electrodes in a (typically) pressurized chamber. Any ionization event contributes to an increase in charge transfer between the electrodes and effectively influences the current passing through the chamber. There are various configurations, which are quite well introduced in Glenn Knoll's book.
Ionization chambers are typically operated in the so-called proportional region of their transfer characteristic. This is also the most linear region, which is why particle energy discrimination is superb there. When operated at elevated electrode voltage, avalanche ionization sets in, which is known as the Geiger-Muller region. There is, however, a substantial difference between the construction and gas content of an ionization detector and a Geiger-Muller detector, which is presented in the next figure.
The primary and most basic difference between an ionization and a Geiger detector is the presence of a stopping (quenching) gas in Geiger detectors. In the presence of a high electric field, avalanche ionization is inevitable and continues as long as an electric current flows. While Geiger-Muller tubes are driven by a high-impedance voltage source, which helps quench the ionization, it is also essential that the process is sped up by another physical mechanism. This is where the quench gas (usually a halogen or organic vapour added to the noble fill gas) helps dampen ionization and speeds up the recovery, and hence the operating dynamic range, of the Geiger tube.
The outer wall material of Geiger detectors usually also acts as a conversion medium by releasing recoil electrons and stimulating an ionization avalanche. The conversion efficiency of typical Geiger-Muller tubes varies between 0.5 and 2%. However, their response as a function of the incident gamma-ray energy is not flat. An example transfer function of typical Geiger-Muller tubes with different wall materials is shown in the figure below.
The poor flatness of the conversion efficiency of GM tubes leads to inaccurate dose measurements if left uncorrected. In order to equalize the sensitivity of GM tubes within the measured energy range, an absorbing material (usually aluminium, copper, or lead) could be positioned at the input window of the tube. This preferentially absorbs low-energy gamma rays and flattens the tube's uneven response at low gamma-ray energies. This procedure is known as energy compensation and is commonly introduced in applications where inaccurate dose-rate readings are unacceptable.
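The filter works by simple exponential attenuation (the Beer-Lambert law). A minimal sketch below; the attenuation coefficients are my own illustrative ballpark figures for aluminium, not vendor data:

```python
import math

def transmitted_fraction(mu_per_cm, thickness_cm):
    """Beer-Lambert law: I/I0 = exp(-mu * x)."""
    return math.exp(-mu_per_cm * thickness_cm)

# Illustrative linear attenuation coefficients for aluminium (rough values):
MU_AL_50KEV = 1.0    # 1/cm, strong photoelectric absorption at low energy
MU_AL_500KEV = 0.23  # 1/cm, much weaker attenuation at higher energy

# A 2 mm Al filter attenuates 50 keV photons noticeably more than 500 keV
# ones, flattening the tube's overall energy response:
print(transmitted_fraction(MU_AL_50KEV, 0.2))   # ~0.82
print(transmitted_fraction(MU_AL_500KEV, 0.2))  # ~0.96
```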
Scintillation detectors are often used for high-energy radiation measurements. The basics of the concept are shown in the figure below.
Scintillators are a type of material which exhibits luminescence when excited by ionizing radiation. Scintillating materials, when struck by an incoming particle, absorb its energy and re-emit it in the form of light. Sometimes the excited state is metastable, so the relaxation from the excited state back down to lower states is delayed. Depending on the process, various wavelengths could be emitted within a relatively narrow bandwidth.
Usually the re-emitted wavelength from typical scintillation materials falls within the visible-light spectrum. A standard silicon photodiode or a sensitive photomultiplier tube could be used to detect the re-emitted photons. Both organic and inorganic crystal scintillators exist, and both rely on intermediate (defect or activator) states within their bandgap.
One of the most common scintillator types is sodium iodide, whose absorption efficiency is shown below.
Sodium iodide has a characteristic footprint of releasing very wide-band visible light. This forms a major drawback when the detector crystal is to be applied to gamma-spectroscopy applications. On the other side of the material spectrum, one of the most efficient and narrowest re-emission bandwidth scintillators is lanthanum bromide, whose absorption rate is shown below.
It is worth noting that the area of high-efficiency scintillators is quickly evolving, and crystals with an even narrower re-emission bandwidth could already exist.
Semiconductor detectors for ionizing radiation have been used since the discovery of the photoelectric effect. Diode-based detectors are typically preferred when cost is a pressing issue, or when fine energy resolution is required by the application.
P-N junction based detectors are typically operated in reverse bias mode, thus in the third quadrant of their IV characteristic. At high radiation levels diode detectors could also be operated in the fourth quadrant in photovoltaic mode, although such configurations are uncommon.
In reverse bias, the formed depletion region acts as a conversion layer, as electrons generated through photoelectric (p-e) interaction are swept away by the electric field. Typically, the deeper the depletion region, the higher the probability of p-e generation. Usually, higher-bandgap semiconductors are the preferred choice when room-temperature operation is required, as their noise levels due to thermal generation are relaxed.
Besides the material's bandgap, the atomic number of the semiconductor also determines its sensitivity to high-energy ionizing radiation. Unfortunately, higher atomic number semiconductors usually also have a low bandgap energy, which requires them to operate at cryogenic temperatures; the most common example is Germanium (Ge). Cadmium-Zinc-Telluride (CdZnTe) is a notable exception, combining high atomic number with a wide bandgap.
In order to improve the signal-to-noise ratio of the detector, there are a few basic mechanisms through which one could maximize the charge collection and p-e efficiency. Using a thicker junction with a wide depletion region, or even a fully-depleted substrate, often guarantees maximum collection efficiency. A higher electric potential also contributes to collection efficiency. Last, the use of high-Z materials extends the detectable energy range, while a small bandgap allows finer energy resolution.
A useful nomogram, adapted from Knoll, showing the link between depletion-region depth, voltage and detection probability is shown below.
By observing the nomogram, it is worth noting that with a depletion reverse-bias voltage of about 10 V, which is already optimistic for a modern standard CMOS process, one could not detect very highly energetic particles.
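The nomogram's message can be cross-checked with the textbook one-sided abrupt-junction approximation d = sqrt(2εV/(qN)). The doping value below is my own illustrative assumption for a generic CMOS substrate, and the built-in potential is neglected:

```python
import math

Q = 1.602176634e-19               # elementary charge, C
EPS_SI = 11.7 * 8.8541878128e-12  # silicon permittivity, F/m

def depletion_depth_um(bias_v, doping_per_cm3):
    """One-sided abrupt junction: d = sqrt(2 * eps * V / (q * N))."""
    n_per_m3 = doping_per_cm3 * 1e6   # convert cm^-3 to m^-3
    return math.sqrt(2.0 * EPS_SI * bias_v / (Q * n_per_m3)) * 1e6

# ~10 V reverse bias on an assumed ~1e15 cm^-3 substrate doping:
print(depletion_depth_um(10.0, 1e15))  # only a few micrometres deep
```

Even a generous 10 V bias depletes only micrometres of silicon, which is why high-energy photons sail straight through a standard CMOS junction.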
The PIN diode structure is the most common configuration among silicon-based detectors. Photoelectron generation is maximized in the intrinsic region. One of the common challenges with silicon as a material is that its reverse-bias dark current grows rapidly with impurity concentration and temperature. Dark current degrades the energy resolution of the detector, which is why gamma spectrometers based on direct-conversion junction detectors are often cryogenically cooled. However, it is worth mentioning that, even at room temperature, direct-conversion detectors offer much higher energy resolution than scintillator-based detectors.
One major drawback of semiconductor-based detectors, compared to scintillation counters, is their lack of wide-bandwidth detection capability, as well as the poor flatness of their response across energies. Shown above is a comparison between Si, Ge and the most common scintillator material, NaI. It is evident that the detection probability of silicon virtually vanishes at energy levels beyond 100 keV.
PRACTICAL SEMICONDUCTOR DETECTORS
There are numerous designs and implementations of practical semiconductor-based detectors. Here I list a few common types, tailored as an intro to the next section, which discusses practical detectors implemented in standard CMOS VLSI processes.
Shown above is a standard diffused junction, as often implemented in standard CMOS processes. This structure is practical neither for low- nor for high-energy applications. The N+ electrode formed on the surface of the depletion region blocks low-energy particles from reaching and interacting in the depletion region. This reduces the sensitivity of the junction to low-energy radiation. On the other hand, silicon is not capable of detecting high-energy radiation, as high-energy rays rarely interact in the shallow depletion region and simply escape the junction. This makes the standard diffused junction an unattractive solution for ionizing radiation detection.
Another candidate, more suitable for low-intensity radiation detection, is the surface-barrier junction detector. The slide below shows the basic structure of a surface-barrier junction, also known as a metal-semiconductor junction.
The primary benefit of a metal-semiconductor junction is the low number of impurities and defects in the semiconductor material, owing to the lack of additional implantation steps. The junction is formed through the deposition of a thin metal electrode. This configuration in principle provides lower dark current compared to implanted junctions; however, it suffers from metal-deposition variations and oxidation, and hence has high variability. In addition, surface-barrier junctions are typically not part of standard CMOS processes and require additional fabrication steps.
Another junction-forming technology is ion implantation. The detection principle of ion-implanted junctions is similar to that of diffused junctions. The difference lies in the tight control of the N+ side of the junction, which could be made very thin. This in turn provides a thin entrance window for low-energy gamma rays, and thus improved detection efficiency.
Ideally, to maximize the detection efficiency of the P-N junction, a fully-depleted biasing structure like the sketch shown below could be used.
This configuration aims to maximize detection probability by physically widening the depletion region of the detector, so that it spans up to the surface of the silicon wafer. An enhanced sensitive window is thus formed, which allows the detection of even low-energy alpha particles hitting the surface of the detector. A set of fully-depleted detectors could also be stacked to further enhance detection probability and allow for detection of Compton-scattered particles.
Another idea, which admittedly I haven't spotted in a CMOS implementation, is to use the same approach as the stacked-junction technology introduced by Carver Mead and Foveon. Basically, the charge generated in each junction is collected depending on the particle energy. One could thus use such a hypothetical technique to enhance the energy resolution of the PN detector as well as to improve collection efficiency. Albeit, all of that is rather hypothetical given typical CMOS wafer epi thicknesses. Perhaps a custom process could do, though.
Shown above are some typical FSI/BSI CMOS image-sensor junctions for radiation imaging. One could conclude that fully-depleted, back-thinned pixels could drastically improve the quantum efficiency of the junction at low-energy radiation levels. Incremental technological improvements are shown in the progression from left to right: front-side illuminated pixels are inefficient due to the thickness of the dielectrics which insulate the metal routing; backside illumination is a natural progression to improve quantum efficiency, while back-thinning boosts it further as the distance to the epi layer is reduced even more. Finally, full junction depletion forms a high-gain collection zone for low-energy alpha particles.
There are other semiconductor materials which are superior to silicon in detection resolution, efficiency and bandwidth. Probably the most common alternative is CdZnTe. This material is formed of heavy, high-atomic-number elements and can resolve high-energy rays. In addition, it is also a wide-bandgap semiconductor, which makes it more resistant to thermal fluctuations, so it could be operated with sufficient resolution even at elevated temperatures.
On the other end of the bandgap spectrum lie high-purity Germanium (HPGe) and Lead Sulphide (PbS) detectors. These materials possess a narrow bandgap, which allows for finer energy resolution. This, however, comes at the cost of thermal noise, which eventually masks the signal and requires either cryogenic operation or massive statistical oversampling to reach the desired energy resolution.
There are other types of semiconductor detectors, notably from the rapidly growing field of avalanche diodes. Avalanche detectors use a very strong electric field to stay on the edge of avalanche breakdown. A particle could thus trigger an avalanche breakdown, discharging the junction and forming a countable pulse. APD detectors operate in a similar way to Geiger-Muller tubes.
Other application-specific detectors include microstrip, drift, resistive charge-division and various organic detectors. One particular type of detector which is suitable for integration in CMOS is the floating-gate detector. These are also known as oxide detectors, as they utilize the gate oxide of a MOSFET as a detection medium. A sketch of a floating-gate detector is shown below.
Floating-gate devices formed through STI isolation could also be used as low-cost dosimeters. Such structures resemble the floating-gate devices used in Flash/EEPROM technologies. The gate is reset and electrons are trapped on the isolated well. Once particles hit the well, charge is transferred from the trapped area to the p-well and eventually lost. At the end of the integration time window, the floating gate is read out through a readout FET. The charge lost during integration shifts the equivalent threshold voltage of the readout FET and thus provides information about the accumulated dose.
This structure is also referred to as a C-sensor. Below are some response graphs of devices implemented and measured by Pikhay et al.
A FEW CMOS-BASED IMPLEMENTATIONS FOR DOSIMETRY
Another well-popularized project based on floating-gate FET detection is the MOSKIN dosimeter. It is a miniature device aimed at clinical dosimetry in CT imaging.
The floating gate FET approach offers great benefits for miniaturization and cost reduction. The reported threshold voltage shift for the MOSKIN dosimeter is about 2.5 mV/cGy.
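Given the quoted ~2.5 mV/cGy figure, converting a measured threshold shift back to dose is a one-liner. The linear model below is my own simplification; real devices drift and eventually saturate:

```python
SENSITIVITY_MV_PER_CGY = 2.5  # quoted threshold-voltage shift per centigray

def accumulated_dose_cgy(delta_vth_mv):
    """Dose estimate from the readout FET's threshold-voltage shift."""
    return delta_vth_mv / SENSITIVITY_MV_PER_CGY

# A 50 mV measured shift would correspond to about 20 cGy (0.2 Gy):
print(accumulated_dose_cgy(50.0))  # 20.0
```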
Another interesting lab-on-a-chip implementation, for characterization of alpha-therapy pharmaceuticals, is the one reported by Griffin et al. This university project contained a set of multiple MOS-based detectors spread across a single silicon die. The isotope in the treatment/imaging drug is spread directly across the chip, allowing a direct measurement of the strength and fallout time of the isotope. Unfortunately, I'm not sure whether this detector approach has left the experimental lab testbench and made it to the mass market.
A similar strategy to the previous MOS-based lab-on-a-chip is the work reported by Rosenfeld et al. Their work uses an array of avalanche detectors; similar to a silicon photomultiplier, each individual junction adds to a total combined integrated current to be measured. Their work also targets isotope measurement through the microdosimeter array.
A UK-based company, Kromek, also offers a set of 2D-array devices for spectroscopy applications. They offer various devices, and I suspect that some of them use SPAD-based avalanche detection methods, although fine-energy spectrometry is perhaps only possible through standard integration-based pulse counting and discrimination. In the case of the DANA II device shown above, I suspect the detectors are some form of standard PN junctions with fine current-sensitive amplifiers and pulse discriminators.
Below is shown another device whose specifications point towards the use of PN devices with current-sensitive amplifiers and pulse discriminators. The detected energy range spans 200 to 600 keV, which suggests that the semiconductor material used is not likely to be silicon (even doped silicon), but rather some other material such as Ge or even CdZnTe.
Another impressive device is a silicon detector with integrated readout electronics in a hybrid package, provided by Teviso. According to the published specifications, it has a linear response and could detect radiation energies from 50 keV up to 2 MeV, which is truly impressive for a silicon device.
I am still a bit sceptical about its noise floor and the listed measurement range of 0.1 to 100 mSv/h. The low end of the spec is most likely valid only for very long measurement periods. Below are its count vs dose rate characteristics.
It could be noted that at background gamma-radiation levels the sensor produces less than one count per minute (presumably at room temperature). This is why I think that, to get reliable information from the sensor at low radiation levels, one needs to rely on massive oversampling, and thus accept a slow measurement response.
Teviso have also included a block diagram which suggests a classic charge amplifier implementation and integration-mode readout.
There are plenty of devices and measurement techniques out there. Obviously, the field of radiation detection within the short-wavelength electromagnetic spectrum is enormous. This post aimed to provide a very quick (albeit messy) intro to ionizing radiation and some common methods for its detection.
Other literature and references to some of the presented material in the pictures above are listed below:
Some of the presented material above was taken from Radiation Detection and Measurement by Glenn Knoll. I would highly recommend checking out his book, which is a bible in the field of radiation detection. I also hope I haven't breached the copyright of John Wiley & Sons too badly.
I recently stumbled upon a Wikipedia article about Rendition, a Silicon Valley based company which flourished straight after its foundation but unfortunately existed only a few years before going downhill. What draws attention is the abrupt reversal of the company's growth, which was triggered by a few transistors positioned in the wrong place at the wrong time. Let's have a look at an excerpt from the Wikipedia article; this paragraph was most likely edited by an insider, judging by the high level of detail.
Rendition was one step behind other competitors coming to market at a pivotal time in the 3D PC graphics engines battle. The NVIDIA RIVA 128 came to market in late 1997. The V2100 saw first silicon in early 1997, but was late to sample due to a digital cell library bug necessitating a respin. Rendition used the libraries developed by SiArch (licensed through Synopsys at that time) for their digital logic synthesis. A critical section of circuitry happened to synthesize into a 3 input nor-gate driving a scanned flip-flop. Apparently this combination was never spiced (an accurate circuit simulation engine) by SiArch. The scan-flop had three passive transmission gate muxes driven by the three n-type transistors in the NOR3, all in series. The result of this was excessive resistance with a weak bus-hold cell, which ate into the allowable noise margin and violated the static discipline in good digital logic design. This manifests itself as an intermittent bug that is seen in the lab but not in high level behavioral or even RTL or gate-level simulations. This root cause was only determined after months of investigation, simulations, and test case development in the lab, which narrowed the problem to a very confined space. At that point, the chip was run live under a scanning electron microscope using the oscilloscope probe mode to find the problem net between the NOR3 gate and the scan-flop. The combination was then spiced and confirmed to be the culprit. Two full quarters were lost due to this bug. Despite these delays, the V2x00 shipped with fully conformant OpenGL and D3D drivers.
In conclusion: even a 10 um rock could flip the cart.