


Surface dynamics of charge carriers measured using an indirect technique

This is a quick follow-up on a previous post related to atomic manipulations with a scanning tunneling microscope. A few days ago Kris published a part of her work in Nature Communications — "Initiating and imaging the coherent surface dynamics of charge carriers in real space", wohooh, congratulations! I paid a visit to their lab back in 2013 (the time when she had a microscope with a gigantic chamber full of "dust", sadly no picture here) and in 2015 (when her group moved into the optical spectroscopy lab with a new high-performance mini-chambered microscope). I thought I'd continue the tradition of writing blue-sky nonsense about their work and put down a few words from an ordinary person's perspective.

Apparently, plugging atoms in and out with the tip of an STM is a lot more complicated than it looks at first sight. The tip itself is a source of charge carriers, as that's how the STM works. When the tip hovers above the scanned sample it causes carriers to tunnel through to the surface of the sample. Naturally, such charges can lead to a counter-reaction and a disturbance of the spatial positions not only of the atoms directly underneath the microscope's tip, but also of those within a quite large radius around the tip itself. This paper describes the initial (at the start of injection) charge carrier dynamics on the scanned surface, which have been deduced by looking at the scattering patterns of toluene molecules. Oh well, or at least that is what I understood :)

The first figure in the paper shows images of toluene molecules scattered on the surface of a Si(111) sample. A comparison between images before and after (direct?) atomic manipulations is shown. If one takes a bird's eye view of the toluene molecules before and after the manipulation attempt, it seems that after the tip's bias has settled and reached a steady state, the toluene molecules move in a systematic direction that makes them clump together. I.e. my interpretation is that in the before images the toluene is a lot more scattered than after the manipulations. This change of position of the toluene molecules on the surface of the sample is a source of information about the charge carriers' path (?) and hence behaviour. Right after the charge carriers reach the surface of the sample, they undergo a number of scattering events before reaching a steady state (steady state = very vague expression). During that transitional time the carriers transfer their energy to the atoms on the surface, which leads to their movement and bond breaking. This movement gives indirect information about the dynamics of the electrons/holes themselves.

Surely this paper is a lot more involved and is way beyond my level of understanding of quantum physics for me to summarize it in my own words in a way that makes sense. But what really fascinates me — and that's why I started writing this ambitious post — is mankind's achievements in material science to this date. I mean, these things are at roughly the ~1 nanometer scale, and the hole/electron dynamics described occur on the order of femtoseconds... i.e. that's easily deep inside the THz range. And when you add ambient temperature effects the picture becomes a complete mess, yet it is quite well understood. On the other hand, carrier dynamics on the order of ~100s of femtoseconds is not that fast; maybe that's why modern-day electronics is still struggling to conquer the THz gap?

Date:Sun Oct 2 14:58:51 CET 2016




Point Spread-Function, Optical and Modulation Transfer Functions

In solid-state imaging, the obtained image contrast and spatial resolution, apart from the electrical circuit performance with respect to crosstalk and linearity, depend largely on the spatial organization of the light-sensing pixels. An image sensor can essentially be looked at as a non-ideal "electrical" transducer that distorts the light information fed through it. In the general case, even if we exclude all electrical complexities, an imaging system's performance can be extremely complicated to evaluate due to the numerous second-order effects accumulated along the optical signal path. This implies that we need to apply some restrictions to the degrees of freedom in our analyses, and hence apply a divide-and-conquer approach suitable to the particular application. This post aims to give an introduction to the Modulation Transfer Function parameter and its implications in image sensor optical array design.

In that respect, two fundamental properties of imaging systems can be identified:

1. Linearity — just as for any other black-box system, it implies that the output corresponding to a sum of inputs should be equal to the sum of the outputs of those inputs processed individually.

2. Invariance — the projected image in the spatial domain remains the same even after the imaged object is moved to another location in "space".

Linearity is usually taken care of by accurate readout electronics and photodetector design. Invariance, or spatial resolution, however, greatly depends on the chosen geometrical shape of the pixels and their arrangement in space. A common benchmark parameter for imaging systems is their Modulation Transfer Function (MTF), which is to a large extent linked with the arrangement of the array elements. In order to continue with the definition and implications of the MTF, we must first have a look at what a Point Spread Function (PSF) in optics is.

Imagine you can find a point light source (e.g. a torch) with an infinitely small aperture, which you then point to the image sensor in your mobile phone camera. What you would expect to see on the display is the same point at the same location where you pointed the beam:

Beam with an infinitely small aperture projected on the image sensor plane

Real optical systems, however, suffer from optical imperfections, which result in a smearing out of the energy around the infinitely small aperture beam you pointed towards the system, hence yielding a loss of sharpness. The Point Spread Function provides a measure of the imaging system's smearing of a single, infinitely small, physical point in the imaging plane. In the case of a real pixel in an image sensor, its PSF can be defined as a function of its effective aperture:

Effective spatial aperture definition of a pixel/photodetector

The PSF for the region within the photodiode is equal to some constant (modulated by the quantum efficiency of the photodiode), and for the region where the pixel is covered with metal we can assume that the PSF is zero. (Metal layers in modern integrated circuits are so thin, though, that photons can still tunnel through them.) Then, formally:

$$ S(x) = \begin{cases} s_{o} & x_{o} - L/2 \le x \le x_{o} + L/2 \\ 0 & \text{otherwise} \end{cases} $$

In simple words, this represents a boxcar function.

Point Spread Function for the above pixel is similar to a boxcar function

By knowing the Point Spread Function of the array, we can estimate the global Optical Transfer Function of the whole imaging system through a simple mathematical exercise. Note that the PSF is a spatial-domain function. Just as in electronics, where apart from looking at signals in the time domain, frequency-domain analyses prove extremely representative too, the same holds for 2D imaging and its spatial and frequency domains. The Optical and Modulation Transfer Functions are usefully represented in the frequency domain. But what exactly are the OTF and MTF? Let's have a look at the physical effects they describe by examining the following pattern:

If we feed in an ideal optical pattern like the one shown at the top, then due to the limited single-photodetector aperture (opening/fill factor), the image reconstructed at the output would be smeared out, just as in the torch test with an infinitely small aperture. Depending on the effective aperture size (pixel fill factor) as well as the geometrical arrangement of the individual pixels, the smearing has a different magnitude at different spatial sizes. To simplify the optical system's evaluation, this smearing can be expressed in the frequency domain, which is what the OTF and MTF do. An imaging system can be viewed as a low-pass filter in the frequency domain, thus the OTF and MTF represent the system's amplitude-frequency characteristics.

Similar to electronics, the OTF can be derived by computing the Fourier transform of the Point Spread Function, which is the equivalent of the impulse response in electronic linear time-invariant systems:

$$ S(f) = \int_{-\infty}^{\infty} S(x)\, e^{-j 2 \pi f x}~dx $$

Substituting with the PSF we get the following definition:

$$ S(f) = s_{o} \int_{x_{o}-L/2}^{x_{o}+L/2} e^{-j 2 \pi f x}~dx \propto \frac{\sin(\pi f L)}{\pi f L} $$

Simple overview of spatial vs frequency domain in optical systems

What is the difference between the OTF and the MTF? The OTF contains negative and imaginary parts from the Fourier transformation and thus also carries phase information. The MTF is defined as the ratio of the output modulation to the input modulation as a function of the spatial frequency, typically expressed in cycles (line pairs) per mm. To normalize it, the MTF is derived from the absolute value of the OTF, divided by the OTF at zero frequency (or DC, in analogy with electronics).

$$ MTF(f) = \frac{|S(f)|}{|S(f=0)|} $$

Hence, the MTF is the magnitude of a sinc function:

$$ MTF(f) = \left| \frac{\sin(\pi f L)}{\pi f L} \right| $$
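As a quick numerical illustration (a minimal sketch, not code from the post), the C snippet below evaluates this aperture MTF for a hypothetical pixel of 5 um pitch and 100% fill factor; both values are assumptions chosen only for the example.

#include <stdio.h>
#include <math.h>

/* |sinc| response of a boxcar aperture of width l_um (um) at spatial frequency f (cycles/mm) */
static double aperture_mtf(double f_cyc_per_mm, double l_um)
{
    double x = acos(-1.0) * f_cyc_per_mm * (l_um * 1e-3);   /* pi * f * L, with L in mm */
    return (fabs(x) < 1e-12) ? 1.0 : fabs(sin(x) / x);      /* MTF(0) = 1 by normalization */
}

int main(void)
{
    const double pitch_um = 5.0;                             /* assumed 5 um pixel pitch */
    const double nyquist  = 1.0 / (2.0 * pitch_um * 1e-3);   /* 100 cycles/mm            */

    for (int i = 0; i <= 4; ++i) {
        double f = nyquist * i / 4.0;
        printf("f = %6.1f cyc/mm   MTF = %.3f\n", f, aperture_mtf(f, pitch_um));
    }
    return 0;
}

At the Nyquist frequency of such an array the boxcar-aperture MTF drops to 2/pi, i.e. about 0.64, which the loop above reproduces.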

Various pixel array configurations exist, which directly affect the MTF coming from the sensor's aperture. It is very important to note that the aperture MTF is neither the only source of degradation nor a global MTF parameter. When imaging moving objects, for example in Time-Delay-Integration (TDI) mode, a discrete (temporal) MTF contribution from object synchronization and misalignment also occurs. In addition, the lens also contributes to the total camera system's MTF. The good news is that, as the MTF is normalized and dimensionless, we can simply multiply the MTFs of all sources of degradation to obtain the global MTF, as sketched below. Again, just as in electronics, the order in which the MTF degrades along the chain is important. As with electrical amplifiers and the Friis formula, we should, where possible, place the optical part with the highest MTF first along the signal chain. Unfortunately, in optics, swapping parts along the signal chain is a rather more difficult task than it is with amplifier chains.
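As a minimal sketch of the cascading idea, the snippet below multiplies a few made-up component MTFs (lens, pixel aperture, TDI synchronization) at one spatial frequency to obtain the system MTF; the individual values are purely illustrative placeholders.

#include <stdio.h>

int main(void)
{
    /* illustrative component MTFs at one spatial frequency */
    const double mtf_components[] = { 0.80 /* lens */, 0.64 /* pixel aperture */,
                                      0.95 /* TDI synchronization */ };
    double mtf_system = 1.0;

    for (unsigned i = 0; i < sizeof mtf_components / sizeof mtf_components[0]; ++i)
        mtf_system *= mtf_components[i];

    printf("system MTF at this frequency: %.3f\n", mtf_system);   /* about 0.49 */
    return 0;
}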


Date:Sun Aug 11 10:34:27 CET 2016




A quick review of Time-to-Digital Converter architectures

I have recently been looking at various ADC architectures employing coarse-fine interpolation methods using TDCs, so I thought I'd share a quick list with brief explanations of the most common TDC architectures.

1. Counter based TDC – Probably the simplest method to digitize a time interval is to use a counter gated by the interval itself. Counter based TDCs use a reference clock oscillator and asynchronous binary counters controlled by a start and a stop pulse. A simple timing diagram depicting the counter method's principle of operation is shown below:

Counter based TDC principle of operation

The time resolution of counter based TDCs is determined by the oscillator frequency, and their quantization error generally depends on the clock period. Counter method TDCs offer a virtually unlimited dynamic range, as the addition of a single bit doubles the counter's capacity. This, however, comes at the cost of a comparatively coarse quantization step. The start signal can be either synchronous or asynchronous with the clock. In the case of asynchronous start and stop pulses the quantization error can be totally random, while if the start pulse is synchronous with the clock, the quantization error is fully deterministic, excluding second-order effects from clock and start pulse jitter respectively. With current CMOS process nodes (45 nm or so), the highest achievable resolutions range from a few nanoseconds down to about 200 picoseconds. The counter based TDC's resolution is generally limited by the reference clock frequency stability and by the metastability and gate delay of the stop latches. To increase the counting speed, a scheme often referred to as Double Data Rate (DDR) counting can be employed, which uses a counter incrementing its value on both the rising and falling edges of the count clock.
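As a toy illustration of the counter principle (a sketch with made-up numbers, not tied to any particular design), the snippet below quantizes a 13.7 ns interval with an assumed 500 MHz reference clock and reports the residual quantization error.

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double f_clk = 500e6;            /* assumed 500 MHz reference clock */
    const double t_clk = 1.0 / f_clk;      /* 2 ns LSB                        */
    const double t_in  = 13.7e-9;          /* interval to digitize, 13.7 ns   */

    unsigned counts = (unsigned)floor(t_in / t_clk);   /* counter value at the stop pulse */
    double   t_q    = counts * t_clk;                  /* reconstructed interval          */

    printf("counts = %u, reconstructed = %.1f ns, quantization error = %.1f ns\n",
           counts, t_q * 1e9, (t_in - t_q) * 1e9);
    return 0;
}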

2. Nutt interpolated TDC (Time Intervalometer) – initially proposed by Ronald Nutt [1], the Nutt architecture is based on measuring the quantization errors formed in the counter method and subtracting these to form the final counter value. The sketch below depicts the basic principle of the Nutt method:

Principle of Nutt interpolation method

Typically a short-range TDC is used for fine, synchronous measurement of the quantization error between the counter's clock and the stop pulse. The short-range TDC used as a fine quantizer can be of any type, as long as its input range matches the largest expected quantization error to be measured, which is the counter's clock period $T$. If a Double Data Rate (DDR) counter is used, the input range is reduced to $T/2$. The overall precision of a TDC employing the counter scheme with Nutt interpolation is improved by a factor of $2^{M}$, where $M$ is the resolution of the fine TDC in bits.

The challenges in the design of Nutt interpolator based TDCs are generally linked with the difficulty of matching the gain of the fine TDC to the clock period of the coarse counter. Both the INL and DNL of the fine interpolation TDC are translated into a static DNL error at the combined output. Moreover, any noise in the fine TDC also translates into a DNL error in the final TDC value, which creates a non-deterministic DNL error performance that cannot be corrected for.
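To make the combination step concrete, here is a minimal sketch assuming the usual formulation $T = N \cdot T_{clk} + t_{res,start} - t_{res,stop}$, where the two residuals are the fine-TDC measurements from the start and stop edges to the following clock edges. The sign convention and all numbers are illustrative assumptions rather than values from a specific implementation.

#include <stdio.h>

int main(void)
{
    const double   t_clk       = 2.0e-9;   /* coarse clock period, 2 ns              */
    const unsigned n_coarse    = 6;        /* clock edges counted between the pulses */
    const double   t_res_start = 1.3e-9;   /* fine TDC: start edge to next clock     */
    const double   t_res_stop  = 0.4e-9;   /* fine TDC: stop edge to next clock      */

    double t_interval = n_coarse * t_clk + t_res_start - t_res_stop;
    printf("combined interval = %.2f ns\n", t_interval * 1e9);   /* 12.90 ns */
    return 0;
}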

3. Flash TDC – This is probably one of the simplest short-range TDC architectures. It uses a clock delay line and a set of flip flops controlled by the stop pulse for strobing the phase-delayed start pulse.

Basic Flash TDC core architecture

It can be employed in a standalone asynchronous scheme, where the start pulse is fed to the delay line and the stop pulse gates the flip flops. Alternatively, it can be used as an interpolator to the counter scheme, in which case its start pulse is controlled by the low-frequency count clock. In the latter configuration it is important that the sum of the delays in the delay line matches the clock period $T$, or $T/2$ in the case of a DDR counting scheme.

Typically, delay synchronization is achieved by deploying a phase locked loop, which keeps the last delayed clock in phase with the main count clock period. Usually the number of delay elements is chosen as a power of two, $2^{N}$. The value strobed into the flip flops is thermometer coded, which is subsequently converted to binary and added to the final value. Alternatively, a scheme employing a ring oscillator incrementing the binary counter can also be used. In such schemes the complexity is moved from a PLL design to a constant-gain ring oscillator challenge.

An important aspect of Flash TDCs is that their time resolution is still limited to a single gate delay of the CMOS process.
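For completeness, here is a small behavioural sketch of the decoding step: counting the ones in the thermometer code strobed into the flip flops. Real designs usually add bubble-error tolerance; the 8-tap snapshot below is an arbitrary example.

#include <stdio.h>

/* number of delay taps the start edge has passed before the stop strobe */
static unsigned thermometer_to_binary(const unsigned char *taps, unsigned n_taps)
{
    unsigned ones = 0;
    for (unsigned i = 0; i < n_taps; ++i)
        ones += (taps[i] != 0);
    return ones;
}

int main(void)
{
    /* snapshot of an 8-tap line: the start edge reached the first five taps */
    const unsigned char taps[8] = { 1, 1, 1, 1, 1, 0, 0, 0 };
    printf("fine code = %u\n", thermometer_to_binary(taps, 8));   /* prints 5 */
    return 0;
}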

4. Vernier TDC – The Vernier TDC, compared to the Flash TDC, aims to improve the converter resolution beyond the gate-delay limit. A principle diagram of a classic Vernier architecture is shown below:

Basic Vernier TDC core architecture

Two sets of delay chains with a phase difference are used, where the stop signal's delays use a slightly smaller time delay. This causes the stop signal to gradually catch up with the start signal as it propagates through the delay line. The time resolution of Vernier TDCs is practically determined by the time difference between the delay lines, $t_{res}=\tau_{1}-\tau_{2}$. If no gain calibration of the digitized value is intended, the choice of time delay and number of delay stages in Vernier TDCs should again be carefully considered. The delay chosen for the start pulse divided by the delay time difference should equal the number of delay stages used, which, to retain binary coding, would be $2^{N}$:

$$\frac{t_{ds}}{t_{d}}=2^{N}$$

The Vernier TDC principle is inspired by the secondary scale in calipers and micrometers for fine resolution measurements. It was invented by the French mathematician Pierre Vernier [2].
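A behavioural sketch of the catch-up mechanism is shown below: the start edge propagates through $\tau_{1}$ delays, the stop edge through slightly faster $\tau_{2}$ delays, and the output code is the first stage at which the stop edge overtakes the start edge. The 100 ps / 90 ps stage delays, the 43 ps input and the 16-stage line are illustrative assumptions.

#include <stdio.h>

int main(void)
{
    const double   tau1     = 100e-12;   /* start-line delay per stage, 100 ps */
    const double   tau2     =  90e-12;   /* stop-line delay per stage,   90 ps */
    const double   t_in     =  43e-12;   /* start-to-stop interval to digitize */
    const unsigned n_stages = 16;

    unsigned code = 0;
    for (unsigned i = 1; i <= n_stages; ++i) {
        double t_start_i = i * tau1;           /* start edge arrival at arbiter i */
        double t_stop_i  = t_in + i * tau2;    /* stop edge arrival at arbiter i  */
        if (t_stop_i <= t_start_i) { code = i; break; }
    }
    printf("code = %u (LSB = %.0f ps)\n", code, (tau1 - tau2) * 1e12);   /* code = 5 */
    return 0;
}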

5. Time-to-Voltage Converter + ADC – the TVC architecture converts a time interval into a voltage. It is difficult to achieve a high time resolution with such schemes, as traditionally the only reasonable way of converting time into voltage is to use a current integrator, in which a capacitor is charged with a constant current for the duration of the measured time interval.

Basic principle of TVC TDCs

After the time measurement is complete, a traditional ADC is used to quantize the integrated voltage on the capacitor.

These architectures could be used in applications where mid- or long time-measurement ranges are required. For practical reasons, TVC-type TDCs do not suit well as time interpolators combined with counter based TDCs. The noise and linearity difficulties of high-speed current integrators set a high lower bound on the time dynamic range of TVCs.
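As a first-order sketch of the TVC principle, the snippet below computes the capacitor voltage $V = I \cdot t / C$ for an assumed charging current, integration capacitor and input interval, followed by an ideal ADC quantization; all component values are illustrative.

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double i_chg = 10e-6;     /* charging current, 10 uA (assumed) */
    const double c_int = 1e-12;     /* integration capacitor, 1 pF       */
    const double t_in  = 50e-9;     /* measured time interval, 50 ns     */
    const double v_ref = 1.0;       /* ADC full scale, 1 V               */
    const int    n_bit = 10;        /* ADC resolution in bits            */

    double v_cap = i_chg * t_in / c_int;                      /* V = I*t/C = 0.5 V */
    int    code  = (int)floor(v_cap / v_ref * (1 << n_bit));  /* ideal ADC output  */

    printf("V(cap) = %.3f V, ADC code = %d\n", v_cap, code);  /* 0.500 V, 512 */
    return 0;
}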

6. Time Difference Amplifier based TDC – one concept for time difference amplification (TDA) in the digital domain was introduced by Abas et al. [3]. An analog time stretcher allows for resolution improvement by amplifying the input time interval and successively converting it with a lower resolution TDC. The concept is similar to the analog voltage gain stages placed at the input of ADCs. A time amplifier based on the architecture originally reported by Abas et al. [3] is shown below:

TDA principle and a basic TDA cell

The circuit represents a winner-takes-all scheme, where the gates are forced into a metastable state performing a mutual exclusion (MUTEX) operation; the consecutive output inverters apply regenerative gain to the MUTEX element. By using two MUTEX elements connected to the start and stop signals in parallel, offset in time, and then edge-combining their outputs, one can amplify the time difference of the initial edges. The difference in the output voltages of a bistable element in metastability is approximately $\Delta_{V} = \theta \, \Delta_{t} \, e^{t/\tau}$, where $\tau$ is the intrinsic gate time constant, $\theta$ the conversion factor from time to initial voltage change at the metastable nodes and $\Delta_{t}$ is the incoming pulse time difference [4]. We can note that the rate of change at the metastable node is exponentially dependent on time. Thus, if we intentionally delay two MUTEX elements "forwards" and "backwards" in time and combine their edges, we acquire a linear time difference amplification caused by the logical edge combination. The output time difference in the corresponding case would be equal to $\Delta_{out} = \tau \ln(T_{d} + \Delta_{in}) - \tau \ln(T_{d} - \Delta_{in})$. Several variations of the latter circuit exist, most of which are based on the logarithmic voltage dependence of latches in the metastable state.

All time difference amplifier TDC approaches impose challenges related to the linearity of TDAs as well as their usually limited amplification factors.
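To get a feel for the numbers, the sketch below evaluates the logarithmic transfer function quoted above for a few input differences and compares the resulting gain with the small-signal value of roughly $2\tau/T_{d}$; the chosen $\tau$ and $T_{d}$ are arbitrary assumptions.

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double tau = 200e-12;   /* assumed intrinsic gate time constant, 200 ps */
    const double td  = 100e-12;   /* assumed built-in offset delay Td, 100 ps     */

    for (int i = 0; i < 3; ++i) {
        double d_in  = (5 + 10 * i) * 1e-12;                        /* 5, 15, 25 ps */
        double d_out = tau * log(td + d_in) - tau * log(td - d_in);
        printf("d_in = %2.0f ps   d_out = %6.2f ps   gain = %.2f\n",
               d_in * 1e12, d_out * 1e12, d_out / d_in);
    }
    printf("small-signal gain ~ 2*tau/Td = %.2f\n", 2.0 * tau / td);
    return 0;
}

The nearly constant gain for small inputs, with a slight expansion towards the edges of the range, illustrates the linearity limitation mentioned above.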

7. Successive approximation TDC – this architecture uses the binary search algorithm to resolve a time interval. A principle diagram of a 4-bit binary search TDC is shown, as presented by Miyashita et al. [5].

Successive approximation TDC architecture

Imagine that the stop pulse slightly leads the start pulse with a time difference of $\Delta = 1.5\tau$. The arbiter D3 would then detect a lead and reconfigure the multiplexer delays such that they lag the start signal by $4\tau$, thus leading to a difference of $2.5\tau$. The resolved MSB in that case would be the inverted value of D3, or '0'. Further on, arbiter D2 detects a lead of the start pulse over the stop by $2.5\tau$, in which case it reconfigures the multiplexer delays in stage two to lag the stop signal by $2\tau$. The MSB-1 value is the inverted value of the D2 arbiter, which would thus be '1'. The MSB-2 value would respectively be '0', as the stop signal now leads by $0.5\tau$. Finally, arbiter D0 deciphers a '1' due to the start signal now leading by $0.5\tau$.

This topology effectively utilizes a time-domain SAR scheme and has a time resolution of $\tau$. Open-loop binary search schemes such as the one proposed here [5] require good matching between the delay elements, which typically suffer from low PSRR. Nevertheless, this SAR scheme is relatively new to the TDC world and might provide us with promising food for future research. Compared to the Flash TDC, the SAR scheme reduces the number of strobe latches from $2^{N}$ to $N$.
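Below is a behavioural sketch of the binary search itself, not of the delay-multiplexer implementation from [5]: the residual time difference is compared against binary-scaled delays and the corresponding bits are set. The 25 ps LSB and the 160 ps input are illustrative assumptions.

#include <stdio.h>

int main(void)
{
    const double tau   = 25e-12;    /* time LSB, 25 ps (assumed)        */
    const double t_in  = 160e-12;   /* interval to digitize (6.4 * tau) */
    const int    nbits = 4;

    double rem  = t_in;
    int    code = 0;
    for (int k = nbits - 1; k >= 0; --k) {
        double trial = (double)(1 << k) * tau;   /* delay tried at this stage */
        if (rem >= trial) {                      /* arbiter: which edge leads */
            rem  -= trial;
            code |= (1 << k);
        }
    }
    printf("code = %d (%.0f ps in steps of %.0f ps)\n",
           code, code * tau * 1e12, tau * 1e12);   /* code = 6 */
    return 0;
}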

8. Stochastic TDC – this family of TDCs uses the same core principle of operation as Flash TDCs, however, it is built with a large degree of redundancy. Here is a sketch:

Principle diagram of a Stochastic TDC architecture

Because the Flash latches have an intrinsic threshold mismatch, they practically introduce a natural Gaussian dither into the resolution of each Flash bit. If we read out the values of all latches and average them, we can extract digitized values with sub-quantization-step resolution. In order to take advantage of the oversampling, however, such TDCs require a large set of latches and delay elements, which comes at the cost of power. They are traditionally reserved for DLL-type applications and often require digital back-end calibration [6].
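A quick Monte Carlo sketch of the averaging idea follows: each latch is given a Gaussian threshold offset around the same nominal sampling instant, and the fraction of latches resolving '1' tracks the cumulative Gaussian of the input edge position, which is how sub-LSB information can be recovered. The mismatch sigma, the latch count and the input offset are made-up numbers for illustration only.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* crude standard-normal sample via the Box-Muller transform */
static double randn(void)
{
    double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * acos(-1.0) * u2);
}

int main(void)
{
    const int    n_latches = 1024;
    const double sigma     = 10e-12;   /* assumed latch threshold mismatch, 10 ps rms */
    const double t_in      =  4e-12;   /* input edge offset to estimate, 4 ps         */

    srand(1);
    int ones = 0;
    for (int i = 0; i < n_latches; ++i)
        if (t_in > sigma * randn())    /* latch fires if the edge beats its threshold */
            ones++;

    /* the ratio of ones approximates the cumulative Gaussian at t_in/sigma */
    printf("fraction of ones = %.3f (expected ~%.3f for a 4 ps input)\n",
           (double)ones / n_latches, 0.5 * (1.0 + erf(t_in / (sigma * sqrt(2.0)))));
    return 0;
}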

References:

[1] Ronald Nutt, Digital Time Intervalometer, Review of Scientific Instruments, 39, 1342-1345 (1968)

[2] Alistair Kwan, Vernier scales and other early devices for precise measurement, American Journal of Physics, 79, 368-373 (2011)

[3] A. M. Abas, A. Bystrov, D. J. Kinniment, O. V. Maevsky, G. Russell and A. V. Yakovlev, Time difference amplifier, in Electronics Letters, vol. 38, no. 23, pp. 1437-1438, 7 Nov 2002

[4] D. J. Kinniment, A. Bystrov and A. V. Yakovlev, Synchronization circuit performance, in IEEE Journal of Solid-State Circuits, vol. 37, no. 2, pp. 202-209, Feb. 2002

[5] D. Miyashita et al., An LDPC Decoder With Time-Domain Analog and Digital Mixed-Signal Processing, in IEEE Journal of Solid-State Circuits, vol. 49, no. 1, pp. 73-83, Jan. 2014

[6] A. Samarah and A. C. Carusone, A Digital Phase-Locked Loop With Calibrated Coarse and Stochastic Fine TDC, in IEEE Journal of Solid-State Circuits, vol. 48, no. 8, pp. 1829-1841, Aug. 2013

Date:Thu Jul 21 17:24:54 CET 2016




Relative luminance calculation in a bitmap

Luminance is a measure of the light intensity per unit area of light travelling in a given direction. In the case when light travels towards the lens of a camera, we can measure and calculate the so-called log-average (or relative) luminance by finding the geometric mean of the luminance values of all pixels. If we deal with black and white images, the luminance value is the actual pixel value. In a color image, however, the relative luminance is a value derived from a weighted average of the color channel values.

An accurate enough empirical formula that describes the relative luminance of a pixel (the Y component of the XYZ colorimetric space) for an RGB image is: $$ Y_{RGB} = 0.2126 R + 0.7152 G + 0.0722 B $$ For color spaces such as RGB (the case of an uncompressed bitmap) that use the ITU-R BT.709 primaries, the relative luminance can be calculated with sufficient precision from these linear RGB components. For the YUV color space, we can still calculate the relative luminance from RGB as: $$ Y_{YUV} = 0.299 R + 0.587 G + 0.114 B $$

I am sharing a tiny C snippet that computes the relative luminance of a given bitmap image. It is built around three functions:

BITMAPINFOHEADER() — The bitmap pixel extraction algorithm reads the bitmap file image header, omits the meta/additional (compression) fields and uses the header to calculate the exact address/location where the bitmap image data starts. Functions from the stdio library are used for bitmap file input, with fseek used to jump to the bitmap header information.

ANALYZE() — The analyze function reads the bitmap image data, starting from the location determined by the previous BITMAPINFOHEADER() function, and dumps the data into a one-dimensional char array.

MAIN() — main uses the one-dimensional array created by the analyze function to calculate the luminance of each pixel using the formula described earlier. The luminance values of all pixels are summed and the total number of pixels is counted; these are then used for the final averaging of the pre-computed weighted values: $Y_{avg}=Y_{tot}/N_{pix}$. A standard stdio function (fprintf) is used to generate the final pixel log. The log file is in a *.csv-style format with additional information about the color values of each pixel of the bitmap image. The end of the *.csv log contains the relative luminance value of the input image.

It should compile on pretty much any vanilla gcc.
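For reference, here is a minimal sketch in the same spirit (not the original snippet from the post): it reads the standard header fields of an uncompressed 24-bit bitmap, walks the padded bottom-up BGR rows and prints the average BT.709-weighted luminance.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

static uint32_t rd32(const unsigned char *p)   /* little-endian 32-bit header field */
{
    return p[0] | (p[1] << 8) | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s image.bmp\n", argv[0]); return 1; }

    FILE *fp = fopen(argv[1], "rb");
    if (!fp) { perror("fopen"); return 1; }

    unsigned char hdr[54];
    if (fread(hdr, 1, 54, fp) != 54 || hdr[0] != 'B' || hdr[1] != 'M') {
        fprintf(stderr, "not a BMP file\n"); fclose(fp); return 1;
    }

    uint32_t data_off = rd32(hdr + 10);            /* offset to the pixel array    */
    int32_t  width    = (int32_t)rd32(hdr + 18);
    int32_t  height   = (int32_t)rd32(hdr + 22);   /* assumed positive (bottom-up) */
    uint16_t bpp      = hdr[28] | (hdr[29] << 8);
    if (bpp != 24) { fprintf(stderr, "only 24 bpp handled here\n"); fclose(fp); return 1; }

    long   row_bytes = ((width * 3 + 3) / 4) * 4;  /* rows are padded to 4 bytes   */
    double y_total   = 0.0;
    unsigned char *row = malloc(row_bytes);

    for (int32_t r = 0; r < height; ++r) {
        fseek(fp, data_off + (long)r * row_bytes, SEEK_SET);
        if (fread(row, 1, row_bytes, fp) != (size_t)row_bytes) break;
        for (int32_t c = 0; c < width; ++c) {      /* pixels are stored as B, G, R */
            unsigned char b = row[c * 3], g = row[c * 3 + 1], rr = row[c * 3 + 2];
            y_total += 0.0722 * b + 0.7152 * g + 0.2126 * rr;
        }
    }
    free(row);
    fclose(fp);

    printf("average relative luminance: %.2f\n", y_total / ((double)width * height));
    return 0;
}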

Date:Sun Jul 3 11:40:29 CET 2016




The new age of research

a vision modulated by feedback from the surrounding world of research

The more you read about Bell Labs, the more you have the feeling that such places have vanished nowadays. But the world keeps rolling out new ideas, so where could the new nests of innovation be?

Between its formal foundation in 1925 and its decay in the 1990s, Bell Labs was the best-funded and most successful corporate research laboratory the world has ever seen. In its heyday, Bell Labs was a premier facility of its type, which led to the discovery and development of a vast range of revolutionary technologies. Being funded by one of the strongest co-governmental subsidiaries — the American Telephone and Telegraph Company (AT&T) — Bell Labs was primarily involved in research towards the improvement of telephone services and communication. Some of these developments found use not only in communications, but also in other fields of science. Although it is pointless to list them all, some notable examples of technologies which found a "dual purpose" are: MASERs – amplifiers for long-distance links; transistors – a generic information technology component; LASERs – originating from long-distance fiber optic links and now serving many other purposes; radio telescopes (cosmic microwave background radiation) – born from noise in satellite communications; the foundation of information theory – all fields of science; TDMA, CDMA, quadrature AM – all basic foundations of communication theory; laser cooling; CCDs – sprung out of attempts to create a solid-state memory; Unix, C, C++ – all used in information processing nowadays; and many, many more...

Bell Labs was a nest which developed brilliant minds, sustaining a tradition of innovation for over 60 years. Unfortunately, the state of the labs now is not what it used to be and currently, if not fully defunct, it crawls at a much slower pace. There are many theories and speculations as to why Bell Labs became defunct, but instead of digging into history demystifying what went wrong, let's focus our lens on the new age.

Heads up! Such nests are still around, it's just that most aren't as big and prolific as they used to be in those days. There have been a couple of shifts in the research world, the main one being that after the end of the Cold War primary research is no longer a tax write-off, which reduced the general monetary stimulus for such institutions. Many of those big conglomerates have since shut down or at best split up, so they work only on areas where their division makes money. It is hard for us to admit it, but a large portion of modern innovation, including primary research, happens in start-ups. In big companies you will still find intrapreneurs and individual hooligans who do what they want, but it's harder to find someone who will pay you to lurk around and do blue-sky R&D all day. Unfortunately, every now and then, closing R&D groups lets CEOs cut a large expense, raise the stock price and get the hell out before the company sinks because it has no R&D.

There are still quite a few governmentally funded labs and institutes, but because there is no strict control on spending and technology deployability, these places end up doing research which is way ahead of its time. So far ahead, in fact, that some of their claimed purposes have not yet even been written about in modern science fiction books. Take the recent growth of interest in quantum computing and photon entanglement. These days it is not rare to see quantum field researchers hoping to get more taxpayer money so they can shoot a single photon into the sky and receive it with a satellite. Such examples of research may sound ridiculous with today's state of technology, but the truth is that Bell and any other successful institute have always spent part of their budget on black-hole research, and that is inevitable.

On the other hand, research in startups (and I am not talking about the numerous bonanza phone app firms) faces huge obstacles before it reaches a somewhat stable state which would allow for further growth. There are plenty of innovative ideas out there that need to be advanced in order to become a reality, but all that time spent hashing out business plans and "evidence-based elevator pitches" (yes, that's what they call them) is totally wasted, and your core idea may turn out to be infeasible because you are already late, or because you can't keep up with venture capitalists or, even worse, startup incubator funding programmes. These obstacles make primary research in this new era somewhat difficult, or rather the word to use is "different". Researchers no longer have to deal only with technical issues; on top of that they need to cope with the dynamics of research whose destiny is highly dependent on quick near-future results. That is to say, to be successful, modern-day research needs highly flexible and broadband personalities with that helicopter view over all aspects of their findings. To achieve this, many successful research groups aim to attract the best and brightest minds, with a diversity of perspectives, skills and personalities – including a mix of some who "think" and many who "do", but all with exceptional ability. Focusing on real-world problems, with the ideal that the bigger the mountain to climb the better, works to some extent, but only if there is sufficient funding over a 5+ year period, as anything less than that builds up pressure. The measure of success has been the hardest to define, but it can primarily be based on the long-term impact of the disruptive innovations achieved.

But it is not only the dynamics of this new economic world that have led to the end of large research centres. The sciences themselves have become more diverse and share less in common. For example, although image sensors and microprocessors could still be grouped under the field of microelectronics and were invented hand-in-hand at big national laboratories, nowadays the two are dramatically different and share little in common.

Consequently — with all of that said — for better or worse, it is likely that most of us will one day end up working in a small group, isolated from our neighbours. Dreaming of huge research facilities never hurts, but researching what else is out there would not hurt you either.

Date:Sun Jul 3 11:40:29 CET 2016
