

Moral principles in engineering

some random thoughts on engineering ethics

Ever had the feeling that all technologies surrounding us are an offspring of the hidden powers of organized crime groups?

In the early 1940s, building on the great discoveries of the 19th century and the momentum of the second industrial revolution, a new era was born - the digital revolution. Back in 1947, in his short story "Little Lost Robot", Isaac Asimov predicted that technology would have advanced sufficiently by century's end to allow humanity to establish a base on an asteroid to perform research on space travel. With the landing of Rosetta last year, Asimov's story, as usual, held true again. The past 100 years have shown that the engineering disciplines, compared to all other fields of science, have probably by far the highest impact on society. The latter is especially true with the emergence of the information age, which gives us one of the most influential instruments for psychological control humanity has ever had. With all that said, engineering science plays an important role in the development of our society, and to a large extent the future of mankind now lies in the hands of engineers.

Now let us teleport back to the early 1900s, the era of the second industrial revolution. With growing demands for easing mankind's life by means of industrialization, engineering established itself as a distinct profession. The erection of a number of large civil engineering constructions was accompanied by a series of significant structural failures, including some spectacular bridge engineering disasters - notably the Tacoma Narrows Bridge collapse and the Ludendorff Bridge disaster. These had a profound effect on engineers and forced the profession to establish technical and construction practices driven by nothing else but the price of human lives. As a measure, ethical standards were developed by an impressive number of world organizations, placing life safety at the highest order. I am tempted to quote the engineer's seven canons here, published in the Code of Ethics established by the American Society of Civil Engineers in 1914.

1. Engineers shall hold paramount the safety, health and welfare of the public and shall strive to comply with the principles of sustainable development in the performance of their professional duties.

2. Engineers shall perform services only in areas of their competence.

3. Engineers shall issue public statements only in an objective and truthful manner.

4. Engineers shall act in professional matters for each employer or client as faithful agents or trustees, and shall avoid conflicts of interest.

5. Engineers shall build their professional reputation on the merit of their services and shall not compete unfairly with others.

6. Engineers shall act in such a manner as to uphold and enhance the honor, integrity, and dignity of the engineering profession and shall act with zero-tolerance for bribery, fraud, and corruption.

7. Engineers shall continue their professional development throughout their careers, and shall provide opportunities for the professional development of those engineers under their supervision.

It is worth mentioning that engineering constantly faces a number of practical trade-offs, which also means that material savings and cost-efficiency are a central feature of standard consumer engineering. When engineering ethics meets the model of modern corporate power, highly exothermic reactions can occur, and as is often seen in nature, the physically strongest monkeys always win the banana fights. Unfortunately, strongest does not always mean wisest; in fact, the contrary is usually the case. It means that engineering practice has so far needed to take care not only of life safety, but also of the corporate wallet - a wallet which vigorously dislikes any kind of drainage. In retrospect, the early established engineering ethics have worked quite well in most cases, even though they are not always obeyed by large enterprises - take the recent case with Volkswagen, for example.

Nevertheless, the consequences of such historical disasters may seem like a piece of cake compared to what the disasters of the future information age could be if appropriate measures are not taken. Not surprisingly, history tells us that mankind may again fall into the same traps, some of which were identified as early as 600 BC with the sudden collapse of the Egyptian civilization. The most drastic and rapid social change that mankind has ever experienced actually took place some three thousand years ago: a rise from primitive barbarism to a posh civilization (in the context of such a far distant era), followed by a complete disaster and a vanishing of identity. This is all just history; however, process recurrence is something normal in nature, at least to the extent that we trust empirical laws such as Zipf's. This historical de-brief should hint that engineering science nowadays faces new challenges, unconditionally different from any past experience.

During the 80s, coining the term cyberspace in his novel "Neuromancer", William Gibson successfully pinpointed the soft threats of the information age - data and identity theft and document forgery. But this was just the beginning, as the power of the Fourth World has reached levels of influence in society beyond anything seen in the past. Due to this rate of influence, technology control should be tightened to a whole new order of magnitude - control which now widely lies in the hands of engineering ethics. Although technology provides us with rich possibilities, it does not mean we should employ them all just because we can. Spying nowadays is a matter of a bit flip - needless to say, builders should not always be putting the donkey where the master says. Instead, it is the engineer's responsibility to act professionally and in line with the principles of sustainable development, no matter what the boss says. If each of us acts with honor, following certain principles, the collapse of the Fourth World would never happen. If misuse can be prevented at the lowest layers of the hierarchy, then it had better be by the scientists and engineers; hence some amendments to our code of ethics are needed - in fact, we have been in need of them for a long time.

Now, I realise that such an argument is going to run into immediate objections: "who are you to say what the code of ethics is?". And on one level, this may be true. However, during my short life experience I have seen a number of engineering ethics misconceptions which incite me to think that something is wrong. Another, less visible, principle delusion which also needs addressing is the modern model of self-induced slavery. What do I mean by that? Example: medical electronics is a field of engineering with the primary aim of making people's lives better; however, nowadays I see the majority of it as a monstrous money-making machine.

How can brain chip implants, pacemakers or disposable endoscopic cameras be sold with nearly a thousand percent (if not much more) profit margins? Products, the majority of which were developed using public money to start with, and which are primarily used in the public sector. All of the above makes what is available today practically inaccessible, or if accessible, not without some form of arm-twisting. This is the second, much less obvious threat of our digital revolution, requiring more emphasis in the philosophy of science. We should not fall into this trap - getting down, surrendering and rendering what we have discovered to the rulership of some large enterprises represented by single identities.

Having control over mankind's way of thinking, sensing and experiencing information is by far the most powerful tool scientists have ever held. Such tools, combined with the caprice of some corporate maggots, can form an explosive substance for humanity. Engineers - always endeavour personal commitment in what you do, amplify your moral principles to the highest order and never fall into someone else's plans!

Date:Sun May 15 13:54:10 CET 2016




The sorta things

A tribute to Jim Williams.

Date:Sat May 8 12:17:49 CET 2016




Applied random walks

Wiener processes and the integration of white noise during the voltage slewing of a current integrated onto a capacitor are something I have been playing around with for some time now. Here is one example showing why hand calculations using wide-sense stationary noise source assumptions are not always accurate. AC noise analysis is a good methodology for estimating noise contributors; however, in some cases it gives a misleading noise picture, and this is where transient noise analysis comes in handy. Here I just want to show an example of how white noise accumulates during a capacitor charge in the time domain. Imagine a noisy current source discharging an initially charged capacitor to a certain value:

An equivalent noisy current source discharging a capacitor. Noisy, because the transistor's SPICE models contain noise models.

The ideal switch charging the cap is controlled via a pulsed source, while the current mirror constantly sinks current at a fixed rate. If we run multiple transient noise simulations of the above schematic, we may discover that the actual ramp looks more like this:

Beginning of random walk on a noisy ramp voltage.

Zoom into the ramp voltage transient noise runs.

The latter comes from the fact that the power spectral density (PSD) of the noisy transistor is uniform (white), with fluctuations equally likely on either side of zero, and that the voltage fluctuations accumulate in time. This process is known in statistics as a Wiener process, or random walk. It is a well-studied phenomenon which is also widely used in stock market analysis and prediction. The difference here is that it appears as noise integration on a capacitor.

The main characteristics of interest in our case, for a generalized Wiener process, are the variance and standard deviation over time. As the PSD is uniform, the generalized Wiener process has:

a mean value of zero: $\mathbb{E}[z(t) - z(0)] = 0$

a variance of: $\mathrm{Var}[z(t) - z(0)] = t$

and a standard deviation of: $\sigma[z(t) - z(0)] = \sqrt{t}$
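Before going to the bench, this square-root-of-time law is easy to check numerically. Below is a minimal Monte Carlo sketch (all values are illustrative, not the actual testbench parameters): a white-noise current on top of a fixed mean current is integrated onto a capacitor over many runs, and the ensemble standard deviation of the ramp is compared with the sqrt(t) prediction.

```python
import numpy as np

# Monte Carlo sketch of white-noise accumulation on a capacitor.
# All parameter values are illustrative, not the actual bench setup.
rng = np.random.default_rng(0)

C = 22e-6        # integration capacitor [F]
I_mean = 100e-6  # mean ramp current [A]
i_n = 2e-6       # per-sample RMS of the white noise current [A]
fs = 1e4         # sample rate of the discrete simulation [Hz]
T = 0.3          # total ramp time [s]
runs = 2000      # number of "transient noise" runs

dt = 1.0 / fs
n = int(T * fs)
t = np.arange(1, n + 1) * dt

# V(t) = (1/C) * integral of (I_mean + noise) dt, done as a cumulative sum
noise = rng.normal(0.0, i_n, size=(runs, n))
v = np.cumsum((I_mean + noise) * dt, axis=1) / C

sigma = v.std(axis=0)                      # ensemble std dev vs time
sigma_pred = (i_n / C) * np.sqrt(t * dt)   # Wiener prediction, grows as sqrt(t)

print(f"sigma at T/4: {sigma[n // 4 - 1]:.3e} V")
print(f"sigma at T  : {sigma[-1]:.3e} V")  # roughly 2x the T/4 value
```

Quadrupling the integration time should double the spread of the ramp endpoints, which is exactly what the two printed values show.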

It is exactly the standard deviation that is of most interest to us. It implies that the longer we ramp, the larger the standard deviation becomes, growing with the square root of time without bound. To verify this experimentally, I made a test case using external components.

Discrete 22uF capacitor with reset transistor in parallel, and a noise current switch.

When we deal with external components it is often hard to measure noise levels on the order of microvolts due to a number of interference factors. Instead, I decided to use an integration capacitor of 22uF and a large, artificially induced noise current. The latter was injected by controlling a switched current source with a white noise source. Here is the whole setup:

Two individual current sources for ramp slew and current noise injection, the latter controlled by individual white-noise and pulse voltage sources. (excuses for the pointless angle)

After a day of experiments with the sampling rate, mean currents, external interference debugging etc., I finally managed to capture the effect and made a combined gif animation. Here it is:

Cumulative animation of the Wiener process with different ramp mean values.

The animated gif shows 15 measurements with the mean integration current swept from 60 to 200uA and a static additive white noise of 2uA (the ramp in the first frame has no added noise). I used a rather large integration capacitor (22uF), which is why I had to boost the added current noise to such a high level. Also note that the ramp time is about 300ms. The screenshots show 8 cumulative curves plotted on top of each other. There might be some aliasing artifacts due to the low sample rate (the scope runs out of memory at higher sampling rates for this long 300ms period); however, I did some measurements with an analog scope, and by looking at the phosphor memory I could confirm that the random walk is indeed random and chaotic.

Ramp measured using an analog scope, some chaotic behaviour is observed in the phosphor memory.

The ramp non-linearity at the high-voltage end is caused by the PNP BJTs I used for the capacitor reset and the switchable current source. There are some second-order effects in this test; however, I think it gives an informal representation of the process. Stay tuned for more during the next couple of weeks.

Date:Sat Apr 26 11:09:41 CET 2016




Chiseling out The Chip!

This work took a while, so I thought that it deserves a few words in the blogs. During the past year or so, I have been working on an image sensor ADC testchip. It was finally taped out yesterday! What's left now is some additional gastronomical work on the tapeout cake and the drainage of a rusty bottle of champagne.

The chip in all its ugly majesty with all these redundant power pads and LVDS pairs.

The core of the testchip is a fast 12-bit column-parallel ramp ADC at 5u pitch, utilizing some special counting schemes to achieve the desired 1us ramp time at slow clock rates. Alongside, to be able to fully verify the pipelined CDS functionality and crosstalk, I've built a pixel array in line-scan configuration, some fast LVDS drivers, clock receivers, references, state machines, a few 8-bit iDACs, bond pads, ESD, and some other array-related stuff, all from scratch! The chip has a horizontal resolution of 1024 pixels and 128 lines with RGBW filters and microlenses.

In the top-left corner there are some experimental silicon photomultipliers and SPAD diodes. These I plan to measure for fun, and I promise to post the results in one of the two blogs.

Unfortunately, this chip won't yield tons of publication work, apart from the core ADC architecture and comparator. To test the ADC one needs a whole bunch of other fast readout blocks, which in the end are not something novel, but one still needs them, and designing them takes time. Finishing up this test system was a lot of work, and I realize that it might be a bit risky and ambitious to be doing this as part of a doctorate. What if it fails to work because a state machine has an inverted signal somewhere? Or the home-made ESD and pads suffer from latch-up? Or the LVDS driver CMFB is unstable and I cannot read data out? Or there is a current spike erasing the contents of the SRAM? Or, or, or?

We university people don't have the corporate power to tape out metal fixes twice a month until we get there. I probably have another two or three chip runs for my whole doctorate. It may therefore be better (and more fun) to stick with small but esoteric modules, which one can verify separately and have the time to analyze in detail. But hey, I'll quote a colleague here: "It is what it is, let's think how we can improve things."

Let's wish good luck with the production and see what we end up with.

Date:Sat Apr 16 13:29:13 CET 2016




Highlights from SPIE Photonics Europe 2016

Advanced Warning! Voorrang! This post is highly non-linear and may cause headache!

Voorrang the barbarian causing headaches as seen in Antwerp, July 2015

Our group decided that we should participate in SPIE Photonics with some of our extra auxiliary work. Hence, I ended up visiting the 5-day conference last week. The topics in this particular issue spanned from metamaterials and nanophotonics, via photonic crystals and devices, all the way to silicon photonics and integrated circuits, semiconductor lasers and quantum technologies. I presented a topic on noise in various CMOS TDI image sensor architectures, which ended up in the photonic integrated circuits session. Weird, you might think, but the CMOS world and the emerging silicon photonic technologies are very, very close. What's currently missing is a good glue between the two fields. Once we start jumping over this gap (maybe a timespan of 5-10 years???), we should hopefully see a boom in cheap, compact optical spectrum analysis, microfluidic manipulation, gas sensing, molecule detection, chip-level molecule separation, multispectral imagers, photoacoustic sensing, terahertz imaging, quantum encryption and who knows what else applied to the real world. Hence, this visit was extremely useful to me - useful in the sense that I managed to get a live insight into the fields neighbouring electronics and imaging. I am offering you some of the impressions I managed to build and remember during these 5 busy days. [off-topic] Here is the place to mention that not all the presentations and presented work at this conference were brilliant. Unfortunately, I saw quite a number of strikingly weak talks, which I will skip here. [/off-topic]

I kicked off the conference by listening to a presentation about "Microstructured (micropyramid) IR hybrid detector design in GaAs", presented by a Slovenian group from Ljubljana. I wrote a few longer-ish comments about this talk, but decided to comment them out (they are still kept in the page source as comments). Nevertheless, here is the place to mention that I learned about black silicon - a semiconductor material formed by aggressive reactive ion etching, engraving tiny needles on the surface of silicon. This causes an increased absorption of light due to the increased Si surface area. Despite the indirect bandgap nature of silicon, much higher IR sensitivity can be achieved, not only due to the increased area but also because of second-order effects such as reduced Fresnel reflection at the surface of the crystal. In the case of the presented work, Fresnel reflections were reduced thanks to the pyramidal shape of the detector. Hmmm, I am trying to make an analogy between the wall geometry of an acoustic anechoic chamber and the photodiode design presented by the Slovenian group, with respect to reflections...

Another intriguing work presented "An integrated SOI tunable polarization controller". The structure was formed by SiN waveguides, which fed light to three polarization rotators and three tuneable polarization phase shifters. By passing current through resistors placed on the waveguides (forming the polarization phase shifters), they are able to precisely tune the wave phase. The light beam after each waveguide is fed to phase-tuneable polarization rotators before being fed out. The group was using three polarization rotators and "heat-able" waveguides. As expected, the step response of such a heat-controlled system is very slow; however, it is still applicable to imaging or coherent optical communication systems.

A group from IMEC presented "Density controlled nanophotonic waveguide gratings for efficient on-chip out-coupling in the near field". The problem they are trying to solve stems from holographic imaging of biological cells. Holographic imaging requires an accurate point light source, which illuminates the cells passing through a transparent microfluidic channel. At the back end of the channel a 2D camera (sensor) collects the scattered interference pattern caused by the bio-cell. Here is a crude sketch of their setup.

Holographic imaging requires an accurate point source

Apparently "standard" waveguides are not very easy to design for a wide range of wavelengths which can still act as accurate point sources. IMEC showed a methodology for creating a point source using a specifically optimized grid of blocking rods in a SiN medium to create pointsource which is accurate and works for a wide wavelenght bandwidth.

A Japanese group presented "High-accuracy absolute distance measurement with a mode-resolved optical frequency comb". This talk somehow lost me completely; however, the whole point of their complex and expensive setup was to measure relatively long distances (~25-50m) with extremely high (micrometre) accuracy. Here is the place to mention that this work does not use the time-of-flight concept. Anyway, I learned some things about optical frequency combs and a methodology for generating a specific spectral fingerprint using light-beam pulse trains in the time domain. Things in wave optics really do repeat very elegantly from RF electronics. The presenter mentioned the chain of terms "vernier light matching spectral counting", which immediately made me make the analogy with vernier-based TDCs.

The next day started with a hot-topics plenary session. John Dudley, the one who coined the IYL initiative, gave an inspiring talk entitled "Lighting the Future of Photonics: the Legacy of the International Year of Light". He reviewed the accomplishments of the IYL and showed the truly remarkable number of educational events triggered by its outreach.

Cesar Misas from the University of Jena presented "Current Challenges and Perspectives in High Power Fiber Lasers". He reviewed the challenges in extracting high power from fiber lasers, the dominant of which seems to be laser mode instability. From the talk I managed to remember the following setup, which made their group capable of squeezing higher power out of the fiber medium.

Electronic feedback stability control of lasing mode in high-power pulsed fiber lasers

By adding an acousto-optic beam steerer (AOB) in front of the laser medium pump, controlled by an electronic feedback loop with the help of a photodiode, one is able to apply corrections to the pump power such that the output is stable and does not hover between two different discrete lasing modes.

Later, at the optical sensing session, Samuel Burri from EPFL presented a SPAD line scanner directly bonded to an FPGA: "LinoSPAD: a time-resolved 256x1 CMOS SPAD line sensor system featuring 64 FPGA-based TDC channels running at up to 8.5 giga-events per second". A SPAD designed in 0.15um standard CMOS was formed into an array configuration with external passive quench resistors. All 256 SPAD outputs were directly bonded out and fed into an FPGA. The latter contained 64 standard counter/phase-difference TDCs and a sophisticated multiplexing and calibration network. Samuel presented nice histograms showing time of arrival, column mismatch and images in the form of 2D plots. My discussions with him hinted that the whole project was entirely a one-man effort, which is quite remarkable.

A group from the University of Trento presented "Pixel-level continuous-time incremental sigma-delta A/D converter for THz sensors". This was essentially a compact 1st-order sigma-delta ADC; however, the application is a rather fancy one. Here is how I remember them showing their THz sensing side:

A THz bowtie antenna, composed of M1/M2 layers, used in an experimental CMOS THz sensor.

Veronique Rochus from IMEC presented "Optical design of planar microlenses for enhanced pixel performance of CMOS imagers". Veronique started by introducing the main commercial microlens fabrication processing steps, identifying their pros and cons, which was right on the spot for me. Later, she presented a novel Fresnel microlens design for improved optical crosstalk, using a three-layer fabrication process. In addition, she presented an improved metamaterial-based Fresnel microlens using only two layers, effectively utilizing the interference of light. The microlenses were tested on a standard CMOSIS CMV4000 sensor; it was also mentioned that further tests with more FSI/BSI sensors are on their way.

William Wardley from KCL displayed their work on "Large-area fabrication and characterization of UV regime metamaterials manufactured using self-assembly techniques". Essentially, the presentation was about a methodology for nanohole fabrication using chemical methods, avoiding expensive e-beam lithography or equivalent, while still maintaining a reasonable nanohole uniformity for the given application. Their nanohole array generation was based on anodization of aluminium and consecutive argon ion etching. They showed SEM images of arrays with separations from 60 to 200nm and a hole radius of about 10nm. One truly cheap method of sub-wavelength nanohole generation.

Brian Pogue from the Thayer School of Engineering presented "Cherenkov imaging for radiation dose and molecular sensing in vivo" - a system developed for in vivo imaging and detection of cancer cells. The presented system employed a gated ICCD working in conjunction with standard CCD imagers, capturing alternating frames. The background lighting of the patients is also controlled by the gating controller, such that it is turned on when the color CCD is taking a frame and turned off when the intensified CCD is capturing the Cherenkov radiation. This allows for live viewing of the scene, as well as the simultaneous capture of the Cherenkov radiation. An interesting fact: the thresholds for Cherenkov radiation in water are > 267 keV for electrons and > 450 MeV for protons.
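Those thresholds follow directly from the Cherenkov condition: the particle must move faster than the phase velocity of light in the medium, beta > 1/n, which gives a threshold Lorentz factor of gamma = n / sqrt(n^2 - 1). A quick sketch (assuming n ~ 1.33 for water; the exact numbers depend on the refractive index used, so they land close to, but not exactly on, the quoted figures):

```python
import math

def cherenkov_threshold_mev(rest_mass_mev: float, n: float = 1.33) -> float:
    """Kinetic energy above which a charged particle emits Cherenkov light.

    Condition: beta > 1/n  =>  gamma_threshold = n / sqrt(n^2 - 1),
    threshold kinetic energy T = m*c^2 * (gamma_threshold - 1).
    """
    gamma = n / math.sqrt(n * n - 1.0)
    return rest_mass_mev * (gamma - 1.0)

# Electron (m = 0.511 MeV/c^2): ~0.26 MeV, close to the quoted ~267 keV
print(cherenkov_threshold_mev(0.511))
# Proton (m = 938.27 MeV/c^2): ~0.48 GeV, same order as the quoted 450 MeV
print(cherenkov_threshold_mev(938.27))
```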

Caeleste celebrated their 10th anniversary by organizing a workshop on the future of scientific and high-end image sensors. A few keynote speakers gave talks emphasizing the past, present and future of these niche imaging fields. The workshop was opened by a speaker from Aphesa whose name I cannot remember right now; the talk, however, I can: CCDinosaurs vs CMOS image sensors. A nice review of, and food for thought about, why the CCDinosaurs still roam the imaging planet and why the very same CCDinosaurs are big and stupid.

Karsten Sengebusch from Eureca gave a technical introduction to dithering in imaging with his presentation entitled "Prediction of the performance and image quality of CMOS image sensors". A large part of his talk looked as if he was trying to convince the audience (or customers?) that they do not really need a 12-bit ADC in their sensors; instead they could do with 8 or even 6 bits with added digital back-end dither. It was interesting for me to hear that Karsten uses dither with a triangular distribution, which he claims provides the best visual performance. Eureca are actively working on implementing real-time video dithering algorithms for their customers, which improve image quality after acquisition. A quick discussion with him about an early TDI dithering idea I had looked at triggered a series of thoughts. Acquiring TDI lines with a low quantization step and analog additive dither (or digital subtractive, or both) might not be a bad idea after all, since the signal is tightly correlated for every line - it is just hidden within the quantization step. If we have a large set of stages, and if high-speed imaging is needed (rather than low noise), dithering could come in handy in relaxing the ADC step requirements.
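The idea of trading ADC bits for dither is easy to illustrate. A minimal sketch of additive triangular-PDF (TPDF) dither in front of a coarse quantizer (the parameters are illustrative, not Eureca's actual algorithm): a DC level hidden inside one code step is lost by the bare quantizer, but averaging many dithered samples recovers it, exactly the situation of correlated TDI lines.

```python
import numpy as np

# Sketch of additive TPDF (triangular) dither before a coarse quantizer.
# Illustrative only; not Eureca's actual processing chain.
rng = np.random.default_rng(1)

lsb = 1.0 / 2**6          # a coarse 6-bit quantizer step
signal = 0.5 + 0.3 * lsb  # a DC level hidden inside one code step

def quantize(x, step):
    # ideal mid-tread quantizer
    return np.round(x / step) * step

n = 10000
plain = quantize(np.full(n, signal), lsb)

# TPDF dither: sum of two uniform variables, 2 LSB peak-to-peak
dither = rng.uniform(-lsb / 2, lsb / 2, n) + rng.uniform(-lsb / 2, lsb / 2, n)
dithered = quantize(signal + dither, lsb)  # additive (non-subtractive) dither

print(plain.mean())     # stuck at the nearest code; sub-LSB info lost
print(dithered.mean())  # averages back toward the true level
```

Without dither, every sample quantizes to the same code and the 0.3 LSB offset is unrecoverable; with TPDF dither, the quantization error is decorrelated from the signal, so the average converges to the true level.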

Benoit Dupont from Pyxalis gave an overview of the digital processors used for dual conversion gain combination in their HDR sensors, as well as for timing generation. The title of the talk was "20-bit image sensors using dual processor architectures". He presented an architecture employing two RISC processors: one responsible for the DCG combination algorithms, tone mapping and lens-shading correction, while the other acted as a simple programmable sequencer for global chip-level timing generation. It was interesting to learn that Pyxalis can also sell the sequencers as an individual IP core to their customers.

Ajit Kumar from Caeleste presented a global shutter DCG pixel for HDR applications ("High Dynamic Range, shot noise limited Imagers with global shutter"). The talk focused on a charge-domain GS pixel with a switchable FD capacitor for DCG operation. He presented some very early results and images from their pixel, which did not suffer from the typical ghosting effects seen in rolling-shutter HDR.

Jan Bosiers from Teledyne gave an overview of "Wafer-scale CMOS Imagers for medical X-ray imaging". He showed 20+ different wafer-scale imagers designed by various groups around the world. An interesting fact: some of these large wafer-scale sensors employ additional dosimetric pixels scattered around the photo array. These dosimetric pixels have a different structure and readout, and the dosimetric information from them is used for calibration of the sensitivity and integration times of both direct and indirect conversion imagers. According to Jan, it might be another 5 to 10 years until organic photoconductive films reach the required resolution and performance of silicon detectors for (e.g.) dental applications.

Gert Finger from the IR instrumentation group at ESO presented "Large format and high speed sub-electron noise sensors for ground-based astronomy" - a mind-blowing talk, showing a huge number of imager projects used at the European Southern Observatory in Chile. Large ground-based telescopes require very sensitive, large-format photoarrays because of the focal planes of the telescopes and the science instruments they use. The materials used in their projects are by far not standard CMOS; the most promising material I managed to remember was HgCdTe working at cryogenic temperatures. Their wavefront sensors also use avalanche multiplication detectors, again far from standard CMOS. He reported typical dark noise levels of 1 electron per 20 minutes - absolutely outstanding! It goes without saying that such noise levels are only achieved by operating at cryogenic temperatures. Gert mentioned that the newest sensors used in their Hawaii telescope arrays have (surprisingly) lower performance than the ones installed back in 2004.

Nick Nelms from the opto-electronics division at ESA gave an insight into ESA's space imaging projects with a talk entitled "High-performance image sensors in space, the shape of things to come". A highlight that caught my attention: ESA is actively trying to reduce the cost of missions (this includes the imaging field). On the other hand, ESA is also actively trying to use European vendors, such that the public money used in these missions is redistributed back into Europe and does not leak out to overseas companies.

At the photonic integrated circuits session, a group from the University of Gent presented "CMOS compatible SiN spectrometer for lab on a chip spectral sensing". The talk reviewed a few key methodologies for spectral analysis on a chip. I managed to take a snapshot of the classification the authors made:

- arrayed waveguide grating using SiN photonic nanowires

- planar concave grating

- on-chip stationary Fourier transform spectrometry

- vibrational spectroscopy

Interestingly, they missed (or I wasn't paying attention) the crude hyperspectral image sensor approach, which uses multiple color filters with different bandwidths.

I also noted a few presentations from the molecular sensors field. Anita Rogacs from HP Labs in Palo Alto presented HP's efforts "Towards sensitive, low-cost, and field-deployable spectral analysis using SERS and flat optical components", and Tatevik Chalyan from the University of Trento presented "Biosensors based on Si3N4 asymmetric Mach-Zehnder interferometers". Both projects aim at the design of a field-deployable milk analyzer, used for early detection of toxins in milk caused by rotten corn/cow feed. There is a large difference between the two approaches and their execution, but I guess that a few PhD students at a university can hardly compete with a dinosaur like HP Labs.

One of the last presentations in the biosensors session I listened to was that of Kristelle Robin from UCL on "Highly sensitive detection using microring resonators and nanopores". It was clear that Kristelle presented solely her own project work, which aimed at designing a sensor that can detect and hold a single molecule in place using an etched nanopore in a ring resonator. Complete measurements and characterization of the chip were presented, showing promising detection results. Apparently this is currently sort-of possible with the help of nanopipettes - a truly scaled version of ordinary pipettes. It was mentioned that these are a pain to work with, which is actually the driving force behind new designs.

To conclude, after a discussion and some arguments with one of my colleagues that "the current scientific generation is a 'consumer' one, and does not produce anything", I would say that SPIE Photonics is actually a very good counterexample, showing that a lot of innovation is happening in the applied physics fields, and this will continue to grow, no matter what.

Lastly, I am leaving my e-notes made during the conference, a crude equivalent to the post tag system which this site still lacks.

- microstructured (micropyramid) IR hybrid detector - responsivity 4 mA/W ??? - black silicon - micropyramid structured IR detector - SOI integrated tuneable polarization controller - thermally controlled thermal inertia effects - low-loss CMOS copper plasmonic waveguides at the nanoscale - russian girl walking in the middle of a talk, then asking a stupid question at the end which the author had explained in the beginning, ends by calling the author stupid... - light waveguides of ~100nm order are hard to manufacture - plasmonics for optical interconnect - the promise of plasmonics (Scientific American) from Dec 2014 - low-loss copper waveguides - plasmonic mode dispersion in nanoscale copper waveguides - density controlled nanophotonic gratings for efficient on-chip out-coupling in the near field - on-chip microfluidic cell sorter - holographic imaging camera (scattered interference pattern) - high-accuracy absolute distance measurement with a mode-resolved optical frequency comb - optical frequency comb made with pulse train in the time domain - spectral interferometry - virtually imaged phase array - vernier light matching special counts - comb distance interferometry for our 50um - frequency comb-based depth imaging assisted by a low-coherence optical interferometer - OFC optical frequency comb - RF comb - Hadamard coding - Ghost imaging - Single pixel camera - Fiber lasers - high power, using feedback frequency lock loop - coherent addition of 4 lasers combined into one beam - Optical design of planar microlenses for enhanced pixel performance of CMOS imagers - metamaterial Fresnel lens - THz antenna MOSFET - RING imaging Cherenkov radiation (RICH radiator) - Transforming electromagnetic reality to enhance Cherenkov radiation - Large area fabrication and characterization of UV regime metamaterials manufactured using self-alignment technologies [9883-19] - nanohole arrays coupling to grating - sub-wavelength nanoholes can act as metamaterial - separation 60nm to 200nm - radius 10nm to separation - nanohole array generation by anodisation and post-etching (argon ion etching) - Plasmonic nano-antennas - single-element plasmonic antenna, high-index dielectric nanoantenna - Optoacoustic imaging, Alexander Graham Bell, Cambridge Vision Labs - Brain mapping project, BRAIN project - Seems like semiconductor + photonics is reserved for electrical eng conferences - Cherenkov imaging for radiation dose and molecular sensing in vivo - Threshold for Cherenkov imaging: >267 keV electrons, >450 MeV protons - Emission properties of Cherenkov - avg peak intensities 100nW/cm2 to 1mW/cm2 - Usage of gated ICCD (intensified CCD), they pulse the background light and integrate the Cherenkov signal when the background light is off - CCDinosaurs roam the imaging planet. CCDinosaurs are big and stupid - Dose sensing pixels within large-area X-ray wafer scale sensors - Astronomical measurements - 1 photon captured every 20 seconds - was at the LAST ATOM (LE DERNIER ATOME) - CMOS compatible SiN spectrometer for lab on a chip spectral sensing - Silicon nitride SiN photonic integrated circuits - Silicon nitride on-chip spectrometer - arrayed waveguide grating SiN photonic wire - Planar concave grating - On-chip stationary Fourier Transform Spectrometer - Vibrational Spectroscopy - Silicon nitride photonic ICs are (sort-of) compatible with CMOS fabs - On-chip fluorescence excitation and collection by focusing grating couplers - Focusing grating couplers - 3D light field camera with microlens array for high resolution PIV
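One trio of tags above (Hadamard coding, ghost imaging, single pixel camera) hides a neat reconstruction trick: a single photodetector records one value per projected Hadamard mask, and the scene is recovered with the inverse transform. A toy sketch with a made-up 4x4 scene; real systems use 0/1 masks with differential measurements, the ±1 masks here just keep the algebra minimal:

```python
import numpy as np

# Toy single-pixel camera with Hadamard coding: project each row of a
# Hadamard matrix as a mask, record one detector value per mask, then
# reconstruct with the inverse transform (H^T / N). Scene is invented.
def hadamard(n):
    """Sylvester construction; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

scene = np.arange(16.0).reshape(4, 4)   # the unknown image
x = scene.ravel()

H = hadamard(16)
measurements = H @ x                    # one scalar per projected mask
recovered = (H.T @ measurements) / 16   # H^T H = 16 I, so this is x

assert np.allclose(recovered.reshape(4, 4), scene)
print("recovered the scene from 16 single-pixel measurements")
```

The same linear-algebra structure underlies computational ghost imaging, which is presumably why the three tags appeared together in the session.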

Date:Sat Apr 09 23:05:17 CET 2016
