"Vendors are responding by increasing sensor spectral range, integrating new capabilities into devices, and adding features such as 3D imaging. The result can be rapid growth in sensors, even in areas that are relatively stable. For instance, the worldwide market for cars is expanding at a relatively modest pace, according to Geoff Ballew, senior director of marketing in the automotive sensing division of chipmaker ON Semiconductor Corp. of Phoenix.
However, tepid growth is not the case for automotive imaging solutions. “The number of sensors consumed and attached to those cars is growing wildly,” he said. “The image sensor chip business is growing in excess of 15 to 20 percent a year. The reason for that is cameras increasingly are adding new functionality to cars.”
Automotive sensors are expected to work from -40 to 125 °C. That interacts with the dynamic range requirement because as the operating temperature rises, so too does the dark current in the sensor. Vendors such as OmniVision must take special care within the manufacturing process to drive that dark current down, thereby expanding the operating temperature and preserving the high dynamic range.
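The temperature dependence is steep: as a rough rule of thumb (not a vendor figure), dark current roughly doubles for every ~7 °C rise. A minimal sketch in Python, assuming that doubling step:

```python
# Rule-of-thumb model (assumption, process-dependent): dark current
# roughly doubles for every ~7 degC rise in operating temperature.
def dark_current(i_ref_e_per_s, t_ref_c, t_c, doubling_step_c=7.0):
    """Estimate dark current (e-/s) at temperature t_c, given a
    reference measurement i_ref_e_per_s taken at t_ref_c."""
    return i_ref_e_per_s * 2.0 ** ((t_c - t_ref_c) / doubling_step_c)

# A pixel with 10 e-/s dark current at 25 degC, evaluated at the
# automotive extremes of -40 and 125 degC:
print(dark_current(10, 25, -40))   # orders of magnitude lower
print(dark_current(10, 25, 125))   # orders of magnitude higher
```

Under this model, going from 25 to 125 °C costs roughly 10 doublings, which is why lowering the baseline dark current directly extends the usable temperature range at a given dynamic range.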
Besides automotive, another area pushing imaging capability is the IoT. Refrigerators, washing machines, and home security systems are adding image sensors for cataloging food, recognizing people, and other tasks. But the IoT brings its own requirements, and they affect sensors, according to Nick Nam, head of emerging markets at OmniVision Technologies.
For instance, power consumption often may need to be minimized, particularly for IoT applications running on batteries.
Depth or 3D sensing is a capability being added to automotive, the IoT, and other applications. There are competing 3D imaging methods, and which is best will differ from one situation to another.
Imaging in the shortwave IR region out to about 2 µm offers improved performance in poor visibility or at night. When combined with capabilities in the visible and UV, the resulting multispectral or hyperspectral imaging can provide important information not obtainable by visible imaging alone. While not new, the hybrid approach offers the advantage that as CMOS technology improves, so can the performance of the sensors. What’s more, the hybrid technique can be extended to other materials, allowing sensors to capture information in the mid- and thermal-IR at 5 or 10 µm, or more."
On a similar note, most of PwC's list of eight emerging technologies rely in one way or another on image sensing: artificial intelligence, augmented reality, blockchain, drones, IoT, robotics, virtual reality, and 3D printing.
San Diego-based image sensor distributor AlliedSens reveals that Brookman 1.3MP BT130C and 2MP BT200C sensors are used in Japanese color night vision cameras, such as this one.
KB ViTA kindly sent me information about their latest LWIR camera that senses polarization:
"It all started when KB ViTA developed a very sensitive thermal imaging module, the VLM640, with a sensitivity of at least 20 mK in the 8–12 µm band. The sensor manufacturer then turned to KB ViTA and offered an engineering sample from an experimental wafer of bolometric detectors with integrated polarization filters. For KB ViTA this was an honor, but at the same time there was no clear understanding of what could ultimately be obtained. The technology, and the very idea of seeing the intrinsic polarization of the thermal photons of the objects around us, is absolutely new, and hardly anyone has experience processing such information.
Below we will show you how the polarization in the IR spectrum looks.
The setup consisted of a polarizing sensor and the electronics from the VLM640 camera with 20 mK sensitivity. The interesting thing about the sensor is that each pixel in a group of four is covered with its own polarizer, and the polarization of each filter differs by 45 deg. As a result, the polarization angles are 0–180 deg, 45–225 deg, 90–270 deg, and 135–315 deg.
In the resulting videos, there are three images (from left to right): video from a conventional thermal imager, the reconstructed polarization angles, and a composite image where the brightness is the thermal radiation and the color is the polarization angle."
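The four-angle scheme above maps directly onto the linear Stokes parameters. A minimal sketch (my own illustration, not KB ViTA's actual processing) of recovering the degree and angle of linear polarization from one 2x2 pixel group:

```python
import math

def polarization_from_quad(i0, i45, i90, i135):
    """Reconstruct linear Stokes parameters from a 2x2 pixel group
    whose polarizers are oriented at 0/45/90/135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # 0/90 deg component
    s2 = i45 - i135                      # 45/135 deg component
    dolp = math.hypot(s1, s2) / s0 if s0 > 0 else 0.0  # degree of linear polarization
    aolp = 0.5 * math.degrees(math.atan2(s2, s1))      # angle of linear polarization
    return s0, dolp, aolp

# Fully polarized light at 0 deg: all signal passes the 0-deg filter,
# half passes the 45/135-deg filters, none passes the 90-deg filter.
s0, dolp, aolp = polarization_from_quad(100, 50, 0, 50)
print(dolp, aolp)  # 1.0 0.0
```

The 180-degree ambiguity in the angle (0 vs. 180, 45 vs. 225, etc.) is inherent to linear polarization, which is why the filter angles pair up as listed above.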
The example videos below show a light bulb and a painted box:
"So far, KB ViTA can say that polarization shows the quality of an object's surface. There are assumptions (based on conversations with the detector manufacturer, with colleagues at exhibitions, and on the very scarce information available on the Web) that the effect of evaluating the polarization of radiating and reflecting objects can be used in the following areas:
Distinguishing an object's own radiation from reflections (for example, a warm car versus sun glare on a puddle or sand).
Search for camouflaged objects.
Detection of oil stains on the water surface.
Defect search.
3D scanning.
Detection of warm objects on a water surface (a drowning person, for instance), distinguishing their own radiation from light reflected off the water."
Science Magazine publishes a paper on wideband light sensing device "Ultrabroadband photosensitivity from visible to terahertz at room temperature" by Dong Wu, Yongchang Ma, Yingying Niu, Qiaomei Liu, Tao Dong, Sijie Zhang, Jiasen Niu, Huibin Zhou, Jian Wei, Yingxin Wang, Ziran Zhao, and Nanlin Wang from Peking University, Tianjin University of Technology, and Tsinghua University, China.
"Charge density wave (CDW) is one of the most fundamental quantum phenomena in solids. Different from ordinary metals in which only single-particle excitations exist, CDW also has collective excitations and can carry electric current in a collective fashion. Manipulating this collective condensation for applications has long been a goal in the condensed matter and materials community. We show that the CDW system of 1T-TaS2 is highly sensitive to light directly from visible down to terahertz, with current responsivities on the order of ~1 AW−1 at room temperature. Our findings open a new avenue for realizing uncooled, ultrabroadband, and sensitive photoelectronics continuously down to the terahertz spectral range."
Everything is great about the new photodetection mechanism except the dark current, which is about 15 orders of magnitude higher than in a Si photodiode:
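To see what such a dark current penalty means, one can compare the dark-current shot-noise-limited NEP of the two cases. A rough sketch with illustrative dark current values (assumptions for scale, not numbers from the paper):

```python
import math

Q = 1.602e-19  # electron charge, C

def shot_noise_nep(dark_current_a, responsivity_a_per_w):
    """Dark-current shot-noise-limited NEP in W/sqrt(Hz)."""
    return math.sqrt(2 * Q * dark_current_a) / responsivity_a_per_w

# Illustrative: a Si photodiode with ~1 pA dark current vs. a detector
# whose dark current is 15 orders of magnitude higher, both at ~1 A/W.
print(shot_noise_nep(1e-12, 1.0))          # Si-like baseline
print(shot_noise_nep(1e-12 * 1e15, 1.0))   # ~3e7x worse NEP
```

Since shot noise scales as the square root of the dark current, 15 orders of magnitude in current translate to roughly 7.5 orders of magnitude in noise-equivalent power at the same responsivity.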
"The Columbia Engineering researchers have created the first flat lens capable of correctly focusing a large range of wavelengths of any polarization to the same focal point without the need for any additional elements. At just 1µm thick, their revolutionary "flat" lens offers performance comparable to top-of-the-line compound lens systems.
The team’s next challenge is to improve these lenses' efficiency. The flat lenses currently are not optimal because a small fraction of the incident optical power is either reflected by the flat lens, or scattered into unwanted directions. The team is optimistic that the issue of efficiency is not fundamental, and they are busy inventing new design strategies to address the efficiency problem."
Globenewswire: Atomera licenses its Mears Silicon Technology (MST) to ST. MST is an additional non-silicon implant that reduces transistor variability at a given processing node. It also reduces 1/f noise thanks to the elimination of the halo implant (also called a pocket implant by some fabs).
Why is this relevant for image sensors? First, if the Sony pixel-parallel ADC presented at ISSCC 2018 gains market traction, reducing the mismatch between transistors might become more important to suppress pixel-level FPN coming from the multiple ADCs. Second, reducing SF gain variations across the pixel array might reduce PRNU in regular image sensors. And although the 1/f noise improvement might not directly affect pixel transistors, which for the most part do not have a halo implant anyway, other parts of the image sensor could still benefit from it.
Espros' September 2018 Newsletter announces that the company has signed a contract with Hypersen Technologies (Shenzhen) Co. for mass delivery of its epc635 ToF imagers. The epc635 sensors power Hypersen's recently launched solid-state LiDAR (HPS-3D series). Other Espros partners in China are Benewake (Beijing) and Shanghai Data Miracle Co.
Another notable quote from the Newsletter:
"By the way, the most dominant cost drivers in TOF cameras are the receiver lens and the illumination. The TOF camera chip typically ranks as third. Hence a very sensitive TOF imager allows cost reduction due to less spending on illumination. What's more, the camera will not heat up as much, increasing lifetime and reducing power consumption."
Update: The post has been corrected per additional explanations from Espros. Espros is supplying the ToF sensors to Hypersen, not outsourcing production.
Ars Technica: Following a complaint by Eric Swildens, the USPTO has rejected all but three of 56 claims in Waymo's US9368936 patent. The USPTO found that some claims replicated technology described in an earlier patent from Velodyne, while another claim was simply "impossible" and "magic."
"The patent shouldn't have been filed in the first place," Swildens said. "It's a very well written patent. However, my personal belief is that the thing that they say they invented, they didn't invent."
The '936 patent played a key role in last year's lawsuit with Uber. In December 2016, a Waymo engineer was inadvertently copied on an email from one of its suppliers to Uber, showing a LiDAR circuit design that looked almost identical to the one shown in the '936 patent:
Swildens told Wired in 2017: "I couldn't imagine the circuit didn't exist prior to this patent." He then spent $6,000 of his own money to launch a formal challenge to the '936 patent, and won.
Teledyne e2v announces that samples are now available for Emerald 8M9, the newest member of the Emerald CMOS sensor family dedicated to machine vision and Intelligent Traffic System (ITS) applications.
Emerald 8M9 features a 2.8µm global shutter pixel and provides an 8.9MP resolution in a 2/3-inch optical format. The sensor is available in two speed grades: a standard speed model (47fps @10bits) and a high speed model (107fps @10bits). The new sensor has a readout noise of 2.8e- combined with 65% QE.
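Those speed grades translate into substantial raw output bandwidth. A quick back-of-the-envelope calculation (uncompressed pixel data only, ignoring blanking and protocol overhead):

```python
def raw_data_rate_gbps(mpix, fps, bits_per_pixel):
    """Approximate uncompressed sensor output rate in Gbit/s."""
    return mpix * 1e6 * fps * bits_per_pixel / 1e9

# Emerald 8M9 at its two speed grades, 10-bit output:
print(raw_data_rate_gbps(8.9, 47, 10))   # standard speed, ~4.2 Gbit/s
print(raw_data_rate_gbps(8.9, 107, 10))  # high speed, ~9.5 Gbit/s
```

The high-speed grade thus needs on the order of 10 Gbit/s of interface bandwidth, which is why multi-lane high-speed outputs are standard on sensors in this class.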
Vincent Richard, Marketing Manager at Teledyne e2v, said “Emerald 8M9 is designed specifically to address the demands of machine vision, high resolution surveillance and traffic intelligence. The sensor is unmatched in the industry because of its versatile feature set. For example, real-time High Dynamic Range mode allows high resolution capture of fast moving situations from daylight to night-time with minimum artefacts and blur effects.“
Samples and demo kits are now available and mass production is planned for Q1 2019.
Wired, Verge, Axios: Mountain View, CA-based automotive LiDAR startup Aeva demos its coherent LiDAR prototype and announces a $45M Series A financing round. The company was founded by two ex-Apple engineers in 2017. Not much is said about the technology side:
The LiDAR is able to measure Doppler shift with a few cm/s accuracy
Its range is 200m
The power consumption is less than 100W
No mechanical scanning
Also has a camera functionality
Costs in the range of a few hundred dollars
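The cm/s-level velocity accuracy is the hallmark of a coherent (FMCW) architecture: the two-way Doppler shift is f_d = 2v/λ. A small sketch, assuming a 1550 nm source (a common choice for coherent lidar; Aeva's actual wavelength is not disclosed):

```python
def doppler_shift_hz(velocity_m_s, wavelength_m):
    """Two-way Doppler shift seen by a coherent (FMCW) lidar."""
    return 2.0 * velocity_m_s / wavelength_m

# A few cm/s of radial velocity at an assumed 1550 nm wavelength
# maps to tens of kHz of beat-frequency shift:
print(doppler_shift_hz(0.05, 1550e-9))  # ~64.5 kHz
```

Tens of kHz are easily resolvable in the beat spectrum, which is why coherent lidars can report per-point velocity directly rather than inferring it from frame-to-frame differences.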
“We’re focused on delivering things now,” says Aeva cofounder Mina Rezk. “This is an architecture that we put together, that we know we can manufacture.” That means no exotic materials and using components that are well established and easy to acquire.
"We propose the use of germanium-on-silicon technology for indirect time-of-flight depth sensing as well as three-dimensional imaging applications, and demonstrate a novel pixel featuring a high quantum efficiency and a large frequency bandwidth. Compared to conventional silicon pixels, our germanium-on-silicon pixels simultaneously maintain a high quantum efficiency and a high demodulation contrast deep into GHz frequency regime, which enable consistently superior depth accuracy in both indoor and outdoor scenarios. Device simulation, system performance comparison, and electrical/optical characterization of the fabricated pixels are presented. Our work paves a new path to high-performance time-of-flight sensors and imagers, as well as potential adoptions of eye-safe lasers (wavelengths > 1.4um) that fall outside of the operation window of conventional silicon pixels."
"These results might be surprising as the dark current of the Ge-on-Si pixel is set to be several orders of magnitude larger than that of the Si pixel (nearly no changes to Fig. 3(a) and 3(b) even if a lower Si pixel dark current is set). The reason lies in that, in an indirect TOF system, the dominant system noise is in fact due to the indoor/outdoor ambient light and the laser light instead of the dark current for various depth sensing and 3D imaging applications."
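For context, a continuous-wave indirect ToF pixel recovers depth from the phase of the demodulated signal; one common four-sample convention (details and sign conventions vary between sensors) looks like:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_depth(q0, q90, q180, q270, f_mod_hz):
    """Depth from four phase-shifted correlation samples of a
    continuous-wave indirect ToF pixel (one common convention)."""
    phase = math.atan2(q90 - q270, q0 - q180) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod_hz)

# Synthetic samples for a quarter-cycle phase delay at 300 MHz
# modulation; pushing f_mod into the GHz regime (as the Ge-on-Si
# pixels allow) shrinks the unambiguous range, c / (2 * f_mod),
# but improves depth precision at fixed phase noise:
print(itof_depth(0.0, 1.0, 0.0, -1.0, 300e6))  # ~0.125 m
```

Since depth error scales inversely with modulation frequency for a given phase error, maintaining demodulation contrast at GHz frequencies is what buys the "consistently superior depth accuracy" claimed above.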
The company patent applications, such as US20170040362, show the proposed pixel structures:
IEDM 2018, to be held on Dec. 1-5 in San Francisco, publishes its list of accepted papers, including some interesting image sensor work:
High Performance 2.5um Global Shutter Pixel with New Designed Light-Pipe Structure, by Toshifumi Yokoyama, TowerJazz Panasonic Semiconductor Co.
Back-Illuminated 2.74 μm-Pixel-Pitch Global Shutter CMOS Image Sensor with Charge-Domain Memory Achieving 10k e- Saturation Signal, by Yoshimichi Kumagai, Sony Semiconductor
A 0.68e-rms Random-Noise 121dB Dynamic-Range Sub-pixel Architecture CMOS Image Sensor with LED Flicker Mitigation, by Satoko Iida, Sony Semiconductor
A 24.3Me- Full Well Capacity CMOS Image Sensor with Lateral Overflow Integration Trench Capacitor for High Precision Near Infrared Absorption Imaging, by Maasa Murata, Tohoku University
A HDR 98dB 3.2µm Charge Domain Global Shutter CMOS Image Sensor, by Arnaud Tournier, STMicroelectronics
1.5µm Dual Conversion Gain, Backside Illuminated Image Sensor Using Stacked Pixel Level Connections with 13ke- Full-Well Capacitance and 0.8e- Noise, by Vincent Venezia, OmniVision
Through-Silicon-Trench in Back-Side-Illuminated CMOS Image Sensors for the Improvement of Gate Oxide Long Term Performance, by Andrea Vici, La Sapienza University of Rome
High-Performance Germanium-on-Silicon Lock-in Pixels for Indirect Time-of-Flight Applications, by Neil Na, Artilux Inc.
High Voltage Generation Using Deep Trench Isolated Photodiodes in a Back Side Illuminated Process, by Filip Kaklin, The University of Edinburgh
CMOS-Integrated Single-Photon-Counting X-Ray Detector Using an Amorphous-Selenium Photoconductor with 11×11-μm2 Pixels, by Ahmet Camlica, University of Waterloo
Sony officially unveils the IMX250MZR / MYR polarization sensitive image sensors. The monochrome IMX250MZR version is available now, while the color IMX250MYR is expected to be available in December 2018. The 5.07MP sensors use a 3.45µm global shutter pixel with a four-directional polarizer formed on the photodiode:
"With conventional types of polarization sensors, the polarizer is attached on top of the on-chip lens layer (Fig.2), however with Sony Semiconductor Solutions’ polarization sensor the polarizer is formed on chip under the on-chip lens layer (Fig.3). A shorter distance between the polarizer and the photodiode improves the extinction ratio and the incident angle dependence. Since the polarizer is formed during the semiconductor process, form and formulation of polarizer, uniformity, mass productivity and durability are excellent compared to conventional polarization sensors. Furthermore, Sony Semiconductor Solutions’ Polarization sensor is covered with an anti-reflection layer which helps to reduce reflectance and avoids poor flare and ghost characteristics.
By creating a unique air gap structure in the polarizer, it enables excellent polarization properties and sensitivity in a broad band from visible to near infrared. It also has an advantage of excellent image quality in various light source environments by introducing the world's first anti-reflection layer to reduce flare and ghost for polarization sensor."
The company shows a number of examples where polarization imaging has advantages:
"The full well capacity of photodiodes in image sensors generally determines the exposure time, and may also affect the signal to noise ratio and/or the dynamic range of the image sensor. In some embodiments, the full well capacity of the photodiodes in the image sensor may be dynamically adjusted to allow longer exposure times, reducing blooming artifacts in captured images, and to increase the dynamic range of the image sensor. In one embodiment, the image sensor may transfer charge from the photodiode to a storage node one or more times during integration (e.g., the exposure time frame). Transferring the charge during integration may allow the full well capacity of the photodiode to be increased beyond the hardware imposed well capacity. Additionally, because the full well capacity may be varied without changing the hardware of the image sensor, the full well capacity can be dynamically varied allowing the image sensor to adjust to different lighting conditions, image capturing settings (e.g., video or still photographs), as well as allow a user to adjust the exposure time as desired without increasing blooming artifacts.
The storage node 702 may have an increased capacitance to accommodate multiple charge transfers from the photodiode 154. For example, the storage node 702 may be sufficiently large to accommodate double (or more) the capacity of the photodiode 154. This allows the storage node 702 to store charge from multiple charge transfers from the photodiode 154 as the integration time of the photodiode 154 is increased over the hardware implemented full well capacity."
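The mechanism can be illustrated with a toy model (my own sketch, not code from the patent): split the integration into N slices, clip each at the photodiode's hardware full well, and accumulate the transfers on the larger storage node.

```python
def integrate_with_transfers(photon_rate_e_per_s, exposure_s,
                             pd_full_well_e, n_transfers):
    """Toy model of extending effective full well by transferring
    the photodiode charge to a larger storage node n times during
    a single integration period."""
    accumulated = 0.0
    for _ in range(n_transfers):
        # The photodiode integrates for one slice of the exposure,
        # clipping at its hardware full well.
        slice_charge = min(photon_rate_e_per_s * exposure_s / n_transfers,
                           pd_full_well_e)
        accumulated += slice_charge  # moved to the storage node
    return accumulated

# 10k e- photodiode, signal that would generate 30k e- over the full
# exposure: one transfer clips at the hardware full well, four do not.
print(integrate_with_transfers(30_000, 1.0, 10_000, 1))  # 10000.0
print(integrate_with_transfers(30_000, 1.0, 10_000, 4))  # 30000.0
```

This is why the patent sizes the storage node at a multiple of the photodiode capacity: the effective full well becomes roughly N times the hardware one, tunable per scene without any hardware change.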
Panasonic is developing a 3D ToF image sensor for AI-based advanced driver monitoring systems. The ToF image sensor has a 5.6um pixel optimized for high sensitivity at the 940nm wavelength to operate under bright sunlight. The sensor is AEC-Q100 Grade 2 compliant and is designed for next-generation in-cabin applications such as gesture-based user interfaces and driver monitoring.
The company's ToF web page also shows smartphone and industrial applications for its sensor:
Pioneer announces that from late September, the company will sequentially start shipping three types and four models of 3D-LiDAR sensors that adopt the MEMS mirror method and differ in measurement distance: “Telescopic LiDAR”, “Medium-range LiDAR”, and “Short-range LiDAR.” Pioneer is developing a high-performance, compact, lower-priced 3D-LiDAR based on a MEMS mirror, aiming for mass production in the 2020s. Pioneer provided its first samples for testing to companies in Japan and overseas in September 2017.
In response to diverse customer needs, Pioneer is also developing the wobbling scanning method of “Wide-view LiDAR” in addition to the raster scanning method.
RED introduces a camera for shooting immersive 360-degree VR content, featuring 16 fisheye cameras, each with an 8K 60fps Super 35mm Helium sensor (active area 29.9 x 15.77mm2, 3.65um pixels).