Tuesday, July 31, 2018

Mantis Vision Acquires Alces Technology

Globes: Israeli 3D sensing company Mantis Vision completes the acquisition of Jackson Hole, Utah-based Alces Technology. The newspaper's sources suggest the acquisition was for about $10m. Earlier this month Mantis Vision raised $55m bringing its total investment to $84m. Alces is a depth sensing startup developing a high-resolution structured light technology.

Mantis Vision CEO Gur Bitan says, "Mantis Vision is leading innovation in 3D acquisition and sharing. Alces is a great match and we look forward to bringing their innovations to market. Alces will be rebranded Mantis Vision, Inc. and operate as an R&D center and serve as a base for commercial expansion in the US."

Alces CEO Rob Christensen says, “Our combined knowledge in hardware and optics, along with Mantis Vision’s expertise in algorithms and applications, will enable an exciting new class of products employing high-performance depth sensing.”


Here is how Alces used to compare its technology with Microsoft Kinect:

Samsung Expects 10% of Smartphones to Have Triple Camera Next Year

SeekingAlpha: Samsung reports an increase in image sensor sales in Q2 2018 and a strong forecast both for its own image sensors and for the image sensors manufactured at Samsung's foundry. Regarding the triple camera trend, Samsung says:

"First of all, regarding the triple camera, the triple camera offers various advantages, such as optical zoom or ultra-wide angle, also extreme low-light imaging. And that's why we're expecting more and more handsets to adopt triple cameras not only in 2018 but next year as well.

But by next year, about 10% of handsets are expected to have triple cameras. And triple camera adoption will continue to grow even after that point. Given this market outlook, actually, we've already completed quite a wide range of sensor line-up that can support the key features, such as optical zoom, ultra-wide viewing angle, bokeh and video support so that we're able to supply image sensors upon demand by customers.

At the same time, we will continue to develop higher performance image sensors that would be able to implement even more differentiating and sophisticated features based on triple camera.

To answer your second part of your question about capacity plan from the Foundry business side, given the expected increase of sensor demand going forward, we are planning additional investments to convert Line 11 from Hwaseong DRAM to image sensors with the target of going into mass production during first half of 2019. The actual size of that capacity will be flexibly managed depending on the customers' demand.
"

Monday, July 30, 2018

4th International Workshop on Image Sensors and Imaging Systems (IWISS2018)

4th International Workshop on Image Sensors and Imaging Systems (IWISS2018) is to be held on November 28-29 at Tokyo Institute of Technology, Japan. The invited and plenary part of the Workshop program has many interesting presentations:

  • [Plenary] Time-of-flight single-photon avalanche diode imagers by Franco Zappa (Politecnico di Milano (POLIMI), Italy)
  • [Invited] Light transport measurement using ToF camera by Yasuhiro Mukaigawa (Nara Institute of Science and Technology, Japan)
  • [Invited] A high-speed, high-sensitivity, large aperture avalanche image intensifier panel by Yasunobu Arikawa, Ryosuke Mizutani, Yuki Abe, Shohei Sakata, Jo. Nishibata, Akifumi Yogo, Mitsuo Nakai, Hiroyuki Shiraga, Hiroaki Nishimura, Shinsuke Fujioka, Ryosuke Kodama (Osaka Univ., Japan)
  • [Invited] A back-illuminated global-shutter CMOS image sensor with pixel-parallel 14b subthreshold ADC by Shin Sakai, Masaki Sakakibara, Tsukasa Miura, Hirotsugu Takahashi, Tadayuki Taura, and Yusuke Oike (Sony Semiconductor Solutions, Japan)
  • [Invited] RTS noise characterization and suppression for advanced CMOS image sensors (tentative) by Rihito Kuroda, Akinobu Teranobu, and Shigetoshi Sugawa (Tohoku Univ., Japan)
  • [Invited] Snapshot multispectral imaging using a filter array (tentative) by Kazuma Shinoda (Utsunomiya Univ., Japan)
  • [Invited] Multiband imaging and optical spectroscopic sensing for digital agriculture (tentative) by Takaharu Kameoka, Atsushi Hashimoto (Mie Univ., Japan), Kazuki Kobayashi (Shinshu Univ., Japan), Keiichiro Kagawa (Shizuoka Univ., Japan), Masayuki Hirafuji (UTokyo, Japan), and Jun Tanida (Osaka Univ., Japan)
  • [Invited] Humanistic intelligence system by Hoi-Jun Yoo (KAIST, Korea)
  • [Invited] Lensless fluorescence microscope by Kiyotaka Sasagawa, Ayaka Kimura, Yasumi Ohta, Makito Haruta, Toshihiko Noda, Takashi Tokuda, and Jun Ohta (Nara Institute of Science and Technology, Japan)
  • [Invited] Medical imaging with multi-tap CMOS image sensors by Keiichiro Kagawa, Keita Yasutomi, and Shoji Kawahito (Shizuoka Univ., Japan)
  • [Invited] Image processing for personalized reality by Kiyoshi Kiyokawa (Nara Institute of Science and Technology, Japan)
  • [Invited] Pixel aperture technique for 3-dimensional imaging (tentative) by Jang-Kyoo Shin, Byoung-Soo Choi, Jimin Lee (Kyungpook National Univ., Korea), Seunghyuk Chang, Jong-Ho Park, and Sang-Jin Lee (KAIST, Korea)
  • [Invited] Computational photography using programmable sensor by Hajime Nagahara, (Osaka Univ., Japan)
  • [Invited] Image sensing for human-computer interaction by Takashi Komuro (Saitama Univ., Japan)

Now that the invited and plenary presentations are announced, IWISS2018 calls for posters:

"We are accepting approximately 20 poster papers. Submission of papers for the poster presentation starts in July, and the deadline is October 5, 2018. Awards will be given to selected excellent papers presented by ITE members. We encourage everyone to submit their latest original work. Every participant needs to register by November 9, 2018. On-site registration is NOT accepted. Only the poster session is an open session organized by ITE."

Thanks to KK for the link to the announcement!

ON Semi Renames Image Sensor Group, Reports Q2 Results

ON Semi renames its Image Sensor Group to "Intelligent Sensing Group," suggesting that other businesses might be added to it in search of revenue growth:


The company reports:

"During the second quarter, we saw strong demand for our image sensors for ADAS applications. Our traction in ADAS image sensors continues to accelerate. With a complete line of image sensors, including 1, 2, and 8 Megapixels, we are the only provider of a complete range of pixel densities on a single platform for next-generation ADAS and autonomous driving applications. We believe that a complete line of image sensors on a single platform provides us with a significant competitive advantage, and we continue working to extend our technology lead over our competitors.

As we have indicated earlier, according to independent research firms, ON Semiconductor is the leader in image sensors for industrial applications. We continue to leverage our expertise in the automotive market to address the most demanding applications in the industrial and machine vision markets. Both of these markets are driven by artificial intelligence and face similar challenges, such as low-light conditions, high dynamic range, and harsh operating environments.
"

Sunday, July 29, 2018

Cepton to Integrate its LiDAR into Koito Headlights

BusinessWire: Cepton, a developer of 3D LiDAR based on a micro-motion scanner, announces it will provide Koito with its miniaturized LiDAR solution for autonomous driving. The compact design of Cepton’s LiDAR sensors enables direct integration into a vehicle’s lighting system. Its Micro-Motion Technology (MMT) platform is said to be free of mechanical rotation and frictional wear, producing high-resolution imaging of a vehicle’s surroundings to detect objects at distances of up to 300 meters.

“We are excited to bring advanced LiDAR technology to vehicles to improve safety and reliability,” said Jun Pei, CEO and co-founder of Cepton. “With the verification of our LiDAR technology, we hope to advance the goals of Koito, a global leader within the automotive lighting industry producing over 20 percent of headlights globally and supplying 60 percent of Japanese OEM vehicles.”

Before Cepton, Koito cooperated with Quanergy, which made similar claims a year ago. Cepton's technology is based on mechanical scanning, a step away from Quanergy's optical phased array scanning.

Cepton ToF scanning solution is presented in a number of patent applications. 110a,b are the laser sources, while 160a,b are the ToF photodetectors:

Saturday, July 28, 2018

SensibleVision Disagrees with Microsoft Proposal of Facial Recognition Regulation

BusinessWire: SensibleVision, a developer of 3D face authentication solutions, criticized Microsoft President Brad Smith's call for government regulation of facial recognition technology:

“Why would Smith single out this one technology for external oversight and not all biometrics methods?” asks George Brostoff, CEO and Co-Founder of SensibleVision. “In fact, unlike fingerprints or iris scans, a person's face is always in view and public. I would suggest it’s the use cases, ownership and storage of biometric data (in industry parlance “templates”) that are critical and should be considered for regulation. Partnerships between private companies and the public sector have always been key to the successful adoption of innovative technologies. We look forward to contributing to this broader discussion.”

Column-Parallel ADC Architectures Comparison

The Japanese IEICE Transactions on Electronics publishes Shoji Kawahito's paper "Column-Parallel ADCs for CMOS Image Sensors and Their FoM-Based Evaluations."

"The defined FoMs are applied to surveyed data on reported CISs, and the following conclusions are obtained:
- The performance of CISs should be evaluated with different metrics for high pixel-rate regions (≳1000 MHz) than for low or middle pixel-rate regions.
- The conventional (commonly-used) FoM, calculated as (noise) × (power) / (pixel rate), is useful for observing the overall trend of the performance frontline of CISs.
- The FoM calculated as (noise)² × (power) / (pixel rate), which considers a model of thermal noise and digital system noise, well explains the frontline technologies separately in the low/middle and high pixel-rate regions.
- The FoM calculated as (noise) × (power) / (intrascene dynamic range) / (pixel rate) well explains the effectiveness of the recently-reported techniques for extending dynamic range.
- The FoM calculated as (noise) × (power) / (gray-scale range) / (pixel rate) is useful for evaluating the value of high gray-scale resolution; cyclic-based and delta-sigma ADCs are on the frontline for the high and low pixel-rate regions, respectively.
"
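For illustration, the first two FoMs above can be computed as follows. The numbers are hypothetical, and the units and normalization here are a simple sketch; the paper's own conventions may differ:

```python
def fom_conventional(noise_e, power_w, pixel_rate_hz):
    """Commonly-used FoM: (noise) x (power) / (pixel rate)."""
    return noise_e * power_w / pixel_rate_hz

def fom_noise_squared(noise_e, power_w, pixel_rate_hz):
    """FoM weighting thermal/digital system noise: (noise)^2 x (power) / (pixel rate)."""
    return noise_e ** 2 * power_w / pixel_rate_hz

# Hypothetical sensor: 2 e- read noise, 0.5 W, 1 Gpixel/s
print(fom_conventional(2.0, 0.5, 1e9))    # 1e-09
print(fom_noise_squared(2.0, 0.5, 1e9))   # 2e-09
```

Note how the squared-noise variant penalizes read noise more strongly, which is why it better separates frontline low-noise designs, as the paper argues.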

Friday, July 27, 2018

TowerJazz CIS Update

SeekingAlpha publishes TowerJazz Q2 2018 earnings call transcript with an update on its CIS technology progress:

"We had announced the new 25 megapixel sensor using our state-of-the-art and record smallest 2.5 micron global shutter pixels with Gpixel, a leading industrial sensor provider in China.

The product is achieving very high traction in the market, with samples having been delivered to major end customers. Another leading provider in this market, who has worked with us for many years, will soon release a new global shutter sensor based on the same platform. Both of the above-mentioned sensors are the first in families of sensors with different pixel-count resolutions for each of those customers' next-generation industrial sensor offerings, ranging from 1 megapixel to above 100 megapixels.

We expect this global shutter platform, with its outstanding performance, based on our 65-nanometer 300-millimeter wafers, to drive high volumes in 2019 and the years following. We see this as a key revenue driver from our industrial sensor customers. In parallel, e2v is ramping to production with its very successful Emerald sensor family on our 110-nanometer global shutter platform, using a state-of-the-art 2.8 micron pixel with best-in-class shutter efficiency and noise level performance. We recently released our 200-millimeter backside illumination process to selected customers.

We are working with them on new products based on this technology, as well as on upgrading existing products from our front side illumination version to a BSI version, increasing the quantum efficiency of the pixels by using BSI, especially in the near-IR regime for the industrial and surveillance markets, and enabling our customers to improve the performance of their existing products as a bridge to the next-generation family of sensors on our advanced 300-millimeter platform.

In the medical X-ray market, we are continually gaining momentum and are working with several market leaders on large-panel dental and medical CMOS detectors based on our one-die-per-wafer sensor technology, using our well-established and high-margin stitching with best-in-class high dynamic range pixels, providing customers with extreme value creation and high yield in both 200-millimeter and 300-millimeter wafer technology.

We presently have a strong business with market leadership in this segment and expect substantial growth in 2019 on 200-millimeter with 300 millimeter initial qualifications that will drive an incremental growth over the next multiple years.

For mid- to long-term accretive market growth, we are progressing well with a leading DSLR camera supplier and have also begun a second project with this customer, using state-of-the-art stacked wafer technology on 300-millimeter wafers. For this DSLR supplier, the first front side illumination project is progressing according to plan and is expected to ramp to volume production in 2020, while the second, stacked-wafer-based project, with industry-leading alignment accuracy and associated performance benefits, is expected to ramp to volume production a year after.

In addition, we are progressing on two very exciting programs in the augmented and virtual reality markets, one for 3D time-of-flight-based sensors and one for silicon-based screens for virtual reality head-mount displays.
"

Thursday, July 26, 2018

Loup Ventures LiDAR Technologies Comparison

Loup Ventures publishes its analysis of LiDAR technologies and how they compete with each other on the market:


There is also a comparison of camera, LiDAR and Radar technologies of autonomous vehicles:


Another Loup Ventures article tries to answer the question "If a human can drive a car based on vision alone, why can’t a computer?"

"While we believe Tesla can develop autonomous cars that “resemble human driving” primarily driven by cameras, the goal is to create a system that far exceeds human capability. For that reason, we believe more data is better, and cars will need advanced computer perception technologies such as RADAR and LiDAR to achieve a level of driving far superior to humans. However, since cameras are the only sensor technology that can capture texture, color and contrast information, they will play a key role in reaching level 4 and 5 autonomy and in turn represent a large market opportunity."

Wednesday, July 25, 2018

Synaptics Under-Display Fingerprint Scanner Reverse Engineering

SystemPlus Consulting publishes a reverse engineering report of Synaptics’ under-display fingerprint scanner found inside the VIVO X21 UD Smartphone:

"This scanner uses optical fingerprint technology that allows integration under the display. With a stainless steel support and two flexible printed circuit layers, the Synaptics fingerprint sensor’s dimensions are 6.46 mm x 9.09 mm, with an application specific integrated circuit (ASIC) driver in the flex module. This image sensor is also assembled with a glass substrate where filters are deposited.

The sensor has a resolution of 30,625 pixels, with a pixel density of 777 ppi. The module's light source is provided by the OLED display. The fingerprint module uses a collimator layer, corresponding to layers directly deposited on the sensor die and composed of organic, metallic and silicon layers. It allows only light rays reflected at normal incidence to the collimator filter layer to pass through and reach the optical sensing elements. The sensor is connected by wire bonding to the flexible printed circuit and uses a CMOS process.
"
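As a quick sanity check of the reported figures, assuming a square pixel array (the report does not state the array geometry), the active sensing area implied by 30,625 pixels at 777 ppi can be estimated:

```python
# Back-of-envelope check of the reported figures (30,625 pixels at 777 ppi).
pixels_total = 30_625
side_px = int(pixels_total ** 0.5)   # 175 x 175 pixel array, if square
ppi = 777
side_mm = side_px / ppi * 25.4       # pixels / (pixels per inch) -> inches -> mm
print(side_px, round(side_mm, 2))    # 175 5.72
```

An active area of roughly 5.7 mm per side is consistent with the reported 6.46 mm x 9.09 mm module outline, which also houses the flex and ASIC.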

Sensor die
ASIC die
Optical filter deposition on glass substrate

Microsoft Proposes Government Regulation of Facial Recognition Use

Microsoft President Brad Smith writes in the company blog: "Advanced technology no longer stands apart from society; it is becoming deeply infused in our personal and professional lives. This means the potential uses of facial recognition are myriad.

Some emerging uses are both positive and potentially even profound. But other potential applications are more sobering. Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first.

Perhaps as much as any advance, facial recognition raises a critical question: what role do we want this type of technology to play in everyday society?

This in fact is what we believe is needed today – a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.
"

Automotive News: Kyocera, Asahi Kasei, BrightWay Vision

Nikkei: Kyocera presents a "Camera-LiDAR Fusion Sensor" to be commercialized in 2022 or 2023. The sensor fuses the two kinds of data collected by the camera and the LiDAR, respectively, to improve the recognition rate of objects around the sensor. The LiDAR uses MEMS for beam scanning and has a resolution of 0.05°.

"When only LiDAR is used, it is difficult to detect the locations of objects," Kyocera said. "When only the camera is used, there is a possibility that shadows on the ground are mistakenly recognized as three-dimensional objects."

Kyocera Camera-LiDAR

Nikkei: Asahi Kasei develops a technology to measure the pulse of a driver based on video shot by a NIR camera. It exploits the fact that hemoglobin in the blood absorbs a large amount of green light: as the blood volume changes with each heartbeat, the brightness of the face in the video changes slightly, and a pulse rate can be calculated from these variations.

It takes about 8s to measure the pulse rate of a driver, including the face authentication. This helps to check the physical condition of the driver.
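The underlying signal processing can be sketched as below. This is a generic remote-photoplethysmography sketch, not Asahi Kasei's actual algorithm; the 0.7-3.0 Hz band and the FFT-peak method are illustrative assumptions:

```python
import numpy as np

def pulse_bpm(green_means, fps):
    """Estimate pulse rate (bpm) from per-frame mean green intensity of a face ROI."""
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()                        # remove DC (skin tone / illumination)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)  # plausible pulse: 42-180 bpm
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0

# Synthetic 8-second clip at 30 fps with a 1.25 Hz (75 bpm) pulse component
t = np.arange(8 * 30) / 30.0
sig = 100 + 0.5 * np.sin(2 * np.pi * 1.25 * t)
print(pulse_bpm(sig, 30))  # -> 75.0
```

The 8-second window mentioned in the article fits this picture: a longer clip gives finer frequency resolution for the spectral peak.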


TimesOfIsrael: Gated imaging startup BrightWay Vision reports selling "hundreds of its systems to customers including car manufacturing giants Daimler AG and Continental AG as well as Tier-1 companies and Original Equipment Manufacturers (OEMs) in Western Europe and the Far East. BrightWay has also signed a contract with a Chinese firm to supply it with 10,000 units in 2019 for trucks.

While the average range of headlight vision is between 50 and 120 meters, BrightWay’s technology enables drivers to get images at least 250 meters (820 feet) ahead, and in all lighting conditions.
"

Tuesday, July 24, 2018

SiOnyx Launches Aurora Day/Night Video Camera

BusinessWire: SiOnyx announces the official launch of the SiOnyx Aurora action video camera with true day and night color imaging. Aurora is based on the SiOnyx Ultra Low Light technology, which is protected by more than 40 patents and until now was only available in the highest-end night vision optics costing tens of thousands of dollars. The identical technology has now been cost-reduced for use in Aurora and other upcoming devices from SiOnyx and its partners.

SiOnyx backgrounder says: "SiOnyx XQE CMOS image sensors provide superior night vision, biometrics, eye tracking, and a natural human interface through our proprietary Black Silicon semiconductor technology. XQE image sensors deliver unprecedented performance advantages in infrared imaging, including sensitivity enhancements as high as 10x today’s sensor solutions.

XQE enhanced IR sensitivity takes advantage of the naturally occurring IR ‘nightglow’ to enable imaging under extreme (0.001 lux) conditions. XQE sensors provide high-quality daytime color as well as nighttime imaging capabilities that offer new levels of performance and threat detection.

As a result, SiOnyx’s Black Silicon platform represents a significant breakthrough in the development of smaller, lower cost, higher performance photonic devices.

The SiOnyx XQE technology is based on a proprietary laser process that creates the ultimate light trapping pixel, which is capable of increased quantum efficiency across the silicon band gap without damaging artifacts like dark current, non-uniformities, image lag or bandwidth limitations.

Compared to today’s CCD and CMOS image sensors, SiOnyx XQE CMOS sensors provide increased IR responsivity at the critical 850nm and 940nm wavelengths that are used in IR illumination. SiOnyx has more than 1,000 claims to the technology used in Black Silicon.

Surface modification of silicon enables SiOnyx to achieve the theoretical limit in light trapping, which results in extremely high absorption of both visible and infrared light.

The result is the industry’s best uncooled low light CMOS sensor that can be used in bright light (unlike standard night vision goggles) and can see and display color and display in high resolution (unlike thermal sensors).
"

Toward the Ultimate High-Speed Image Sensor

MDPI Sensors publishes a paper "Toward the Ultimate-High-Speed Image Sensor: From 10 ns to 50 ps" by Anh Quang Nguyen, Vu Truong Son Dao, Kazuhiro Shimonomura, Kohsei Takehara, and Takeharu Goji Etoh from Hanoi University of Science and Technology (Vietnam), Vietnam National University HCMC, Ritsumeikan University (Japan), and Kindai University (Japan).

"The paper summarizes the evolution of the Backside-Illuminated Multi-Collection-Gate (BSI MCG) image sensors from the proposed fundamental structure to the development of a practical ultimate-high-speed silicon image sensor. A test chip of the BSI MCG image sensor achieves a temporal resolution of 10 ns. The authors have derived the expression for the temporal resolution limit of photoelectron conversion layers; for silicon image sensors, the limit is 11.1 ps. By following the theoretical derivation, a high-speed image sensor can be designed to achieve a frame rate close to the theoretical limit. However, some of the conditions conflict with performance indices other than the frame rate, such as sensitivity and crosstalk. After adjusting these trade-offs, a simple pixel model of the image sensor is designed and evaluated by simulations. The results reveal that the sensor can achieve a temporal resolution of 50 ps with the existing technology."

Monday, July 23, 2018

Andor Reveals Details of its New BSI sCMOS Sona Camera

Andor publishes a video with some details of its new BSI sCMOS Sona sensor: 4.2MP, 32mm diagonal, up to 95% QE (was 82% in the non-BSI version), vacuum sealed and cooled to -45°C:



There is also a recording of a webinar explaining the new camera features here.

Update: Andor publishes an official PR on its Sona camera:

AIRY3D Raises $10m in Series A Funding

PRNewswire: AIRY3D, a Montreal-based 3D vision start-up, raises $10m in an oversubscribed Series A funding round led by Intel Capital, including all Seed round investors CRCM Ventures, Nautilus Venture Partners, R7 Partners, Robert Bosch Venture Capital (RBVC), and WI Harper Group along with several angel investors. This financing will allow AIRY3D to advance its licensing roadmap for the first commercial adoptions of its DepthIQ 3D sensor platform with top-tier mobile OEMs in 2019.

"The simplicity and cost-efficiency of AIRY3D's 3D sensor technology, which does not require multiple components, helps position AIRY3D's technology as a potential enabler for many target markets", said Dave Flanagan, VP and group managing director at Intel Capital.

"AIRY3D's DepthIQ platform is a cost-effective technology that can accelerate the adoption of several new applications such as facial, gesture and object recognition, portrait modes, professional 'Bokeh' effects, and image segmentation in mobile markets and beyond," said Ingo Ramesohl, Managing Director at Robert Bosch Venture Capital GmbH. "We see tremendous opportunities in one of the largest business areas of Bosch - the automotive industry. The adoption of 3D cameras inside the cabin and outside with ADAS can drive significant improvements in safety and facilitate the industry's shift towards autonomous vehicles."

"After concluding our Seed round in March 2017, we fabricated DepthIQ 3D sensors on state-of-the-art CMOS wafers in collaboration with a leading sensor OEM. AIRY3D has since demonstrated 3D camera prototypes and numerous use cases. With the support of Intel, Bosch and our other investors, we now are formalizing partnerships with camera sensor OEMs and design-in collaborations with industry leading end customers in our strategic markets," said Dan Button, CEO of AIRY3D.

AIRY3D's DepthIQ platform can convert any single 2D imaging sensor into one that generates a 2D image and 3D depth data. It combines simple optical physics (in situ Transmissive Diffraction Mask technology) with proprietary algorithms to deliver versatile 3D sensing solutions while preserving 2D performance:


Thanks to GP for the link!

Sony Announces 48MP 0.8um Pixel Sensor

Sony announces the 1/2-inch IMX586 stacked CMOS sensor for smartphones featuring 48MP, the industry’s highest pixel count. The new product uses a world-first pixel size of 0.8 μm.

The new sensor uses the Quad Bayer color filter array, where each 2x2 group of adjacent pixels shares the same color. During low-light shooting, the signals from the four adjacent pixels are added, raising the sensitivity to a level equivalent to that of 1.6 μm pixels (12MP).
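A minimal sketch of that 2x2 binning step, assuming a simple Quad Bayer mosaic layout; this illustrates the principle only, not Sony's actual on-chip processing:

```python
import numpy as np

def quad_bayer_bin(raw):
    """Sum each 2x2 same-color quad of a Quad Bayer mosaic.

    In Quad Bayer, each 2x2 block shares one color filter, so summing the
    four 0.8 um pixels yields one 'large pixel' sample (48MP -> 12MP),
    trading resolution for sensitivity (~1.6 um-pixel equivalent).
    """
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# Toy 4x4 mosaic: each 2x2 quad holds one color's samples
raw = np.array([[3, 3, 1, 1],
                [3, 3, 1, 1],
                [2, 2, 5, 5],
                [2, 2, 5, 5]])
print(quad_bayer_bin(raw))  # [[12 4], [8 20]] as a 2x2 array
```

In bright light, the sensor instead remosaics the Quad Bayer data to a conventional Bayer pattern to deliver the full 48MP resolution.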

12MP sensor (left) vs IMX586 (right)

Sunday, July 22, 2018

Review of Ion Implantation Technology for Image Sensors

MDPI Sensors publishes "A Review of Ion Implantation Technology for Image Sensors" by Nobukazu Teranishi, Genshu Fuse, and Michiro Sugitani from Shizuoka and Hyogo Universities and Sumitomo.

"Image sensors are so sensitive to metal contamination that they can detect even one metal atom per pixel. To reduce metal contamination, a plasma shower using RF (radio frequency) plasma generation is a representative example. An electrostatic angular energy filter after the mass-analyzing magnet is a highly effective method to remove energetic metal contamination. A protection layer on the silicon is needed to protect the silicon wafer against physisorbed metals; its thickness should be determined by considering the knock-on depth. Damage by ion implantation also causes blemishes. It becomes larger under the following conditions, other conditions being the same: (a) higher energy; (b) larger dose; (c) smaller beam size (higher beam current density); (d) longer ion beam irradiation time; (e) larger ion mass. To reduce channeling, the most effective method is to choose proper tilt and twist angles. For P+ pinning layer formation, the low-energy B+ implantation method might have less metal contamination and damage compared with the BF2+ method."

Friday, July 20, 2018

e2v on CCD vs CMOS Sensors

AZO Materials publishes a Teledyne e2v article "The Development of CMOS Image Sensors" with a table comparing CCD and CMOS sensors. Although I do not agree with some of the statements in the table, here it is:

Characteristic        | CCD                                    | CMOS
----------------------|----------------------------------------|--------------------------------------
Signal from pixel     | Electron packet                        | Voltage
Signal from chip      | Analog voltage                         | Bits (digital)
Readout noise         | Low                                    | Lower at equivalent frame rate
Fill factor           | High                                   | Moderate or low
Photo-response        | Moderate to high                       | Moderate to high
Sensitivity           | High                                   | Higher
Dynamic range         | High                                   | Moderate to high
Uniformity            | High                                   | Slightly lower
Power consumption     | Moderate to high                       | Low to moderate
Shuttering            | Fast, efficient                        | Fast, efficient
Speed                 | Moderate to high                       | Higher
Windowing             | Limited                                | Multiple
Anti-blooming         | High to none                           | High, always
Image artefacts       | Smearing, charge transfer inefficiency | FPN, motion (ERS), PLS
Biasing and clocking  | Multiple, higher voltage               | Single, low voltage
System complexity     | High                                   | Low
Sensor complexity     | Low                                    | High
Relative R&D cost     | Lower                                  | Lower or higher, depending on series

Valeo XtraVue Demos

Valeo publishes a video showing use cases for its XtraVue system, based on a camera, laser scanner, and vehicle-to-vehicle networking:



And another somewhat dated video has been re-posted on Valeo channel:



Thursday, July 19, 2018

Andor Teases BSI sCMOS Sensors

Andor publishes a teaser for its upcoming Sona BSI sCMOS sensor-based cameras, calling them "the world's most sensitive," to be officially unveiled on July 24, 2018: