New Tech Enhances Depth-Sensing Camera Capabilities
August 11, 2015 | Carnegie Mellon University
Depth-sensing cameras, such as Microsoft’s Kinect controller for video games, have become widely used 3-D sensors. Now, a new imaging technology invented by Carnegie Mellon University and the University of Toronto addresses a major shortcoming of these cameras: the inability to work in bright light, especially sunlight.
The key is to gather only the bits of light the camera actually needs. The researchers created a mathematical model to help program these devices so that the camera and its light source work together efficiently, eliminating extraneous light, or “noise,” that would otherwise wash out the signals needed to detect a scene’s contours.
A new depth-sensing camera technology developed by CMU and the University of Toronto can capture 3-D information in even brightly lit scenes; a prototype is able to sense the shape of a lit CFL bulb (above) that would create blinding glare for a conventional camera (below).
“We have a way of choosing the light rays we want to capture and only those rays,” said Srinivasa Narasimhan, CMU associate professor of robotics. “We don’t need new image-processing algorithms and we don’t need extra processing to eliminate the noise, because we don’t collect the noise. This is all done by the sensor.”
One prototype based on this model synchronizes a laser projector with a common rolling-shutter camera — the type of camera used in most smartphones — so that the camera detects light only from points being illuminated by the laser as it scans across the scene.
This makes it possible for the camera to work under extremely bright light or amid highly reflected or diffused light: it can capture the shape of a lightbulb that has been turned on, for instance, and see through smoke. It also makes the camera extremely energy efficient. This combination of features could make the imaging technology suitable for many applications, including medical imaging, inspection of shiny parts and sensing for robots used to explore the moon and planets. It also could be readily incorporated into most smartphones.
The researchers will present their findings today at SIGGRAPH 2015, the International Conference on Computer Graphics and Interactive Techniques, in Los Angeles.
Depth cameras work by projecting a pattern of dots or lines over a scene. Depending on how these patterns are deformed or how much time it takes light to reflect back to the camera, it is possible to calculate the 3-D contours of the scene.
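The two measurement principles described above can be sketched numerically. The following is a minimal illustration, not the researchers' actual pipeline; all function names and numbers are hypothetical, and real systems rely on careful calibration of the baseline and focal length:

```python
def depth_from_disparity(observed_px, projected_px, baseline_m, focal_px):
    """Structured-light triangulation: depth is proportional to the
    camera-projector baseline and focal length, and inversely
    proportional to the disparity (pattern shift) seen by the camera."""
    disparity = observed_px - projected_px
    if disparity <= 0:
        raise ValueError("pattern point must shift toward the camera")
    return baseline_m * focal_px / disparity

def depth_from_time_of_flight(round_trip_s, c=299_792_458.0):
    """Time of flight: light travels to the surface and back,
    so depth is half the round-trip distance."""
    return c * round_trip_s / 2.0

# A dot projected at column 100 but observed at column 110, with a
# 10 cm baseline and a 500-pixel focal length, lies 5 m away.
print(depth_from_disparity(110, 100, 0.10, 500))  # 5.0
```

Either way, the calculation only works if the projected pattern is detectable above the ambient light, which is the problem the new technique addresses.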
The problem is that these devices use compact projectors that operate at low power, so their faint patterns are washed out and undetectable when the camera captures ambient light from a scene. But as a projector scans a laser across the scene, the spots illuminated by the laser beam are brighter, if only briefly, noted Kyros Kutulakos, U of T professor of computer science.
“Even though we’re not sending a huge amount of photons, at short time scales, we’re sending a lot more energy to that spot than the energy sent by the sun,” he said. The trick is to be able to record only the light from that spot as it is illuminated, rather than try to pick out the spot from the entire bright scene.
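A rough back-of-envelope calculation shows why a weak laser can still outshine the sun at a single spot. The figures below are illustrative assumptions, not measurements from the prototype:

```python
def irradiance_w_per_m2(power_w, area_m2):
    """Optical power concentrated on a spot, per unit area."""
    return power_w / area_m2

SUNLIGHT = 1000.0  # direct sunlight at the surface, roughly 1 kW/m^2

# A modest 100 mW laser focused onto a 1 mm^2 spot:
spot = irradiance_w_per_m2(0.1, 1e-6)  # 100,000 W/m^2
print(spot / SUNLIGHT)  # the spot is ~100x brighter than sunlight while lit
```

Averaged over the whole frame the laser contributes little energy, but at the instant a given spot is illuminated it dominates the ambient light there.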
In the prototype using a rolling-shutter camera, this is accomplished by synchronizing the projector so that as the laser scans a particular plane, the camera accepts light only from that plane. Alternatively, if other camera hardware is used, the mathematical framework developed by the team can compute energy-efficient codes that optimize the amount of energy that reaches the camera.
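The synchronization idea can be sketched in a few lines. This is a simplified model of the timing, assuming the laser sweeps top to bottom once per frame; the actual prototype's control logic is not described at this level of detail:

```python
def exposed_row(t_s, frame_period_s, num_rows):
    """Which sensor row the rolling shutter should expose at time t,
    assuming the laser sweep and the shutter are phase-locked and
    both traverse the scene top to bottom once per frame."""
    phase = (t_s % frame_period_s) / frame_period_s
    return min(int(phase * num_rows), num_rows - 1)

# Halfway through a 10 ms frame, a 480-row sensor exposes row 240,
# the row the laser is illuminating at that moment.
print(exposed_row(0.005, 0.010, 480))  # 240
```

Because each row collects light only during the brief interval its plane is lit, ambient light integrated over the rest of the frame never reaches the sensor.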
In addition to enabling the use of Kinect-like devices to play videogames outdoors, the new approach could be used for medical imaging, such as capturing skin structures that would otherwise be obscured as light diffuses into the skin. Likewise, the system can see through smoke despite the light scattering that usually makes smoke impenetrable to cameras. Manufacturers also could use the system to look for anomalies in shiny or mirrored components.
William “Red” Whittaker, University Professor of Robotics at CMU, said the system offers a number of advantages for extraterrestrial robots. Because depth cameras actively illuminate scenes, they are suitable for use in darkness, such as inside craters, he noted. In polar regions of the moon, where the sun is always at a low angle, a vision system that is able to eliminate the glare is essential.
“Low-power sensing is very important,” Whittaker said, noting that a robot’s sensors expend a relatively large amount of energy because they are always on. “Every watt matters in a space mission.”
Narasimhan said depth cameras that can operate outdoors could be useful in automotive applications, such as in maintaining spacing between self-driving cars that are “platooned” — following each other at close intervals.
In addition to Narasimhan and Kutulakos, the research team included Supreeth Achar, a CMU Ph.D. student in robotics, and Matthew O’Toole, a U of T Ph.D. computer science student. The research was supported by the National Science Foundation, the U.S. Army Research Laboratory, and the Natural Sciences and Engineering Research Council of Canada.