New Technology Using Radar Can Identify Materials and Objects in Real Time
October 11, 2016 | University of St Andrews

A revolutionary piece of technology, created by researchers at the University of St Andrews, can detect what an object is by placing it on a small radar sensor.
The device, called RadarCat (Radar Categorisation for Input and Interaction), can be trained to recognise different objects and materials, from a drinking glass to a computer keyboard, and can even identify individual body parts.
The system, which employs a radar signal, has a range of potential applications: helping blind people distinguish the contents of two identical bottles, automating drink refills in restaurants, replacing barcodes at checkout, sorting waste automatically, or even supporting foreign-language learning.
Designed by computer scientists at the St Andrews Computer Human Interaction (SACHI) research group, the sensor was originally provided by Google ATAP (Advanced Technology and Projects) as part of its Project Soli alpha developer kit program. The radar-based sensor was developed to sense the subtle micro-motions of human fingers, but the team at St Andrews discovered it could be used for much more.
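In broad terms, a system like RadarCat works by extracting features from the reflected radar signal and training a classifier to map those features to object or material labels. The sketch below illustrates that general idea only; the feature generator, material names, and choice of classifier are all invented for illustration and are not the St Andrews team's actual method or data.

```python
# Hypothetical sketch of radar-based material classification.
# All features and labels here are synthetic stand-ins: each "material"
# is given a distinct mean reflection profile across radar channels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

MATERIALS = ["glass", "keyboard_plastic", "skin"]

def synthetic_radar_features(material_idx, n_samples, n_channels=8):
    """Fake per-material radar signatures: a material-specific baseline
    across channels, plus measurement noise."""
    base = np.linspace(0.2, 1.0, n_channels) * (material_idx + 1)
    return base + rng.normal(scale=0.05, size=(n_samples, n_channels))

# Build a labelled training set (60 readings per material).
X = np.vstack([synthetic_radar_features(i, 60) for i in range(len(MATERIALS))])
y = np.repeat(np.arange(len(MATERIALS)), 60)

# Train a classifier to map radar features to material labels.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Classify one fresh, unseen reading of each material.
probe = np.vstack([synthetic_radar_features(i, 1) for i in range(len(MATERIALS))])
predicted = [MATERIALS[p] for p in clf.predict(probe)]
print(predicted)
```

Because the synthetic profiles are well separated, the classifier recovers each label; real radar returns would of course require far richer features and many more training samples per object.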
Professor Aaron Quigley, Chair of Human Computer Interaction at the University, explained, “The Soli miniature radar opens up a wide-range of new forms of touchless interaction. Once Soli is deployed in products, our RadarCat solution can revolutionise how people interact with a computer, using everyday objects that can be found in the office or home, for new applications and novel types of interaction.”
The system could be used in conjunction with a mobile phone; for example, it could be trained to open a recipe app when the phone is held against the stomach, or to change its settings when operated with a gloved hand.
A team of undergraduate and postgraduate students at the University's School of Computer Science was selected to show the project to Google in Mountain View, California, earlier this year. A snippet of the video was also shown on stage during Google's annual developer conference, Google I/O.
Professor Quigley continued, “Our future work will explore object and wearable interaction, new features and fewer sample points to explore the limits of object discrimination.
“Beyond human computer interaction, we can also envisage a wide range of potential applications ranging from navigation and world knowledge to industrial or laboratory process control”.