Can You Feel What I’m Saying?
November 8, 2018 | Rice University
Imagine the panic. Fire alarms blare. Smoke fills the room, and you’re left only with the sense of touch, feeling desperately along walls as you try to find the doorway.
Image Caption: A new study by researchers from Rice University’s Mechatronics and Haptic Interfaces Laboratory found users needed less than two hours of training to learn to “feel” most words transmitted by a haptic armband that communicates with signals composed of squeeze, stretch and vibration. (Photo by Jeff Fitlow/Rice University)
Now imagine technology guiding you by sense of touch. Your smartwatch, alerted by the same alarms, begins “speaking” through your skin, giving directions with coded vibrations, squeezes and tugs with meanings as clear as spoken words.
That scenario could play out in the future thanks to technology under development in the laboratory of Rice mechanical engineer Marcia O’Malley, who has spent more than 15 years studying how people can use haptic sense to interact with technology — be it robots, prosthetic limbs or stroke-rehabilitation software.
“Skin covers our whole body and has many kinds of receptors in it, and we see that as an underutilized channel of information,” said O’Malley, director of the Rice Robotics Initiative and Rice’s Mechatronics and Haptic Interfaces Laboratory (MAHI).
Emergency situations like the fire scenario described above are just one example. O’Malley said there are many “other situations where you might not want to look at a screen, or you already have a lot of things displayed visually. For example, a surgeon or a pilot might find it very useful to have another channel of communication.”
With new funding from the National Science Foundation, O’Malley and Stanford University collaborator Allison Okamura will soon begin designing and testing soft, wearable devices that allow direct touch-based communications from nearby robots. The funding, which is made possible by the National Robotics Initiative, is geared toward developing new forms of communication that bypass visual clutter and noise to quickly and clearly communicate.
“Some warehouses and factories already have more robots than human workers, and technologies like self-driving cars and physically assistive devices will make human-robot interactions far more common in the near future,” said O’Malley, Rice’s Stanley C. Moore Professor of Mechanical Engineering and professor of both computer science and electrical and computer engineering.
Soft, wearable devices could be part of a uniform, like a sleeve, glove, watchband or belt. By delivering a range of haptic cues, such as a hard or soft squeeze, or a stretch of the skin in a particular direction and place, O’Malley said it may be possible to build a significant “vocabulary” of sensations that carry specific meanings.
“I can see a car’s turn signal, but only if I’m looking at it,” O’Malley said. “We want technology that allows people to feel the robots around them and to clearly understand what those robots are about to do and where they are about to be. Ideally, if we do this correctly, the cues will be easy to learn and intuitive.”
For example, in a study presented this month at the International Symposium on Wearable Computers (ISWC) in Singapore, MAHI graduate student Nathan Dunkelberger showed that users needed less than two hours of training to learn to “feel” most words that were transmitted by a haptic armband. The MAHI-developed “multi-sensory interface of stretch, squeeze and integrated vibrotactile elements,” or MISSIVE, consists of two bands that fit around the upper arm. One of these can gently squeeze, like a blood-pressure cuff, and can also slightly stretch or tug the skin in one direction. The second band has vibrotactile motors — the same vibrating alarms used in most cellphones — at the front, back, left and right sides of the arm.
Using these cues in combination, MAHI created a vocabulary of 23 of the most common vocal sounds for English speakers. These sounds, which are called phonemes, are used in combination to make words. For example, the words “ouch” and “chow” contain the same two phonemes, “ow” and “ch,” in different order. O’Malley said communicating with phonemes is faster than spelling words letter by letter, and subjects don’t need to know how a word is spelled, only how it’s pronounced.
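The idea of composing words from a small phoneme vocabulary can be sketched in code. The snippet below is a hypothetical illustration only: the cue patterns and phoneme-to-cue mappings are invented for the example and are not the actual MISSIVE encodings.

```python
# Hypothetical phoneme-to-haptic-cue table. The cue tuples
# (squeeze, stretch, vibration position) are invented for
# illustration, not MAHI's real mappings.
PHONEME_CUES = {
    "ow": ("squeeze:soft", "vibrate:front"),
    "ch": ("stretch:tug", "vibrate:left"),
}

def encode_word(phonemes):
    """Translate a word's phoneme sequence into a sequence of haptic cues."""
    return [PHONEME_CUES[p] for p in phonemes]

# "ouch" and "chow" contain the same two phonemes in different order,
# so their cue sequences are reversals of each other.
ouch = encode_word(["ow", "ch"])
chow = encode_word(["ch", "ow"])
assert ouch == list(reversed(chow))
```

Because the encoding works at the phoneme level, a word needs only its pronunciation, not its spelling, which is what makes it faster than letter-by-letter transmission.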
Dunkelberger said English speakers use 39 phonemes, but for the proof-of-concept study, he and colleagues at MAHI used 23 of the most common. In tests, subjects were given limited training — just 1 hour, 40 minutes — which involved hearing the spoken phoneme while also feeling it displayed by MISSIVE. In later tests, subjects were asked to identify 150 spoken words consisting of two to six phonemes each. Those tested got 86 percent of the words correct.
“What this shows is that it’s possible, with a limited amount of training, to teach people a small vocabulary of words that they can recall with high accuracy,” O’Malley said. “And there are definitely things we could optimize. We could make the cues more salient. We could refine the training protocol. This was our prototype approach, and it worked pretty well.”
In the NSF project, she said, the team will focus not on conveying words but on conveying non-verbal information.
“There are many potential applications for wearable haptic feedback systems to allow for communication between individuals, between individuals and robots or between individuals and virtual agents like Google maps,” O’Malley said. “Imagine a smartwatch that can convey a whole language of cues to you directly, and privately, so that you don’t have to look at your screen at all!”