Machine Learning Advances Human-Computer Interaction
March 14, 2017 | University of Rochester
Many people cite glossophobia—the fear of public speaking—as their greatest fear.
Ehsan Hoque and his colleagues at the University’s Human-Computer Interaction Lab have developed computerized speech assistants to help combat this phobia and improve speaking skills.
When we talk to someone, many of the things we communicate—facial expressions, gestures, eye contact—aren’t registered by our conscious minds. A computer, however, is adept at analyzing this information.
“I want to learn about the social rules of human communication,” says Hoque, an assistant professor of computer science and head of the Human-Computer Interaction Lab. “There is this dance going on when humans communicate: I ask a question; you nod your head and respond. We all do the dance but we don’t always understand how it works.”
In order to better understand this dance, Hoque developed computerized assistants that can sense a speaker’s body language and nuances in presentation and use those to help the speaker improve her communication skills. These systems include ROCSpeak, which analyzes word choice, volume, and body language; Rhema, a “smart glasses” interface that provides live, visual feedback on the speaker’s volume and speaking rate; and, his newest system, LISSA (“Live Interactive Social Skills Assistance”), a virtual character resembling a college-age woman who can see, listen, and respond to users in a conversation. LISSA provides live and post-session feedback about the user’s spoken and nonverbal behavior.
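The article doesn't publish Rhema's internals, but the kind of live feedback it describes (cues on volume and speaking rate) can be illustrated with a minimal sketch. All function names and thresholds below are hypothetical assumptions, not the lab's actual software:

```python
# A toy sketch of Rhema-style live feedback: for each short window of a
# talk, flag when the speaker's volume or pace drifts outside a
# comfortable range. Thresholds here are illustrative, not from the lab.
import math

def rms_volume(samples):
    """Root-mean-square amplitude of one audio frame (values in [-1, 1])."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def feedback(samples, words_spoken, seconds,
             min_volume=0.05, min_wpm=110, max_wpm=180):
    """Return short cues ('louder', 'slower', 'faster') for one window."""
    cues = []
    if rms_volume(samples) < min_volume:
        cues.append("louder")
    wpm = words_spoken / seconds * 60  # words per minute in this window
    if wpm > max_wpm:
        cues.append("slower")
    elif wpm < min_wpm:
        cues.append("faster")
    return cues
```

For a quiet, rushed window, `feedback([0.01] * 100, 40, 10)` would return `["louder", "slower"]`; a well-paced, audible window returns no cues at all, which is the point of a glanceable smart-glasses display.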
Hoque’s systems differ from Luo’s social media algorithms and Howard’s natural language robot models in that people can use them in their own homes. Users then have the option of sharing, for research purposes, the data they receive from the systems. This approach allows the algorithms to improve continuously, the essence of machine learning.
“New data constantly helps the algorithm improve,” Hoque says. “This is of value for both parties, because people benefit from the technology and, while they’re using it, they’re helping the system get better by providing feedback.”
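That feedback loop can be illustrated with a toy model. The class below is an assumption for illustration only, not the lab's algorithm: each session a user chooses to share nudges the system's estimate of an ideal speaking rate, computed as a running mean over well-rated talks:

```python
# A toy illustration of learning from user-shared data: keep a running
# mean of speaking rates from sessions that listeners rated highly.
class PaceModel:
    def __init__(self, ideal_wpm=150.0):
        self.ideal_wpm = ideal_wpm  # prior guess, replaced as data arrives
        self.n = 0                  # number of accepted sessions so far

    def update(self, session_wpm, listener_rating):
        """Incorporate one shared session; ratings range 0 (poor) to 1 (great)."""
        if listener_rating >= 0.8:  # learn only from well-received talks
            self.n += 1
            # incremental running-mean update over accepted sessions
            self.ideal_wpm += (session_wpm - self.ideal_wpm) / self.n
```

Sessions with low ratings are simply ignored, so the target only reflects talks that audiences actually liked; the more people share, the better the estimate, which is the two-way benefit Hoque describes.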
These systems have a wide range of applications, including improving small talk, helping individuals with Asperger syndrome overcome social difficulties, helping doctors interact with patients more effectively, improving customer service training, and aiding public speaking.
Can Robots Eventually Mimic Humans?
This is a question that has long lurked in the public imagination. The 2014 movie Ex Machina, for example, portrays a programmer who is invited to administer the Turing Test to a human-like robot named Ava. Similarly, the HBO television series Westworld depicts a Western-themed futuristic theme park populated with artificial intelligent beings that behave and emote like humans.
Although Hoque is able to model human cognition and improve the ways machines and humans interact, he does not aim to develop machines that think like human beings, or that understand and display the emotional complexity of human beings.
“I want the computer to be my companion, to help make my job easier and give me feedback,” he says. “But it should know its place.”
“If you have the option, get feedback from a real human. If that is not available, computers are there to help and give you feedback on certain aspects that humans will never be able to get at.”
Hoque cites smile intensity as an example. Through machine learning techniques, computers can measure the intensity of various facial expressions, whereas humans are adept at answering the question, “How did that smile make me feel?”
“I don’t think we want computers to be there,” Hoque says.