New Models Sense Human Trust in Smart Machines
December 13, 2018 | Purdue University

New “classification models” sense how well humans trust intelligent machines they collaborate with, a step toward improving the quality of interactions and teamwork.
The long-term goal of the overall field of research is to design intelligent machines capable of changing their behavior to enhance human trust in them. The new models were developed in research led by assistant professor Neera Jain and associate professor Tahira Reid, in Purdue University’s School of Mechanical Engineering.
“Intelligent machines, and more broadly, intelligent systems are becoming increasingly common in the everyday lives of humans,” Jain said. “As humans are increasingly required to interact with intelligent systems, trust becomes an important factor for synergistic interactions.”
For example, aircraft pilots and industrial workers routinely interact with automated systems. Humans will sometimes override these intelligent machines unnecessarily if they think the system is faltering.
“It is well established that human trust is central to successful interactions between humans and machines,” Reid said.
The researchers have developed two types of “classifier-based empirical trust sensor models,” a step toward improving trust between humans and intelligent machines.
The work aligns with Purdue's Giant Leaps celebration, acknowledging the university's global advancements in AI, algorithms and automation as part of Purdue's 150th anniversary. This is one of the four themes of the yearlong celebration's Ideas Festival, designed to showcase Purdue as an intellectual center solving real-world issues.
The models use two techniques that provide data to gauge trust: electroencephalography and galvanic skin response. The first records brainwave patterns, and the second monitors changes in the electrical characteristics of the skin, providing psychophysiological “feature sets” correlated with trust.
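As a rough illustration of what such a psychophysiological feature set might look like, the sketch below combines the power of an EEG signal in a frequency band with the mean skin-conductance level. Everything here is hypothetical and not drawn from the paper: the sampling rate, the choice of the alpha band, and the `band_power` and `feature_vector` helpers are assumptions made for this example, and the naive DFT is used only to keep the code self-contained.

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` in the [f_lo, f_hi] Hz band via a naive DFT.
    Illustration only; real pipelines would use an FFT library."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = -sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            power += (re * re + im * im) / n
    return power

def feature_vector(eeg, gsr, fs=128):
    # Hypothetical feature set: EEG alpha-band (8-12 Hz) power
    # plus the mean galvanic skin response level.
    return [band_power(eeg, fs, 8.0, 12.0), sum(gsr) / len(gsr)]
```

A classifier would then be trained on many such vectors, each labeled with the participant's trust state at the time the signals were recorded.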
Forty-five human subjects donned wireless EEG headsets and wore a device on one hand to measure galvanic skin response.
One of the new models, a “general trust sensor model,” uses the same set of psychophysiological features for all 45 participants. The other model is customized for each human subject, resulting in improved mean accuracy at the expense of increased training time. The two models had mean accuracies of 71.22 percent and 78.55 percent, respectively.
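The general-versus-customized trade-off can be sketched with a toy experiment: train one pooled classifier across all subjects, and one classifier per subject, on the same data. The nearest-centroid classifier, the synthetic per-subject baseline offsets, and all numbers below are illustrative stand-ins, not the study's actual features, classifiers, or results.

```python
import random

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def train(samples):
    """Nearest-centroid classifier: one centroid per label (e.g. trust / distrust)."""
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda y: dist(model[y]))

def accuracy(model, samples):
    return sum(predict(model, x) == y for x, y in samples) / len(samples)

# Synthetic subjects: each has a personal offset, mimicking individual
# physiological baselines that a customized model can absorb.
random.seed(0)
subjects = []
for _ in range(5):
    offset = random.uniform(-1, 1)
    data = [([random.gauss(y + offset, 0.5)], y) for y in (0, 1) for _ in range(40)]
    subjects.append(data)

general = train([s for subj in subjects for s in subj])  # one pooled model
general_acc = sum(accuracy(general, subj) for subj in subjects) / len(subjects)
custom_acc = sum(accuracy(train(subj), subj) for subj in subjects) / len(subjects)
```

Because each per-subject model fits that subject's own baseline, the customized approach tends to score higher, mirroring the trade-off the researchers report: better accuracy, but a separate training pass for every user.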
Measuring Trust
Purdue University researchers are the first to use EEG measurements to estimate human trust of intelligent machines in real time. (Purdue University photo/Marshall Farthing)
It is the first time EEG measurements have been used to gauge trust in real time, or without delay.
“We are using these data in a very new way,” Jain said. “We are looking at it in sort of a continuous stream as opposed to looking at brain waves after a specific trigger or event.”
Findings are detailed in a research paper appearing in a special issue of the Association for Computing Machinery’s Transactions on Interactive Intelligent Systems. The journal’s special issue is titled "Trust and Influence in Intelligent Human-Machine Interaction." The paper was authored by mechanical engineering graduate student Kumar Akash; former graduate student Wan-Lin Hu, who is now a postdoctoral research associate at Stanford University; Jain and Reid.
“We are interested in using feedback-control principles to design machines that are capable of responding to changes in human trust level in real time to build and manage trust in the human-machine relationship,” Jain said. “In order to do this, we require a sensor for estimating human trust level, again in real-time. The results presented in this paper show that psychophysiological measurements could be used to do this.”
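The feedback-control idea Jain describes can be caricatured as a simple loop: the machine adjusts some aspect of its behavior (here, a hypothetical "transparency" level, i.e. how much it explains itself) in proportion to the gap between the estimated and desired trust levels. The controller gain, the trust target, and the toy human-trust dynamics below are all assumptions made for illustration, not the researchers' design.

```python
def update_transparency(transparency, trust_est, trust_target=0.7, gain=0.5):
    """One step of a proportional controller: if estimated trust is below
    the target, increase how much the machine explains itself (clamped to [0, 1])."""
    transparency += gain * (trust_target - trust_est)
    return min(1.0, max(0.0, transparency))

# Simulated loop: assume the human's trust drifts toward the machine's
# transparency level (a toy stand-in for real human-trust dynamics).
trust, transparency = 0.3, 0.2
for _ in range(20):
    transparency = update_transparency(transparency, trust)
    trust += 0.3 * (transparency - trust)
```

In a real system, `trust_est` would come from the trust sensor model rather than from simulated dynamics; the point of the sketch is only the closed loop from estimated trust back to machine behavior.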
The issue of human trust in machines is important for the efficient operation of “human-agent collectives.”
“The future will be built around human-agent collectives that will require efficient and successful coordination and collaboration between humans and machines,” Jain said. “Say there is a swarm of robots assisting a rescue team during a natural disaster. In our work we are dealing with just one human and one machine, but ultimately we hope to scale up to teams of humans and machines.”
Algorithms have been introduced to automate various processes.
“But we still have humans there who monitor what’s going on,” Jain said. “There is usually an override feature, where if they think something isn’t right they can take back control.”
Sometimes this action isn’t warranted.
“You have situations in which humans may not understand what is happening so they don’t trust the system to do the right thing,” Reid said. “So they take back control even when they really shouldn’t.”
In some cases, such as a pilot overriding the autopilot, taking back control can actually hinder safe operation of the aircraft and cause accidents.