New Models Sense Human Trust in Smart Machines
December 13, 2018 | Purdue University
Estimated reading time: 6 minutes
To validate their method, 581 online participants were asked to operate a driving simulation in which a computer identified road obstacles. In some scenarios, the computer correctly identified obstacles 100 percent of the time, whereas in other scenarios the computer incorrectly identified the obstacles 50 percent of the time.
“So, in some cases it would tell you there is an obstacle, so you hit the brakes and avoid an accident, but in other cases it would incorrectly tell you an obstacle exists when there was none, so you hit the brakes for no reason,” Reid said.
The testing allowed the researchers to identify psychophysiological features that correlate with human trust in intelligent systems, and to build a trust sensor model accordingly. “We hypothesized that the trust level would be high in reliable trials and be low in faulty trials, and we validated this hypothesis using responses collected from 581 online participants,” she said.
The results confirmed that the method effectively induced trust and distrust in the intelligent machine.
“In order to estimate trust in real time, we require the ability to continuously extract and evaluate key psychophysiological measurements,” Jain said. “This work represents the first use of real-time psychophysiological measurements for the development of a human trust sensor.”
The EEG headset records signals over nine channels, each channel picking up activity from a different part of the brain.
“Everyone’s brainwaves are different, so you need to make sure you are building a classifier that works for all humans.”
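As an illustration of what continuously extracting such measurements might look like, the sketch below computes band-power features from successive windows of a nine-channel EEG stream. The channel count matches the article, but the sampling rate, window length, frequency bands, and synthetic data are assumptions for illustration only; the article does not describe the researchers' actual signal-processing pipeline.

```python
# A minimal sketch of windowed EEG feature extraction (assumed parameters).
import numpy as np
from scipy.signal import welch

FS = 256            # assumed sampling rate (Hz)
N_CHANNELS = 9      # per the article, the headset records nine channels
WINDOW_SEC = 2.0    # assumed analysis window for real-time estimation

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(window: np.ndarray) -> np.ndarray:
    """Return per-channel band powers for one (channels x samples) window."""
    freqs, psd = welch(window, fs=FS, nperseg=min(window.shape[1], 256))
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))   # mean power per channel
    return np.concatenate(feats)                  # 9 channels x 3 bands = 27 features

# Simulate a continuous stream and extract features window by window.
rng = np.random.default_rng(0)
stream = rng.standard_normal((N_CHANNELS, FS * 10))   # 10 s of synthetic "EEG"
step = int(FS * WINDOW_SEC)
features = [band_powers(stream[:, i:i + step])
            for i in range(0, stream.shape[1] - step + 1, step)]
print(np.array(features).shape)   # (5, 27): one feature vector per window
```

Each window yields one feature vector, which is the kind of input a downstream trust classifier would consume in real time.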
For autonomous systems, human trust can be classified into three categories: dispositional, situational, and learned.
Dispositional trust refers to the component of trust that is dependent on demographics such as gender and culture, which carry potential biases.
“We know there are probably nuanced differences that should be taken into consideration,” Reid said. “Women trust differently than men, for example, and trust also may be affected by differences in age and nationality.”
Situational trust may be affected by a task’s level of risk or difficulty, while learned trust is based on the human’s past experience with autonomous systems.
The models they developed are called classification algorithms.
“The idea is to be able to use these models to classify when someone is likely feeling trusting versus likely feeling distrusting,” she said.
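A minimal sketch of that classification step is shown below: a binary classifier trained to map psychophysiological feature vectors to “trusting” versus “distrusting” labels. The synthetic data and the choice of logistic regression are placeholders; the article does not specify which classification algorithm the team used.

```python
# A minimal sketch of trust/distrust classification on synthetic features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n, n_features = 400, 27                       # e.g., the 27 EEG features above
X = rng.standard_normal((n, n_features))
# Synthetic labels: 1 = trusting, 0 = distrusting (for illustration only).
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```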
Jain and Reid have also investigated dispositional trust to account for gender and cultural differences, as well as dynamic models that can predict how trust will change over time based on the data.
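As a rough illustration of what a dynamic trust model could look like, the sketch below updates a scalar trust estimate after each interaction with the machine, rising after correct identifications and dropping more sharply after faults. The update rule and its coefficients are assumptions for illustration, not the researchers’ published model.

```python
# A minimal sketch of a first-order dynamic trust update (assumed rule).
def update_trust(trust: float, machine_correct: bool,
                 gain: float = 0.2, penalty: float = 0.4) -> float:
    """Nudge trust up after a correct identification, down more sharply after a fault."""
    if machine_correct:
        trust += gain * (1.0 - trust)
    else:
        trust -= penalty * trust
    return min(max(trust, 0.0), 1.0)

# Example: a reliable streak followed by two faults.
trust = 0.5
for outcome in [True, True, True, False, False]:
    trust = update_trust(trust, outcome)
    print(f"machine correct={outcome}, estimated trust={trust:.2f}")
```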
The research is funded by the National Science Foundation. The researchers have published several papers since the work began in 2015.