Smarter Experiments for Faster Materials Discovery
August 30, 2019 | Brookhaven National Laboratory

A team of scientists from the U.S. Department of Energy’s Brookhaven National Laboratory and Lawrence Berkeley National Laboratory designed, created, and successfully tested a new algorithm to make smarter scientific measurement decisions. The algorithm, a form of artificial intelligence (AI), can make autonomous decisions to define and perform the next step of an experiment. The team described the capabilities and flexibility of their new measurement tool in a paper published on August 14, 2019, in Scientific Reports.
From Galileo and Newton to the recent discovery of gravitational waves, performing scientific experiments to understand the world around us has been the driving force of our technological advancement for hundreds of years. Improving the way researchers do their experiments can have tremendous impact on how quickly those experiments yield applicable results for new technologies.
Over recent decades, researchers have sped up their experiments through automation and an ever-growing assortment of fast measurement tools. However, some of the most interesting and important scientific challenges—such as creating improved battery materials for energy storage or new quantum materials for new types of computers—still require very demanding and time-consuming experiments.
By creating a new decision-making algorithm as part of a fully automated experimental setup, the interdisciplinary team from two of Brookhaven’s DOE Office of Science user facilities—the Center for Functional Nanomaterials (CFN) and the National Synchrotron Light Source II (NSLS-II)—and Berkeley Lab’s Center for Advanced Mathematics for Energy Research Applications (CAMERA) makes it possible to study these challenges more efficiently.
The challenge of complexity
The goal of many experiments is to gain knowledge about the material that is studied, and scientists have a well-tested way to do this: They take a sample of the material and measure how it reacts to changes in its environment.
A standard approach for scientists at user facilities like NSLS-II and CFN is to manually scan through the measurements from a given experiment to determine the next area where they might want to run an experiment. But access to these facilities’ high-end materials-characterization tools is limited, so measurement time is precious. A research team might only have a few days to measure their materials, so they need to make the most out of each measurement.
"The key to achieving a minimum number of measurements and maximum quality of the resulting model is to go where uncertainties are large,” said Marcus Noack, a postdoctoral scholar at CAMERA and lead author of the study. “Performing measurements there will most effectively reduce the overall model uncertainty.”
As Kevin Yager, a co-author and CFN scientist, pointed out, “The final goal is not only to take data faster but also to improve the quality of the data we collect. I think of it as experimentalists switching from micromanaging their experiment to managing at a higher level. Instead of having to decide where to measure next on the sample, the scientists can instead think about the big picture, which is ultimately what we as scientists are trying to do.”
“This new approach is an applied example of artificial intelligence,” said co-author Masafumi Fukuto, a scientist at NSLS-II. “The decision-making algorithm is replacing the intuition of the human experimenter and can scan through the data and make smart decisions about how the experiment should proceed.”
More information for less?
In practice, before starting an experiment, the scientists define a set of goals they want to get out of the measurement. With these goals set, the algorithm looks at the previously measured data while the experiment is ongoing to determine the next measurement. In its search for the best next measurement, the algorithm creates a surrogate model of the data, which is an educated guess as to how the material will behave in the next possible steps, and calculates the uncertainty—basically how confident it is in its guess—for each possible next step. Based on this, it then selects the most uncertain option to measure next. The trick is that by picking the most uncertain step to measure next, the algorithm maximizes the amount of knowledge it gains from that measurement. The algorithm not only maximizes the information gain during the measurement but also determines when to end the experiment by identifying the point at which additional measurements would yield no further knowledge.
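To make that loop concrete, the sketch below shows uncertainty-driven measurement selection with a Gaussian-process surrogate, the class of model the study builds on. It is written in Python with scikit-learn; the `measure` function, candidate grid, kernel, and stopping threshold are illustrative assumptions standing in for a real instrument and the team’s actual code, not a reproduction of it.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def measure(x):
    """Stand-in for the instrument: 'measure' the sample at position x.

    A real setup would move the sample stage and read out a detector;
    here we fake a smooth signal with a little noise.
    """
    return np.sin(3.0 * x) + 0.05 * np.random.randn()

# Positions the instrument could measure next (the "possible next steps").
candidates = np.linspace(0.0, 2.0 * np.pi, 200).reshape(-1, 1)

# Seed the surrogate model with a few initial measurements.
X = [candidates[10], candidates[100], candidates[190]]
y = [measure(x[0]) for x in X]

# Surrogate model: alpha accounts for the assumed measurement noise.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                              alpha=0.05**2, normalize_y=True)

for step in range(50):
    gp.fit(np.array(X), np.array(y))       # refit surrogate to all data so far
    _, sigma = gp.predict(candidates, return_std=True)
    if sigma.max() < 0.02:                  # stop when no step is informative enough
        print(f"Stopping after {len(X)} measurements: uncertainty is low everywhere.")
        break
    x_next = candidates[np.argmax(sigma)]   # go where the model is least certain
    X.append(x_next)
    y.append(measure(x_next[0]))
```

Each pass refits the surrogate, asks where it is least certain, measures there, and repeats, which is the “go where uncertainties are large” strategy Noack describes above; the threshold check is a simple version of the stopping rule that ends the experiment once no remaining measurement would add knowledge.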