First Programmable Memristor Computer Aims to Bring AI Processing Down from the Cloud
July 18, 2019 | University of Michigan | Estimated reading time: 4 minutes

The memristor array chip plugs into the custom computer chip, forming the first programmable memristor computer. The team demonstrated that it could run three standard types of machine learning algorithms. Image credit: Robert Coelius, Michigan Engineering.
ANN ARBOR—The first programmable memristor computer—not just a memristor array operated through an external computer—has been developed at the University of Michigan.
It could lead to the processing of artificial intelligence directly on small, energy-constrained devices such as smartphones and sensors. A smartphone AI processor would mean that voice commands would no longer have to be sent to the cloud for interpretation, speeding up response time.
“Everyone wants to put an AI processor on smartphones, but you don’t want your cell phone battery to drain very quickly,” said Wei Lu, U-M professor of electrical and computer engineering and senior author of the study in Nature Electronics.
In medical devices, the ability to run AI algorithms without the cloud would enable better security and privacy.
Why Memristors are Good for Machine Learning
The key to making this possible could be an advanced computer component called the memristor. This circuit element, an electrical resistor with a memory, has a variable resistance that can serve as a form of information storage. Because memristors store and process information in the same location, they can get around the biggest bottleneck for computing speed and power: the connection between memory and processor.
This is especially important for machine-learning algorithms that deal with lots of data to do things like identify objects in photos and videos—or predict which hospital patients are at higher risk of infection. Already, programmers prefer to run these algorithms on graphical processing units rather than a computer’s main processor, the central processing unit.
“GPUs and very customized and optimized digital circuits are considered to be about 10-100 times better than CPUs in terms of power and throughput,” Lu said. “Memristor AI processors could be another 10-100 times better.”
GPUs perform better at machine learning tasks because they have thousands of small cores for running calculations all at once, as opposed to the string of calculations waiting their turn on one of the few powerful cores in a CPU.
A memristor array takes this even further. Each memristor is able to do its own calculation, allowing thousands of operations within a core to be performed at once. In this experimental-scale computer, there were more than 5,800 memristors. A commercial design could include millions of them.
Memristor arrays are especially suited to machine learning problems because machine learning algorithms turn data into vectors—essentially, lists of data points. In predicting a patient’s risk of infection in a hospital, for instance, this vector might list numerical representations of a patient’s risk factors.
Then, machine learning algorithms compare these “input” vectors with “feature” vectors stored in memory. These feature vectors represent certain traits of the data (such as the presence of an underlying disease). If matched, the system knows that the input data has that trait. The vectors are stored in matrices, which are like the spreadsheets of mathematics, and these matrices can be mapped directly onto the memristor arrays.
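As a rough illustration of that matching step, here is a minimal sketch (in plain NumPy, not code from the study) that treats the comparison as a dot-product similarity: the feature vectors sit as rows of a matrix, and scoring an input vector against all of them is a single matrix-vector product, the same operation a memristor array is mapped to. The traits and numbers are purely hypothetical.

```python
import numpy as np

# Feature vectors stored as rows of a matrix; each row represents one trait
# (e.g., the presence of an underlying disease). Values are illustrative.
feature_matrix = np.array([
    [0.9, 0.1, 0.0, 0.3],   # hypothetical trait A
    [0.0, 0.8, 0.7, 0.1],   # hypothetical trait B
    [0.2, 0.0, 0.1, 0.9],   # hypothetical trait C
])

# Input vector: numerical representations of one patient's risk factors.
patient = np.array([0.8, 0.2, 0.1, 0.4])

# Comparing the input against every stored feature vector at once is a
# single matrix-vector product.
scores = feature_matrix @ patient
print(scores, "-> closest trait:", int(np.argmax(scores)))
```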
What’s more, as data is fed through the array, the bulk of the mathematical processing occurs through the natural resistances in the memristors, eliminating the need to move feature vectors in and out of the memory to perform the computations. This makes the arrays highly efficient at complicated matrix calculations. Earlier studies demonstrated the potential of memristor arrays for speeding up machine learning, but they needed external computing elements to function.
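To make that concrete, the idealized sketch below models the crossbar arithmetic directly: the feature matrix is programmed as memristor conductances, the input vector is applied as voltages on the rows, and Ohm's law plus Kirchhoff's current law make each column current equal to one entry of the matrix-vector product, with no weights moving in or out of memory. This is a simplification under ideal-device assumptions (no wire resistance, noise, or precision limits), and the values are illustrative.

```python
import numpy as np

# Conductances G (in siemens): one memristor per matrix entry, programmed to
# encode the feature weights. Rows = input lines, columns = output lines.
G = 1e-6 * np.array([
    [1.0, 0.2, 0.5],
    [0.3, 0.9, 0.1],
    [0.7, 0.4, 0.8],
    [0.1, 0.6, 0.2],
])

# The input vector is applied as voltages on the rows.
V = np.array([0.8, 0.2, 0.1, 0.4])  # volts

# Ohm's law gives a current V_i * G_ij through each device, and Kirchhoff's
# current law sums those currents along each column, so the column currents
# are the matrix-vector product G^T V, computed where the data is stored.
I = G.T @ V
print(I)  # amperes, one value per output column
```

In the digital sketch above, the same product costs one multiply-accumulate per matrix entry; here every memristor contributes its multiplication simultaneously, which is why the array handles large matrix calculations so efficiently.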
Wei Lu stands with first author Seung Hwan Lee, an electrical engineering PhD student, who holds the memristor array. Image credit: Robert Coelius, Michigan Engineering