Explained: Neural Networks
April 17, 2017 | MIT
Estimated reading time: 7 minutes
By the 1980s, however, researchers had developed algorithms for modifying neural nets’ weights and thresholds that were efficient enough for networks with more than one layer, removing many of the limitations identified by Minsky and Papert. The field enjoyed a renaissance.
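To give a flavor of what those training algorithms do, here is a minimal sketch, in plain NumPy with invented hyperparameters, of gradient-descent weight updates for a two-layer network learning XOR, a task famously beyond single-layer perceptrons. It illustrates the general idea of propagating error backward through more than one layer, not any particular historical implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 8))   # input -> hidden weights (width 8 is arbitrary)
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: push the output error back through the hidden layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates to weights and thresholds (biases).
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should approach XOR's truth table: [[0], [1], [1], [0]]
```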
But intellectually, there’s something unsatisfying about neural nets. Enough training may revise a network’s settings to the point that it can usefully classify data, but what do those settings mean? What image features is an object recognizer looking at, and how does it piece them together into the distinctive visual signatures of cars, houses, and coffee cups? Looking at the weights of individual connections won’t answer that question.
In recent years, computer scientists have begun to come up with ingenious methods for deducing the analytic strategies adopted by neural nets. But in the 1980s, the networks’ strategies were indecipherable. So around the turn of the century, neural networks were supplanted by support vector machines, an alternative approach to machine learning that’s based on some very clean and elegant mathematics.
The recent resurgence in neural networks — the deep-learning revolution — comes courtesy of the computer-game industry. The complex imagery and rapid pace of today’s video games require hardware that can keep up, and the result has been the graphics processing unit (GPU), which packs thousands of relatively simple processing cores on a single chip. It didn’t take long for researchers to realize that the architecture of a GPU is remarkably like that of a neural net.
Modern GPUs enabled the one-layer networks of the 1960s and the two- to three-layer networks of the 1980s to blossom into the 10-, 15-, even 50-layer networks of today. That’s what the “deep” in “deep learning” refers to — the depth of the network’s layers. And currently, deep learning is responsible for the best-performing systems in almost every area of artificial-intelligence research.
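To make "depth" concrete, here is a minimal sketch, again in plain NumPy with layer sizes invented for illustration, of the structural point: a deep network is the same layer pattern repeated many times. The weights here are random, so the output is meaningless; only the shape of the computation matters.

```python
import numpy as np

rng = np.random.default_rng(0)

def deep_forward(x, n_layers=15, width=64):
    """Pass input x through n_layers fully connected layers with ReLU."""
    h = x
    for _ in range(n_layers):
        W = rng.normal(scale=1.0 / np.sqrt(h.shape[-1]),
                       size=(h.shape[-1], width))
        h = np.maximum(0.0, h @ W)  # ReLU plays the role of a threshold
    return h

x = rng.normal(size=(1, 32))   # one input with 32 features
print(deep_forward(x).shape)   # (1, 64) after 15 stacked layers
```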
Under the hood
The networks’ opacity is still unsettling to theorists, but there’s headway on that front, too. In addition to directing the Center for Brains, Minds, and Machines (CBMM), Poggio leads the center’s research program in Theoretical Frameworks for Intelligence. Recently, Poggio and his CBMM colleagues have released a three-part theoretical study of neural networks.
The first part, which was published last month in the International Journal of Automation and Computing, addresses the range of computations that deep-learning networks can execute and when deep networks offer advantages over shallower ones. Parts two and three, which have been released as CBMM technical reports, address the problems of global optimization, or guaranteeing that a network has found the settings that best accord with its training data, and overfitting, or cases in which the network becomes so attuned to the specifics of its training data that it fails to generalize to other instances of the same categories.
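Overfitting is easy to demonstrate outside neural networks. The sketch below, with data sizes and polynomial degrees invented for illustration, fits two polynomials to the same noisy training points: the more flexible model matches its training data better but typically generalizes worse to fresh data, which is exactly the failure the CBMM work analyzes.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(-1, 1, 20))
y_train = np.sin(3 * x_train) + rng.normal(scale=0.2, size=20)  # noisy samples
x_test = np.linspace(-1, 1, 200)
y_test = np.sin(3 * x_test)                                     # true function

for degree in (3, 15):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit polynomial of this degree
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```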
There are still plenty of theoretical questions to be answered, but CBMM researchers’ work could help ensure that neural networks finally break the generational cycle that has brought them in and out of favor for seven decades.