Explained: Neural Networks
April 17, 2017 | MIT | Estimated reading time: 7 minutes
By the 1980s, however, researchers had developed algorithms for modifying neural nets’ weights and thresholds that were efficient enough for networks with more than one layer, removing many of the limitations identified by Minsky and Papert. The field enjoyed a renaissance.
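To make that concrete, here is a minimal sketch of the kind of gradient-based training that works for networks with more than one layer. The toy XOR task, layer sizes, and learning rate are illustrative assumptions, not details from the research described in this article.

```python
# Minimal sketch: train a two-layer network by gradient descent.
# The XOR data, network width, and learning rate are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # toy inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # hidden-layer weights and thresholds
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # output-layer weights and thresholds
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5  # learning rate

for step in range(5000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: push the output error back to every weight and threshold.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # predictions should move toward [0, 1, 1, 0]
```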
But intellectually, there’s something unsatisfying about neural nets. Enough training may revise a network’s settings to the point that it can usefully classify data, but what do those settings mean? What image features is an object recognizer looking at, and how does it piece them together into the distinctive visual signatures of cars, houses, and coffee cups? Looking at the weights of individual connections won’t answer that question.
In recent years, computer scientists have begun to come up with ingenious methods for deducing the analytic strategies adopted by neural nets. But in the 1980s, the networks’ strategies were indecipherable. So around the turn of the century, neural networks were supplanted by support vector machines, an alternative approach to machine learning that’s based on some very clean and elegant mathematics.
The recent resurgence in neural networks — the deep-learning revolution — comes courtesy of the computer-game industry. The complex imagery and rapid pace of today’s video games require hardware that can keep up, and the result has been the graphics processing unit (GPU), which packs thousands of relatively simple processing cores on a single chip. It didn’t take long for researchers to realize that the architecture of a GPU is remarkably like that of a neural net.
Modern GPUs enabled the one-layer networks of the 1960s and the two- to three-layer networks of the 1980s to blossom into the 10-, 15-, even 50-layer networks of today. That’s what the “deep” in “deep learning” refers to — the depth of the network’s layers. And currently, deep learning is responsible for the best-performing systems in almost every area of artificial-intelligence research.
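To see what that depth amounts to, the sketch below runs a single input through a 50-layer stack, one matrix multiplication and nonlinearity per layer. The layer count, widths, ReLU activation, and initialization are illustrative assumptions, not a particular published architecture.

```python
# Toy illustration of "depth": a forward pass through 50 stacked layers.
import numpy as np

rng = np.random.default_rng(1)
depth, width = 50, 64
# Scaled initialization keeps activations from vanishing or exploding
# as they pass through many layers.
layers = [rng.normal(scale=np.sqrt(2.0 / width), size=(width, width))
          for _ in range(depth)]

x = rng.normal(size=width)       # a single input vector
for W in layers:                 # one matrix multiply plus ReLU per layer
    x = np.maximum(0.0, W @ x)

print(f"after {depth} layers, activation norm = {np.linalg.norm(x):.2f}")
```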
Under the hood
The networks’ opacity is still unsettling to theorists, but there’s headway on that front, too. In addition to directing the Center for Brains, Minds, and Machines (CBMM), Poggio leads the center’s research program in Theoretical Frameworks for Intelligence. Recently, Poggio and his CBMM colleagues have released a three-part theoretical study of neural networks.
The first part, which was published last month in the International Journal of Automation and Computing, addresses the range of computations that deep-learning networks can execute and when deep networks offer advantages over shallower ones. Parts two and three, which have been released as CBMM technical reports, address the problems of global optimization, or guaranteeing that a network has found the settings that best accord with its training data, and overfitting, or cases in which the network becomes so attuned to the specifics of its training data that it fails to generalize to other instances of the same categories.
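As a generic illustration of overfitting (not taken from the CBMM reports), the snippet below fits a high-degree polynomial to a handful of noisy training points: the training error all but vanishes while the error on held-out points drawn from the same curve grows. The degrees, sample sizes, and noise level are arbitrary assumptions.

```python
# Rough illustration of overfitting with polynomial regression.
import numpy as np

rng = np.random.default_rng(2)
true_fn = np.sin  # the underlying pattern behind the data

x_train = np.sort(rng.uniform(0, 3, size=15))
y_train = true_fn(x_train) + rng.normal(scale=0.1, size=15)
x_test = np.sort(rng.uniform(0, 3, size=200))
y_test = true_fn(x_test) + rng.normal(scale=0.1, size=200)

for degree in (3, 12):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit to training data only
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```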
There are still plenty of theoretical questions to be answered, but CBMM researchers’ work could help ensure that neural networks finally break the generational cycle that has brought them in and out of favor for seven decades.