Bringing Human-Like Reasoning to Driverless Car Navigation
May 23, 2019 | MIT
Estimated reading time: 5 minutes

With aims of bringing more human-like reasoning to autonomous vehicles, MIT researchers have created a system that uses only simple maps and visual data to enable driverless cars to navigate routes in new, complex environments.
Image caption: To bring more human-like reasoning to autonomous vehicle navigation, MIT researchers have created a system that enables driverless cars to check a simple map and use visual data to follow routes in new, complex environments. (Image: Chelsea Turner)
Human drivers are exceptionally good at navigating roads they haven’t driven on before, using observation and simple tools. They simply match what they see around them to what they see on their GPS devices to determine where they are and where they need to go. Driverless cars, however, struggle with this basic reasoning. In every new area, the cars must first map and analyze all the new roads, which is very time-consuming. The systems also rely on complex maps, usually generated by 3-D scans, which are computationally intensive to generate and process on the fly.
In a paper being presented at this week’s International Conference on Robotics and Automation, MIT researchers describe an autonomous control system that “learns” the steering patterns of human drivers as they navigate roads in a small area, using only data from video camera feeds and a simple GPS-like map. Then, the trained system can control a driverless car along a planned route in a brand-new area, by imitating the human driver.
Like human drivers, the system also detects any mismatches between its map and the features of the road. This helps it determine whether its position, sensors, or mapping are incorrect, so it can correct the car’s course.
To train the system initially, a human operator controlled an automated Toyota Prius, equipped with several cameras and a basic GPS navigation system, to collect data from local suburban streets, including various road structures and obstacles. When deployed autonomously, the system successfully navigated the car along a preplanned path in a different forested area designated for autonomous vehicle tests.
“With our system, you don’t need to train on every road beforehand,” says first author Alexander Amini, an MIT graduate student. “You can download a new map for the car to navigate through roads it has never seen before.”
“Our objective is to achieve autonomous navigation that is robust for driving in new environments,” adds co-author Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. “For example, if we train an autonomous vehicle to drive in an urban setting such as the streets of Cambridge, the system should also be able to drive smoothly in the woods, even if that is an environment it has never seen before.”
Joining Rus and Amini on the paper are Guy Rosman, a researcher at the Toyota Research Institute, and Sertac Karaman, an associate professor of aeronautics and astronautics at MIT.
Point-to-Point Navigation
Traditional navigation systems process data from sensors through multiple modules customized for tasks such as localization, mapping, object detection, motion planning, and steering control. For years, Rus’s group has been developing “end-to-end” navigation systems, which process inputted sensory data and output steering commands, without a need for any specialized modules.
Until now, however, these models were strictly designed to safely follow the road, without any real destination in mind. In the new paper, the researchers advanced their end-to-end system to drive from a starting point to a set destination in a previously unseen environment. To do so, they trained their system to predict a full probability distribution over all possible steering commands at any given instant while driving.
The system uses a machine learning model called a convolutional neural network (CNN), commonly used for image recognition. During training, the system watches and learns how to steer from a human driver. The CNN correlates steering wheel rotations to road curvatures it observes through cameras and an inputted map. Eventually, it learns the most likely steering command for various driving situations, such as straight roads, four-way or T-shaped intersections, forks, and rotaries.
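To make that concrete, here is a minimal sketch of such a network in PyTorch, assuming a Gaussian-mixture head over steering angles; the layer sizes, the four-channel camera-plus-map input encoding, and the name SteeringDistributionNet are illustrative assumptions, not the authors’ published architecture.

```python
# Minimal sketch (assumptions noted above): a CNN ingests a camera frame
# stacked with a rasterized patch of the simple map and emits the parameters
# of a Gaussian mixture over steering angles, i.e., a full distribution over
# steering commands rather than a single value.
import torch
import torch.nn as nn

class SteeringDistributionNet(nn.Module):
    def __init__(self, n_modes: int = 3):
        super().__init__()
        # Shared convolutional trunk over 3 RGB camera channels + 1 map channel.
        self.features = nn.Sequential(
            nn.Conv2d(4, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Mixture head: a weight, mean, and log-variance for each mode.
        self.head = nn.Linear(48, 3 * n_modes)

    def forward(self, camera: torch.Tensor, map_patch: torch.Tensor):
        x = torch.cat([camera, map_patch], dim=1)   # (B, 4, H, W)
        params = self.head(self.features(x))
        logits, mu, log_var = params.chunk(3, dim=1)
        # Weights sum to 1; variances are kept positive via exp().
        return logits.softmax(dim=1), mu, log_var.exp()
```

Training a model like this would minimize the negative log-likelihood of the human driver’s recorded steering angle under the predicted mixture, which is what lets multimodal situations such as T-shaped intersections keep two distinct turn modes instead of collapsing into an averaged, impossible “straight ahead” command.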
“Initially, at a T-shaped intersection, there are many different directions the car could turn,” Rus says. “The model starts by thinking about all those directions, but as it sees more and more data about what people do, it will see that some people turn left and some turn right, but nobody goes straight. Straight ahead is ruled out as a possible direction, and the model learns that, at T-shaped intersections, it can only move left or right.”
What Does the Map Say?
In testing, the researchers fed the system a map with a randomly chosen route. When driving, the system extracts visual features from the camera, which enables it to predict road structures. For instance, it identifies a distant stop sign or line breaks on the side of the road as signs of an upcoming intersection. At each moment, it uses its predicted probability distribution of steering commands to choose the most likely one to follow its route.
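That selection step might look like the following sketch, which assumes the mixture output of the network above plus a desired steering direction derived from the routed map; the matching rule, the tolerance value, and the function name select_steering are hypothetical.

```python
# Hypothetical command-selection rule (not the paper's exact logic): among
# the predicted steering modes, pick the most probable one that is consistent
# with the direction the routed map calls for.
import numpy as np

def select_steering(weights, means, desired_angle, tolerance=0.3):
    """weights, means: per-mode mixture parameters (angles in radians).
    desired_angle: steering direction implied by the planned route."""
    consistent = [i for i, m in enumerate(means)
                  if abs(m - desired_angle) < tolerance]
    if not consistent:
        # No mode matches the route; fall back to the overall most likely mode.
        return means[int(np.argmax(weights))]
    return means[max(consistent, key=lambda i: weights[i])]

# Example: left/right modes at a T-shaped intersection, route says turn right.
weights, means = [0.45, 0.55], [-0.5, 0.5]
print(select_steering(weights, means, desired_angle=0.5))  # -> 0.5
```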
Importantly, the researchers say, the system uses maps that are easy to store and process. Autonomous control systems typically rely on LIDAR scans to create massive, complex maps; storing just the city of San Francisco takes roughly 4,000 gigabytes (4 terabytes) of data. For every new destination, the car must create new maps, which amounts to tons of data processing. Maps used by the researchers’ system, however, capture the entire world using just 40 gigabytes of data.
During autonomous driving, the system also continuously matches its visual data to the map data and notes any mismatches. Doing so helps the autonomous vehicle better determine where it is located on the road. And it ensures the car stays on the safest path if it’s being fed contradictory input information: If, say, the car is cruising on a straight road with no turns, and the GPS indicates the car must turn right, the car will know to keep driving straight or to stop.
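A simple version of that consistency check could look like the sketch below, reusing the steering mixture from the earlier network; the probability threshold, the fallback action, and the function name check_map_consistency are assumptions for illustration rather than the paper’s exact safety logic.

```python
# Hedged sketch of the map-vs-vision mismatch check: if the map commands a
# turn to which the camera-based steering distribution assigns almost no
# probability, the vehicle distrusts the instruction and defaults to a safe
# action (keep straight or stop) instead of turning.

def check_map_consistency(weights, means, map_command,
                          min_support=0.05, tolerance=0.3):
    """weights, means: predicted steering mixture from vision (radians).
    map_command: steering angle implied by the map's route."""
    support = sum(w for w, m in zip(weights, means)
                  if abs(m - map_command) < tolerance)
    if support < min_support:
        # Vision sees no road in the commanded direction: the position,
        # sensors, or map must be wrong, so do not take the turn.
        return "keep_straight_or_stop"
    return "follow_map"

# Straight road, but the GPS route says turn right: near-zero visual support.
print(check_map_consistency([0.97, 0.03], [0.0, 0.5], map_command=0.5))
```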
“In the real world, sensors do fail,” Amini says. “We want to make sure that the system is robust to different failures of different sensors by building a system that can accept these noisy inputs and still navigate and localize itself correctly on the road.”