Robotics Competition Generated Groundbreaking Research
June 12, 2015 | MIT
Estimated reading time: 5 minutes
Last weekend was the final round of competition in the U.S. Defense Advanced Research Projects Agency’s contest to design control systems for a humanoid robot that could climb a ladder, remove debris, drive a utility vehicle, and perform several other tasks related to a hypothetical disaster. The team representing MIT finished sixth out of a field of 25.
But before the competition, the team’s leader, Russ Tedrake, an associate professor of computer science and engineering, said, “I feel as if we’ve already won, because of all the amazing research our students did” — including a paper that won the overall best-paper award at the 2014 International Conference on Humanoid Robots.
Optima primed
In control theory, control of a dynamic system — such as a robot, an airplane, or a power grid — is often treated as an optimization problem. The trick is to contrive a mathematical function whose minimum value represents a desired state of the system. Control is then a matter of finding that minimum and figuring out how to continuously nudge the system back toward it.
Optimization problems can be enormously complex, so they’re frequently used for offline analysis — for example, to determine how well much simpler control algorithms will work. But from the get-go, Tedrake decided that the MIT team’s control algorithms would solve optimization problems on the fly. That required innovation on multiple fronts.
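To make that framing concrete, here is a minimal sketch of control-as-optimization: a cost function whose minimum corresponds to a desired state, and a controller that repeatedly nudges the system toward that minimum. The quadratic cost, weights, and gain are hypothetical illustrations, not the MIT team's actual formulation.

```python
import numpy as np

# Minimal sketch of control-as-optimization (illustrative only, not the
# MIT team's controller). The cost is a hypothetical quadratic whose
# minimum value corresponds to the desired state x_goal.
x_goal = np.array([1.0, 0.0])        # desired state (assumed)
Q = np.diag([2.0, 1.0])              # hypothetical state weights

def cost(x):
    """Cost function whose minimum represents the desired system state."""
    err = x - x_goal
    return err @ Q @ err

def control_step(x, gain=0.1):
    """Nudge the system back toward the cost minimum via the gradient."""
    grad = 2.0 * Q @ (x - x_goal)    # analytic gradient of the quadratic
    return x - gain * grad           # one small corrective step

x = np.array([0.0, 0.5])             # current state
for _ in range(50):                   # solve "on the fly", step by step
    x = control_step(x)
print(x)                              # converges toward x_goal
```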
See how the MIT team designed their robot to compete in the DARPA Robotics Challenge. (Video: CSAIL)
Pressure centers
Control of an autonomous robot can be divided, roughly, between two types of algorithms: a motion planner, which determines how a robot should go about executing a task, and a controller, which sends control signals to the robot’s joints during the task’s execution.
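In rough outline, that division of labor might look like the sketch below. The function names, the proportional controller, and the placeholder trajectory are hypothetical stand-ins, not the MIT team's actual software.

```python
# Hypothetical sketch of the two-level split described above: a slow
# motion planner decides how to execute a task, while a fast controller
# sends commands to the joints during execution.

def motion_planner(task, world_state):
    """Runs occasionally: returns a desired trajectory (list of joint targets)."""
    return [world_state["joints"]] * 10          # placeholder trajectory

def controller(target_joints, sensed_joints, kp=5.0):
    """Runs at a high rate: proportional commands toward the planned targets."""
    return [kp * (t - s) for t, s in zip(target_joints, sensed_joints)]

world = {"joints": [0.0, 0.0]}
trajectory = motion_planner("clear debris", world)   # planner output
torques = controller(trajectory[0], [0.1, -0.05])    # one control tick
```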
When a bipedal robot takes a step, its foot strikes the ground at a number of different points, which experience different forces over time. A function that factors in all those forces would be difficult to optimize, but it becomes much more tractable if the forces are treated as acting on each foot at a single point.
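That single-point simplification is essentially a center-of-pressure calculation. Here is a minimal numerical sketch; the contact locations and forces are made up for illustration.

```python
import numpy as np

# Minimal sketch of the single-point simplification: many ground-contact
# forces on a foot are replaced by one equivalent force acting at the
# center of pressure. Contact points and forces below are made up.
points = np.array([[0.00, 0.00],     # contact locations on the sole (m)
                   [0.20, 0.00],
                   [0.20, 0.10],
                   [0.00, 0.10]])
normal_forces = np.array([120.0, 200.0, 180.0, 100.0])   # newtons

net_force = normal_forces.sum()
# Center of pressure: the force-weighted average of the contact locations.
cop = (normal_forces[:, None] * points).sum(axis=0) / net_force
print(f"equivalent single contact at {cop} carrying {net_force:.0f} N")
```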
The MIT researchers found a way to generalize that approach to more complex motions in three dimensions. So their planner also factors in contacts between the surrounding environment and the robot's arms, or even objects the robot is manipulating.
Further, the planner considers the forces exerted by those contacts in six dimensions rather than three — adding rotational forces in three dimensions to the standard linear forces. It also factors in environmental constraints, such as avoiding collision with nearby objects or keeping the objects the robot is manipulating within view of its laser rangefinder.
Finally, it lumps all these factors together into one big optimization problem. So rather than planning points of contact and then calculating the resulting forces, the algorithm chooses just those points of contact that minimize displacement of the robot’s center of gravity, while still accommodating environmental constraints.
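A toy version of that "one big problem" formulation might look like the sketch below, which picks a contact point that minimizes center-of-mass displacement subject to a reachability constraint. The displacement model, the constraint, and the solver choice are illustrative assumptions, not the team's formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Toy sketch of "one big optimization": choose a contact point that
# minimizes displacement of the center of mass, subject to an
# environmental constraint. All numbers are illustrative assumptions.
com_now = np.array([0.0, 0.0])            # current center of mass (x, y)

def com_after_contact(p):
    """Hypothetical model: a contact placed at p shifts the CoM slightly."""
    return com_now + 0.1 * p

def objective(p):
    """Squared displacement of the center of mass for the chosen contact."""
    return float(np.sum((com_after_contact(p) - com_now) ** 2))

# Constraint: the contact must lie within a reachable disk of radius 0.8 m,
# standing in for real constraints like collision avoidance.
reach = {"type": "ineq",
         "fun": lambda p: 0.8 - np.linalg.norm(p - np.array([0.5, 0.2]))}

result = minimize(objective, x0=np.array([0.5, 0.2]), constraints=[reach])
print("chosen contact point:", result.x)
```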
Balancing act
The lower-level control algorithm, however, can’t afford to ignore the forces acting at individual points of contact. Early on, Tedrake set the ambitious goal of a system that could evaluate information from the robot’s sensors and readjust the trajectories of its limbs 1,000 times a second, or at a rate of one kilohertz.
That sounds daunting, but as Tedrake explains, past a certain point, the high sampling rate actually becomes an advantage. One one-thousandth of a second allows so little time for circumstances to change that the imposition of new constraints usually occurs piecemeal. From one sensor reading to the next, the algorithm rarely has to meet more than one or two new constraints, which it can usually manage with just a small adjustment.
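The sketch below illustrates that idea in miniature (it is not the team's actual solver): each tick reuses the previous tick's solution and makes only the smallest adjustment needed to satisfy whatever new constraint appeared since the last sensor reading.

```python
import numpy as np

# Miniature illustration of the kilohertz-controller idea (not the MIT
# team's solver): each 1 ms tick starts from the previous solution and
# corrects only for constraints that newly appeared, so the per-tick
# adjustment stays small.

def project_onto_halfspace(x, a, b):
    """Project x onto {x : a.x <= b}: the smallest move that satisfies it."""
    violation = a @ x - b
    return x if violation <= 0 else x - (violation / (a @ a)) * a

x = np.array([1.0, 1.0])              # previous tick's trajectory parameters
for tick in range(3):                  # a few 1 ms control ticks
    # Hypothetical new constraints; rarely more than one or two per tick.
    new_constraints = [(np.array([1.0, 0.0]), 0.9 - 0.1 * tick)]
    for a, b in new_constraints:
        x = project_onto_halfspace(x, a, b)   # small warm-started adjustment
    print(f"tick {tick}: x = {x}")
```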
As one test of the kilohertz controller, members of the MIT team instructed their robot to dismount from the utility vehicle they’d been using to test its driving skills; once it had transferred all its weight to one foot, they started jumping up and down on the vehicle’s fenders. The robot maintained its balance.
Human factors
For several of the robot’s tasks, the MIT researchers exploited the fact that the contest allowed human operators to communicate with their robots — although their communication links would be erratic.
Although the robot has an onboard camera, its chief sensor is a laser rangefinder, which fires pulses of light in different directions and measures the time they take to return. This produces a huge cloud of individual points — some of which belong to the same objects, and some of which don’t. Resolving that point cloud into distinct objects is an extremely difficult task, which computer vision researchers have been wrestling with for decades. It would be almost impossible to perform in real time.
So the MIT researchers built a library of generic geometric representations of objects the robot was likely to encounter — such as the fallen lumber whose removal was one of its tasks during the competition finals. The remote operator can look at an image captured by the robot's camera, select the appropriate representation from the library, and superimpose it on the point cloud produced by the laser rangefinder. Then the operator clicks the trackpad or mouse button twice to roughly indicate the ends of the objects in the image. Algorithms then automatically cluster points together according to the geometric models, picking out the individual objects that the robot will have to manipulate.
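A stripped-down version of that click-seeded clustering might look like this sketch, in which two operator clicks define a board's axis and rangefinder points near the segment between them are grouped into one object. The synthetic point cloud and the board radius are assumptions for illustration.

```python
import numpy as np

# Stripped-down sketch of click-seeded clustering (illustrative, not the
# team's code): two operator clicks give the rough ends of a piece of
# lumber, and points near the segment between them are grouped into that
# object. The cloud and radius below are made up.
rng = np.random.default_rng(0)
cloud = rng.uniform(-1, 2, size=(500, 3))            # synthetic point cloud

end_a = np.array([0.0, 0.0, 0.0])                    # operator's first click
end_b = np.array([1.5, 0.0, 0.0])                    # operator's second click
radius = 0.08                                        # assumed board half-width (m)

def dist_to_segment(p, a, b):
    """Distance from point p to the line segment from a to b."""
    ab = b - a
    t = np.clip((p - a) @ ab / (ab @ ab), 0.0, 1.0)  # closest point's parameter
    return np.linalg.norm(p - (a + t * ab))

mask = np.array([dist_to_segment(p, end_a, end_b) <= radius for p in cloud])
board_points = cloud[mask]                           # points assigned to the object
print(f"{len(board_points)} of {len(cloud)} points clustered as the board")
```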
When the robot enters a new environment, its rangefinder readings can tell it where nearby objects are. But it doesn’t know which are safe to step on. So the MIT researchers also developed an interface that allows the robot’s operator to click on a graphical representation of the robot’s surroundings, identifying flat surfaces that offer secure footholds.
From the robot’s sensor readings, the algorithm automatically determines the extent of the safe areas, by locating the first significant changes in altitude. So if the operator clicks at a single point on an uncluttered floor, the interface highlights an expanse of space that extends outward from that point to the first obstacles the rangefinder registers. Similarly, if the operator clicks a single point on one step of a staircase, the algorithm highlights most of the rest of the step, but stops short of its edges.
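In miniature, that region-growing behavior resembles a flood fill over a height map that stops at altitude jumps. The grid, the click location, and the flatness threshold below are illustrative assumptions, not the team's implementation.

```python
from collections import deque
import numpy as np

# Miniature sketch of the foothold interface's region growing (not the
# team's implementation): flood-fill a height map outward from the
# operator's click, stopping wherever the altitude changes sharply.
heights = np.zeros((8, 8))
heights[:, 5:] = 0.2                 # a step edge: the next stair tread
click = (3, 2)                       # operator clicks a point on the floor
threshold = 0.05                     # assumed max height change for "flat"

safe = {click}
frontier = deque([click])
while frontier:
    r, c = frontier.popleft()
    for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
        if (0 <= nr < 8 and 0 <= nc < 8 and (nr, nc) not in safe
                and abs(heights[nr, nc] - heights[r, c]) <= threshold):
            safe.add((nr, nc))       # same surface: extend the highlight
            frontier.append((nr, nc))

print(f"highlighted {len(safe)} cells of safe footing")  # stops at the step edge
```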
“If you look at what happened at DRC [DARPA Robotics Challenge], it was a lot of teleoperation, a lot of scripted pieces of movement, and then a human telling the robot which movement to execute in great detail,” says Emanuel Todorov, an associate professor of electrical engineering and computer science at the University of Washington. “Humans are smart, and at least for the time being, if you put them in the loop, they outperform the autonomous controllers Russ and others built. But eventually it’s going to turn the other way around, because these are complicated machines, and there’s only so much a human can figure out in real time. The approach that Russ was taking was in some sense the right approach. This is what robotics should look like five or 10 years from now.”