DeepFly3D: The Deep-Learning Way to Design Fly-Like Robots
October 11, 2019 | EPFL | Estimated reading time: 2 minutes

EPFL scientists have developed a deep-learning based motion-capture software that uses multiple camera views to model the movements of a fly in three dimensions. The ultimate aim is to use this knowledge to design fly-like robots.
“Just think about what a fly can do,” says Professor Pavan Ramdya, whose lab at EPFL’s Brain Mind Institute, with the lab of Professor Pascal Fua at EPFL’s Institute for Computer Science, led the study. “A fly can climb across terrain that a wheeled robot would not be able to.”
Flies aren’t exactly endearing to humans. We rightly associate them with less-than-appetizing experiences in our daily lives. But there is an unexpected path to redemption: robots. It turns out that flies have features and abilities that can inform a new design for robotic systems.
“Unlike most vertebrates, flies can climb nearly any terrain,” says Ramdya. “They can stick to walls and ceilings because they have adhesive pads and claws on the tips of their legs. This allows them to basically go anywhere. That's interesting also because if you can rest on any surface, you can manage your energy expenditure by waiting for the right moment to act.”
It was this vision of extracting the principles that govern fly behavior to inform the design of robots that drove the development of DeepFly3D, a motion-capture system for the fly Drosophila melanogaster, a model organism that is nearly ubiquitously used across biology.
In Ramdya’s experimental setup, a fly walks on top of a tiny floating ball—like a miniature treadmill—while seven cameras record its every movement. The fly’s top side is glued onto an unmovable stage so that it always stays in place while walking on the ball. Nevertheless, the fly “believes” that it is moving freely.
The collected camera images are then processed by DeepFly3D, a deep-learning software developed by Semih Günel, a PhD student working with both Ramdya’s and Fua’s labs. “This is a fine example of where an interdisciplinary collaboration was necessary and transformative,” says Ramdya. “By leveraging computer science and neuroscience, we’ve tackled a long-standing challenge.”
Different poses of the fruit fly Drosophila melanogaster are captured by multiple cameras and processed with the DeepFly3D software. Credit: P. Ramdya, EPFL.
What’s special about DeepFly3D is that it can infer the 3D pose of the fly—or even other animals—meaning that it can automatically predict poses and make behavioral measurements at unprecedented resolution for a variety of biological applications. The software doesn’t need to be calibrated manually, and it uses camera images to automatically detect and correct any errors it makes in its calculations of the fly’s pose. Finally, it also uses active learning to improve its own performance.
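The core idea of combining multiple camera views into a 3D pose can be illustrated with standard multi-view triangulation. The sketch below is a generic Direct Linear Transform (DLT) example, not DeepFly3D's actual implementation (which also handles network-based 2D keypoint detection, self-calibration, and error correction); the camera matrices and point here are toy values chosen for illustration.

```python
import numpy as np

def triangulate_point(projections, points_2d):
    """Recover one 3D point from its 2D projections in several
    calibrated cameras, via the Direct Linear Transform (DLT).

    projections: list of 3x4 camera projection matrices
    points_2d:   list of (x, y) image coordinates, one per camera
    """
    rows = []
    for P, (x, y) in zip(projections, points_2d):
        # Each view contributes two linear constraints on the
        # homogeneous 3D point X: x*(P[2]@X) = P[0]@X and
        # y*(P[2]@X) = P[1]@X.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    # Least-squares solution: right singular vector with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Toy example: two cameras observing the point (1, 2, 5).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])            # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # shifted on x
X_true = np.array([1.0, 2.0, 5.0, 1.0])
pts = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
X_est = triangulate_point([P1, P2], pts)
print(np.round(X_est, 3))  # recovers approximately [1. 2. 5.]
```

With seven cameras, as in Ramdya's rig, each tracked joint contributes fourteen such linear constraints, making the 3D estimate robust to noise or occlusion in any single view.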
DeepFly3D opens up a way to efficiently and accurately model the movements, poses, and joint angles of a fruit fly in three dimensions. This may inspire a standard way to automatically model 3D pose in other organisms as well.
“The fly, as a model organism, balances tractability and complexity very well,” says Ramdya. “If we learn how it does what it does, we can have important impact on robotics and medicine and, perhaps most importantly, we can gain these insights in a relatively short period of time.”