Mobileye’s Self-Driving Secret? 200PB of Data
January 10, 2022 | Intel | Estimated reading time: 2 minutes
Mobileye is sitting on a virtual treasure trove of driving data – some 200 petabytes' worth. Combined with Mobileye's state-of-the-art computer vision technology and highly capable natural language understanding (NLU) models, the dataset can deliver thousands of results within seconds, even for incidents that fall into the "long tail" of rare conditions and scenarios. This helps the AV system handle edge cases and thereby achieve the very high mean time between failure (MTBF) targeted for self-driving vehicles.
“Data and the infrastructure in place to harness it is the hidden complexity of autonomous driving. Mobileye has spent 25 years collecting and analyzing what we believe to be the industry’s leading database of real-world and simulated driving experience, setting Mobileye apart by enabling highly capable AV solutions that meet the high bar for mean time between failure,” said Prof. Amnon Shashua, Mobileye president and chief executive officer.
Mobileye’s database – believed to be the world’s largest automotive dataset – comprises more than 200 petabytes of driving footage, equivalent to 16 million 1-minute driving clips from 25 years of real-world driving. Those 200 petabytes are split between Amazon Web Services (AWS) and on-premises systems. The sheer size of Mobileye’s dataset makes the company one of AWS’s largest customers by volume stored globally.
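A quick back-of-envelope check of those figures (a sketch only; it assumes "PB" means 10^15 bytes, which the article does not specify) shows what the numbers imply per clip:

```python
# Hypothetical arithmetic based on the article's figures:
# 200 PB of footage described as 16 million one-minute clips.
total_bytes = 200 * 10**15   # assumption: 1 PB = 10**15 bytes
num_clips = 16_000_000

per_clip_gb = total_bytes / num_clips / 10**9
per_second_mb = per_clip_gb * 1000 / 60

print(f"{per_clip_gb:.1f} GB per 1-minute clip")      # ~12.5 GB
print(f"{per_second_mb:.0f} MB/s of sensor data")     # ~208 MB/s
```

Roughly 12.5 GB per minute of driving is consistent with multi-camera, high-resolution sensor capture rather than compressed consumer video.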
Large-scale data labeling is at the heart of building the powerful computer vision engines needed for autonomous driving. Mobileye’s rich and relevant dataset is annotated both automatically and manually by a team of more than 2,500 specialized annotators. The compute engine relies on 500,000 peak CPU cores in the AWS cloud to crunch 50 million datasets monthly – the equivalent of 100 petabytes processed every month, representing 500,000 hours of driving.
Data is only valuable if you can make sense of it and put it to use. This requires deep comprehension of natural language along with state-of-the-art computer vision, Mobileye’s long-standing strength.
Every AV player faces the “long tail” problem, in which a self-driving vehicle encounters something it has not seen or experienced before. The long tail is buried in large datasets, but many companies lack the tools to make sense of it effectively. Mobileye’s state-of-the-art computer vision technology, combined with highly capable NLU models, enables Mobileye to query the dataset and return thousands of long-tail results within seconds. Mobileye can then use those results to train its computer vision system and make it even more capable, dramatically accelerating the development cycle.
Mobileye’s team uses an in-house search engine database with millions of images, video clips and scenarios. They include anything from “tractor covered in snow” to “traffic light in low sun,” all collected by Mobileye and feeding its algorithms. (See sample images).
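Mobileye has not published how its in-house search engine works; the following is a minimal, hypothetical sketch of the general idea – indexing annotated clips and ranking them against a free-text query – using toy keyword matching in place of the learned NLU and vision models the article describes:

```python
from collections import defaultdict

# Hypothetical clip annotations (illustrative only, not Mobileye data).
clips = {
    "clip_001": "tractor covered in snow on rural road",
    "clip_002": "traffic light in low sun at intersection",
    "clip_003": "pedestrian crossing at night in rain",
}

def build_index(clips):
    """Build an inverted index: annotation token -> set of clip IDs."""
    index = defaultdict(set)
    for clip_id, annotation in clips.items():
        for token in annotation.lower().split():
            index[token].add(clip_id)
    return index

def search(index, query, top_k=2):
    """Rank clips by how many query tokens their annotations share."""
    scores = defaultdict(int)
    for token in query.lower().split():
        for clip_id in index.get(token, ()):
            scores[clip_id] += 1
    return sorted(scores, key=lambda c: (-scores[c], c))[:top_k]

index = build_index(clips)
print(search(index, "tractor in snow"))  # ['clip_001', 'clip_002']
```

A production system would replace the keyword index with learned embeddings so that a query like “tractor covered in snow” also retrieves clips whose annotations use different wording, but the retrieve-and-rank structure is the same.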
With access to the industry’s highest-quality data and the talent required to put it to use, Mobileye’s driving policy can make sound, informed decisions deterministically, an approach that removes the uncertainty of artificial intelligence-based decisions and yields a statistically high mean time between failure rate. At the same time, the dataset hastens the development cycle to bring the lifesaving promise of AV technology to reality more quickly.