Hon Hai Research Institute Unveils AI-enabled ModeSeq
July 14, 2025 | Hon Hai Technology Group
Hon Hai Research Institute (HHRI), an R&D powerhouse of Hon Hai Technology Group (Foxconn), the world’s largest electronics manufacturer and technology service provider, has been recognized for its competitive work in trajectory prediction in autonomous driving technology.
ModeSeq's landmark achievements, taking the top spot in the Waymo Open Dataset Challenge and being presented at CVPR 2025, among the world's most influential AI and computer vision conferences and a gathering point for top-tier tech firms, research institutions, and academic leaders, highlight HHRI's growing leadership and technical excellence on the international stage.
“ModeSeq empowers autonomous vehicles with more accurate and diverse predictions of traffic participant behaviors,” said Yung-Hui Li, Director of the Artificial Intelligence Research Center at HHRI. “It directly enhances decision-making safety, reduces computational cost, and introduces unique mode-extrapolation capabilities to dynamically adjust the number of predicted behavior modes based on scenario uncertainty.”
Figure 1: The ModeSeq workflow, showing how the model anticipates multiple possible future trajectories (highlighted by red vehicle icons and arrows). It progressively analyzes the scenario and assigns confidence scores (e.g., 0.2) to each potential path.
HHRI's Artificial Intelligence Research Center, in collaboration with City University of Hong Kong, presented "ModeSeq: Taming Sparse Multimodal Motion Prediction with Sequential Mode Modeling" on June 13 at CVPR 2025 (the IEEE/CVF Conference on Computer Vision and Pattern Recognition), where the paper was among the roughly 22% of submissions accepted.
The multimodal trajectory-prediction technology overcomes the limitations of prior methods by both preserving high performance and delivering diverse potential outcome paths. ModeSeq introduces sequential pattern modeling and employs an Early-Match-Take-All (EMTA) loss function to reinforce multimodal predictions. It encodes scenes using Factorized Transformers and decodes them with a hybrid architecture combining Memory Transformers and dedicated ModeSeq layers.
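To make the multimodal training objective concrete, the sketch below shows a standard winner-take-all style loss of the kind that EMTA refines: only the candidate trajectory that best matches the ground truth receives the regression penalty, leaving the other modes free to cover alternative futures. This is an illustrative simplification, not the paper's exact EMTA formulation; the function name and tensor shapes are assumptions.

```python
import numpy as np

def winner_take_all_loss(preds, conf, gt):
    """Illustrative winner-take-all loss for multimodal trajectory prediction.
    (EMTA, as described in the ModeSeq paper, refines this basic idea.)

    preds: (K, T, 2) array of K candidate trajectories over T timesteps
    conf:  (K,) predicted confidence scores (assumed to sum to 1)
    gt:    (T, 2) ground-truth trajectory
    """
    # Average displacement of each mode from the ground truth.
    errs = np.linalg.norm(preds - gt[None], axis=-1).mean(axis=1)  # (K,)
    best = errs.argmin()
    # Regression term: penalize only the best-matching mode, so the
    # remaining modes are not dragged toward the single observed future.
    reg = errs[best]
    # Classification term: push confidence mass toward the matched mode.
    cls = -np.log(conf[best] + 1e-9)
    return reg + cls
```

With one candidate exactly on the ground truth and full confidence assigned to it, the loss is essentially zero; diverse but plausible alternative modes are not penalized.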
The research team further refined it into Parallel ModeSeq, which claimed victory in the prestigious Waymo Open Dataset (WOD) Challenge – Interaction Prediction Track at the CVPR WAD Workshop. The team’s winning entry surpassed strong competitors from the National University of Singapore, University of British Columbia, Vector Institute for AI, University of Waterloo and Georgia Institute of Technology.
Building on their success from last year – where ModeSeq placed second globally in the 2024 CVPR Waymo Motion Prediction Challenge – this year’s Parallel ModeSeq emerged triumphant in the 2025 Interaction Prediction track.
Led by Director Li of HHRI's AI Research Center, in collaboration with Professor Jianping Wang's group at City University of Hong Kong and researchers from Carnegie Mellon University, ModeSeq outperforms previous approaches on the Motion Prediction Benchmark, achieving superior mAP and soft-mAP scores while maintaining comparable minADE and minFDE metrics.
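For readers unfamiliar with the benchmark metrics, minADE and minFDE have standard definitions in multimodal motion prediction: over K candidate trajectories, minADE is the smallest mean displacement from the ground truth across all timesteps, and minFDE is the smallest displacement at the final timestep. A minimal sketch (function name and shapes are assumptions):

```python
import numpy as np

def min_ade_fde(preds, gt):
    """Compute minADE and minFDE for multimodal trajectory predictions.

    preds: (K, T, 2) array of K candidate trajectories over T timesteps
    gt:    (T, 2) ground-truth trajectory
    """
    # Per-mode, per-timestep Euclidean distance to the ground truth.
    dists = np.linalg.norm(preds - gt[None], axis=-1)  # shape (K, T)
    min_ade = dists.mean(axis=1).min()  # best mode by average displacement
    min_fde = dists[:, -1].min()        # best mode by final displacement
    return min_ade, min_fde
```

Because both metrics take the minimum over modes, a model can score well while still producing diverse hypotheses, which is why mAP-style metrics (which also account for confidence ranking) are used alongside them on the Waymo benchmark.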
Figure 2: Director Yung-Hui Li (right) and Researcher Ming-Chien Hsu at CVPR 2025 presenting the latest advances in autonomous driving using ModeSeq.