Intel Gaudi, Xeon and AI PC Accelerate Meta Llama 3 GenAI Workloads
April 22, 2024 | Intel Corporation

Meta has launched Meta Llama 3, its next-generation large language model (LLM). As of launch day, Intel has validated its AI product portfolio for the first Llama 3 8B and 70B models across Intel® Gaudi® accelerators, Intel® Xeon® processors, Intel® Core™ Ultra processors and Intel® Arc™ graphics.
“Intel actively collaborates with the leaders in the AI software ecosystem to deliver solutions that blend performance with simplicity. Meta Llama 3 represents the next big iteration in large language models for AI. As a major supplier of AI hardware and software, Intel is proud to work with Meta to take advantage of models such as Llama 3 that will enable the ecosystem to develop products for cutting-edge AI applications,” said Wei Li, Intel vice president and general manager of AI Software Engineering.
As part of its mission to bring AI everywhere, Intel invests in the software and AI ecosystem to ensure that its products are ready for the latest innovations in the dynamic AI space. In the data center, Intel Gaudi and Intel Xeon processors with Intel® Advanced Matrix Extension (Intel® AMX) acceleration give customers options to meet dynamic and wide-ranging requirements.
Intel Core Ultra processors and Intel Arc graphics products provide both a local development vehicle and deployment across millions of devices. They support comprehensive software frameworks and tools, including PyTorch and Intel® Extension for PyTorch® for local research and development, and the OpenVINO™ toolkit for model development and inference.
About Llama 3 Running on Intel:
Intel’s initial testing and performance results for Llama 3 8B and 70B models use open source software, including PyTorch, DeepSpeed, Intel Optimum Habana library and Intel Extension for PyTorch to provide the latest software optimizations. For more performance details, visit the Intel Developer Blog.
Intel® Gaudi® 2 accelerators have optimized performance on Llama 2 models – 7B, 13B and 70B parameters – and now have initial performance measurements for the new Llama 3 model. With the maturity of the Intel Gaudi software, Intel easily ran the new Llama 3 model and generated results for inference and fine-tuning. Llama 3 is also supported on the recently announced Intel® Gaudi® 3 accelerator.
Intel Xeon processors address demanding end-to-end AI workloads, and Intel invests in optimizing LLM results to reduce latency. Intel® Xeon® 6 processors with Performance-cores (code-named Granite Rapids) show a 2x improvement in Llama 3 8B inference latency compared with 4th Gen Intel® Xeon® processors, and can run larger language models, such as Llama 3 70B, at under 100 ms per generated token.
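As a back-of-envelope illustration of what a per-token latency figure means in practice, the sketch below converts a fixed latency per generated token into throughput. The 100 ms figure comes from the claim above; the function name is ours, and this is not a measured Intel result.

```python
# Illustrative only: convert per-token generation latency to throughput.

def tokens_per_second(ms_per_token: float) -> float:
    """Throughput implied by a fixed latency per generated token."""
    return 1000.0 / ms_per_token

# "Under 100 ms per generated token" implies more than 10 tokens/s.
print(tokens_per_second(100))  # -> 10.0
```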
Intel Core Ultra processors and Intel Arc graphics deliver impressive performance for Llama 3. In an initial round of testing, Intel Core Ultra processors already generate tokens faster than typical human reading speed. Further, the Intel® Arc™ A770 GPU has Xe Matrix eXtensions (XMX) AI acceleration and 16GB of dedicated memory, providing exceptional performance for LLM workloads.
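To put the "faster than typical human reading speed" comparison in concrete terms, the sketch below converts a reading speed into an equivalent token rate. Both figures are assumptions of ours, not from the article: roughly 250 words per minute for a typical reader, and roughly 0.75 words per generated token as a common rough conversion for English text.

```python
# Assumed figures (not from the article): ~250 words/min reading speed,
# ~0.75 words per token as a rough English-text conversion.
WORDS_PER_MINUTE = 250.0
WORDS_PER_TOKEN = 0.75

def reading_speed_tokens_per_sec(wpm: float = WORDS_PER_MINUTE,
                                 words_per_token: float = WORDS_PER_TOKEN) -> float:
    """Token rate equivalent to a given human reading speed."""
    return wpm / 60.0 / words_per_token

# Under these assumptions, generating faster than ~5.6 tokens/s
# outpaces a typical reader.
print(round(reading_speed_tokens_per_sec(), 2))  # -> 5.56
```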
What’s Next: In the coming months, Meta expects to introduce new capabilities, additional model sizes and enhanced performance. Intel will continue to optimize performance for its AI products to support this new LLM.