Intel Gaudi, Xeon and AI PC Accelerate Meta Llama 3 GenAI Workloads
April 22, 2024 | Intel Corporation | Estimated reading time: 2 minutes

Meta has launched Meta Llama 3, its next-generation large language model (LLM). On launch day, Intel validated its AI product portfolio for the first Llama 3 8B and 70B models across Intel® Gaudi® accelerators, Intel® Xeon® processors, Intel® Core™ Ultra processors, and Intel® Arc™ graphics.
“Intel actively collaborates with the leaders in the AI software ecosystem to deliver solutions that blend performance with simplicity. Meta Llama 3 represents the next big iteration in large language models for AI. As a major supplier of AI hardware and software, Intel is proud to work with Meta to take advantage of models such as Llama 3 that will enable the ecosystem to develop products for cutting-edge AI applications,” said Wei Li, Intel vice president and general manager of AI Software Engineering.
As part of its mission to bring AI everywhere, Intel invests in the software and AI ecosystem to ensure that its products are ready for the latest innovations in the dynamic AI space. In the data center, Intel Gaudi and Intel Xeon processors with Intel® Advanced Matrix Extension (Intel® AMX) acceleration give customers options to meet dynamic and wide-ranging requirements.
Intel Core Ultra processors and Intel Arc graphics products provide both a local development vehicle and deployment across millions of devices, with support for comprehensive software frameworks and tools, including PyTorch and Intel® Extension for PyTorch®, used for local research and development, and the OpenVINO™ toolkit for model development and inference.
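For readers who want to try a local setup along these lines, the sketch below shows one way it might look: exporting a Llama 3 8B Instruct checkpoint to OpenVINO with the optimum-intel package and generating text on a client machine. The model ID, prompt, and settings are illustrative assumptions, not Intel's validated configuration.

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed Hugging Face model ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly;
# the resulting model runs on the client CPU by default.
model = OVModelForCausalLM.from_pretrained(model_id, export=True)

inputs = tokenizer("What does a solder mask on a PCB do?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```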
About Llama 3 Running on Intel:
Intel’s initial testing and performance results for Llama 3 8B and 70B models use open source software, including PyTorch, DeepSpeed, Intel Optimum Habana library and Intel Extension for PyTorch to provide the latest software optimizations. For more performance details, visit the Intel Developer Blog.
Intel® Gaudi® 2 accelerators have optimized performance on Llama 2 models – 7B, 13B and 70B parameters – and now have initial performance measurements for the new Llama 3 model. With the maturity of the Intel Gaudi software, Intel easily ran the new Llama 3 model and generated results for inference and fine-tuning. Llama 3 is also supported on the recently announced Intel® Gaudi® 3 accelerator.
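As an illustration of what that open-source stack looks like in practice, here is a rough fine-tuning skeleton built on the Optimum Habana library. It is a sketch only, assuming Gaudi (HPU) hardware, the optimum-habana package, and a "Habana/llama" GaudiConfig on the Hugging Face Hub; the model ID, dataset, and hyperparameters are placeholders, not Intel's benchmark recipe.

```python
from optimum.habana import GaudiConfig, GaudiTrainer, GaudiTrainingArguments
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B"  # assumed Hugging Face model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tiny placeholder dataset: causal-LM samples where the labels mirror the input IDs.
sample = tokenizer("Printed circuit boards interconnect electronic components.")
sample["labels"] = list(sample["input_ids"])
train_dataset = [sample] * 8

args = GaudiTrainingArguments(
    output_dir="llama3-gaudi-finetune",
    use_habana=True,               # run on Gaudi (HPU) devices
    use_lazy_mode=True,            # Gaudi lazy-execution graph mode
    per_device_train_batch_size=1,
    num_train_epochs=1,
)

trainer = GaudiTrainer(
    model=model,
    args=args,
    gaudi_config=GaudiConfig.from_pretrained("Habana/llama"),  # assumed Hub config name
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```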
Intel Xeon processors address demanding end-to-end AI workloads, and Intel invests in optimizing LLM results to reduce latency. Intel® Xeon® 6 processors with Performance-cores (code-named Granite Rapids) show a 2x improvement on Llama 3 8B inference latency compared with 4th Gen Intel® Xeon® processors, and the ability to run larger language models, such as Llama 3 70B, at under 100 ms per generated token.
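To make the "per generated token" metric concrete, the following sketch estimates generation latency on a Xeon CPU using Intel Extension for PyTorch with bfloat16, which exercises AMX where available. The model ID, prompt, and token count are assumptions for illustration, not Intel's measurement methodology.

```python
import time

import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B"  # assumed Hugging Face model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).eval()
model = ipex.optimize(model, dtype=torch.bfloat16)  # enable IPEX CPU optimizations

inputs = tokenizer("Summarize the role of surface finishes on a PCB.", return_tensors="pt")
new_tokens = 128

with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    model.generate(**inputs, max_new_tokens=4)  # warm-up pass
    start = time.perf_counter()
    out = model.generate(**inputs, max_new_tokens=new_tokens, min_new_tokens=new_tokens)
    elapsed = time.perf_counter() - start

generated = out.shape[-1] - inputs["input_ids"].shape[-1]
print(f"{1000 * elapsed / generated:.1f} ms per generated token")
```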
Intel Core Ultra and Intel Arc graphics deliver impressive performance for Llama 3. In an initial round of testing, Intel Core Ultra processors already generate text faster than typical human reading speed. Further, the Intel® Arc™ A770 GPU has Xe Matrix eXtensions (XMX) AI acceleration and 16GB of dedicated memory to provide exceptional performance for LLM workloads.
What’s Next: In the coming months, Meta expects to introduce new capabilities, additional model sizes and enhanced performance. Intel will continue to optimize performance for its AI products to support this new LLM.