Intel oneDNN AI Optimizations Enabled as Default in TensorFlow
May 26, 2022 | Intel | Estimated reading time: 2 minutes
In the latest release of TensorFlow 2.9, the performance improvements delivered by the Intel® oneAPI Deep Neural Network Library (oneDNN) are turned on by default. This applies to all Linux x86 packages and to CPUs with neural-network-focused hardware features, such as the AVX512_VNNI, AVX512_BF16, and AMX vector and matrix extensions found on 2nd Gen Intel Xeon Scalable processors and newer CPUs; these extensions maximize AI performance through efficient use of compute resources, improved cache utilization, and efficient numeric formatting. The oneDNN optimizations accelerate key performance-intensive operations such as convolution, matrix multiplication, and batch normalization, delivering up to 3x performance improvements compared with versions without oneDNN acceleration.
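The default can be confirmed or overridden through TensorFlow's TF_ENABLE_ONEDNN_OPTS environment variable, which is not mentioned in the announcement but is the standard switch for this path. The following is a minimal sketch, not Intel's or Google's example; the convolution workload is illustrative only.

# Minimal sketch (not from the article): checking and controlling the oneDNN
# default in a TensorFlow 2.9+ Linux x86 build. TF_ENABLE_ONEDNN_OPTS is read
# once, at import time, so it must be set before tensorflow is imported.
import os

# os.environ["TF_ENABLE_ONEDNN_OPTS"] = "0"   # uncomment to fall back to stock kernels

import tensorflow as tf

print("TensorFlow version:", tf.__version__)

# With oneDNN active, TensorFlow logs a notice such as "oneDNN custom
# operations are on ..." at import time. A CPU convolution is one of the
# performance-intensive operations the article says is accelerated:
with tf.device("/CPU:0"):
    images = tf.random.normal([32, 224, 224, 3])   # NHWC batch of images
    kernel = tf.random.normal([3, 3, 3, 64])       # 3x3 convolution, 64 filters
    out = tf.nn.conv2d(images, kernel, strides=1, padding="SAME")
    print("conv2d output shape:", out.shape)       # (32, 224, 224, 64)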
“Thanks to the years of close engineering collaboration between Intel and Google, optimizations in the oneDNN library are now default for x86 CPU packages in TensorFlow. This brings significant performance acceleration to the work of millions of TensorFlow developers without the need for them to change any of their code. This is a critical step to deliver faster AI inference and training and will help drive AI Everywhere,” said Wei Li, Intel vice president and general manager of AI and Analytics.
oneDNN performance improvements becoming available by default in the official TensorFlow 2.9 release will enable millions of developers who already use TensorFlow to seamlessly benefit from Intel software acceleration, leading to productivity gains, faster time to train and efficient utilization of compute. Additional TensorFlow-based applications, including TensorFlow Extended, TensorFlow Hub and TensorFlow Serving also have the oneDNN optimizations. TensorFlow has included experimental support for oneDNN since TensorFlow 2.5.
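Because the switch is read only at import time, comparing the accelerated and non-accelerated paths requires separate processes. Below is a rough, illustrative timing sketch; the matmul workload, matrix sizes, and iteration count are arbitrary assumptions, and the measured ratio will vary by CPU. It is not the benchmark behind the figures quoted above.

# Illustrative timing sketch (assumption: not the article's benchmark). It runs
# the same matmul workload in two child processes, once with oneDNN disabled
# (TF_ENABLE_ONEDNN_OPTS=0) and once with the TensorFlow 2.9 default.
import os
import subprocess
import sys
import textwrap

WORKLOAD = textwrap.dedent("""
    import time
    import tensorflow as tf

    a = tf.random.normal([2048, 2048])
    b = tf.random.normal([2048, 2048])
    tf.matmul(a, b)                       # warm-up run
    start = time.perf_counter()
    for _ in range(20):
        tf.matmul(a, b)
    print(f"{time.perf_counter() - start:.3f} s for 20 matmuls")
""")

for onednn in ("0", "1"):
    env = {**os.environ, "TF_ENABLE_ONEDNN_OPTS": onednn}
    result = subprocess.run([sys.executable, "-c", WORKLOAD],
                            env=env, capture_output=True, text=True)
    print(f"TF_ENABLE_ONEDNN_OPTS={onednn}: {result.stdout.strip()}")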
oneDNN is an open source cross-platform performance library of basic deep learning building blocks intended for developers of deep learning applications and frameworks. The applications and frameworks that are enabled by it can then be used by deep learning practitioners. oneDNN is part of oneAPI, an open, standards-based, unified programming model for use across CPUs as well as GPUs and other AI accelerators.
While there is an emphasis placed on AI accelerators like GPUs for machine learning and, in particular, deep learning, CPUs continue to play a large role across all stages of the AI workflow. Intel’s extensive software-enabling work makes AI frameworks, such as the TensorFlow platform, and a wide range of AI applications run faster on Intel hardware that is ubiquitous across most personal devices, workstations and data centers. Intel’s rich portfolio of optimized libraries, frameworks and tools serves end-to-end AI development and deployment needs while being built on the foundation of oneAPI.
The oneDNN-driven accelerations to TensorFlow deliver remarkable performance gains that benefit applications spanning natural language processing, image and object recognition, autonomous vehicles, fraud detection, and medical diagnosis and treatment, among others.
Deep learning and machine learning applications have exploded in number due to increases in processing power, data availability and advanced algorithms. TensorFlow has been one of the world’s most popular platforms for AI application development with over 100 million downloads. Intel-optimized TensorFlow is available both as a standalone component and through the Intel oneAPI AI Analytics Toolkit, and is already being used across a broad range of industry applications including the Google Health project, animation filmmaking at Laika Studios, language translation at Lilt, natural language processing at IBM Watson and many others.