Quick Memory
August 10, 2018 | University of Texas at Arlington

Computer memory capacity has expanded greatly, allowing machines to hold and process far more data, but each trip the central processing unit, or CPU, makes to that memory is comparatively slow, and the delay negates the gains that a large memory provides.
To counteract this issue, which is known as a memory wall, computers use a cache, a hardware component that stores copies of recently accessed data so that future requests for it can be served faster. Song Jiang, an associate professor in the Department of Computer Science and Engineering at The University of Texas at Arlington, is using a three-year, $345,000 grant from the National Science Foundation to explore how to make better use of the cache by allowing programmers to access it directly in software.
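To make the idea concrete, here is a minimal sketch in Python of what an application-managed cache might look like. The class name, interface, and least-recently-used eviction policy are illustrative assumptions, not Jiang's actual design; the point is that the program itself, rather than a fixed hardware policy, decides what to keep and when to discard it.

```python
from collections import OrderedDict

class SoftwareCache:
    """Hypothetical application-managed cache: the program chooses
    what is worth keeping, instead of relying on hardware policy."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # insertion order tracks recency

    def get(self, key):
        if key not in self.entries:
            return None                # miss: caller fetches from slow memory
        self.entries.move_to_end(key)  # hit: mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        self.entries[key] = value
        self.entries.move_to_end(key)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used

cache = SoftwareCache(capacity=2)
cache.put("a", 1); cache.put("b", 2)
cache.get("a")     # touch "a" so "b" becomes the eviction candidate
cache.put("c", 3)  # evicts "b", the least recently used entry
```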
“Efficient use of a software-defined cache allows quick access to data along with large memory. With memory becoming more expansive, we need to involve programmers to make it more efficient. The programmer knows best how to use the cache for a particular application, so they can add efficiency without making the cache a burden,” Jiang said.
When a computer accesses its memory, it must walk through an index of all the data stored there, and it must repeat that walk each time it goes back to the memory. Each step slows the process. With a software-defined cache, the computer can combine or skip steps to reach the data it needs without starting over from the beginning of the index each time, as the sketch below illustrates. Jiang has studied these issues for several years and has developed four prototypes, which he will test to determine whether they can serve large memories without slowing CPU speeds.
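That step-skipping can be pictured with a toy example. In this Python sketch, the index layout, names, and two-level structure are invented for illustration: the first access pays for the full index walk, and the resolved location is cached so later accesses jump straight to the data.

```python
# Invented two-level layout: a key resolves through an index to a
# (segment, offset) location, standing in for a multi-step index walk.
index = {"user:42": ("segment-3", 1024)}
segments = {"segment-3": {1024: "record for user 42"}}

location_cache = {}  # software-managed cache of resolved locations

def read(key):
    loc = location_cache.get(key)
    if loc is None:
        loc = index[key]           # slow path: full index walk
        location_cache[key] = loc  # remember where the data lives
    segment, offset = loc          # fast path skips the walk entirely
    return segments[segment][offset]

print(read("user:42"))  # first call walks the index
print(read("user:42"))  # second call uses the cached location
```

A real system would also have to invalidate cached locations when data moves, which is part of what makes designing such caches hard.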
The current trend in technology is toward non-volatile memory, or NVM, which is expected to be much denser and less expensive and to provide many terabytes of memory. Access speeds will not change much, but capacity will expand greatly, which will also lengthen the time needed to go through the index. If Jiang is successful, access speeds will keep pace with that growth.
“As we ask our computer systems to work with increasingly large data sets, speed becomes an issue. Dr. Jiang’s work could provide a breakthrough in how software developers approach software-defined caches and, as a result, make it easier and less time-consuming to analyze big data,” said Hong Jiang, chair of UTA’s Computer Science and Engineering Department.
Written by Jeremy Agor