Speeding Up the Machine Learning Process
October 17, 2019 | KAUST | Estimated reading time: 4 minutes
At a time when big data reigns supreme, training machine learning algorithms to perform certain tasks is often costly and time-consuming. At KAUST’s Visual Computing Center, computer scientist Peter Richtárik and his colleagues have developed a new method for training models with greater efficiency, accuracy and flexibility.
Their method, known as the arbitrary sampling paradigm, provides a shortcut for training machine learning algorithms that use large datasets, which usually take huge amounts of computing power to process. The approach allows practitioners to pinpoint the most useful subset to work with and its optimal size for a given scenario.
“The method specifies which functions in the dataset should be sampled more often and by how much,” says Richtárik. “This means that practitioners can choose the procedure that works best for them, whether they are using a single computer or a distributed network.”
In standard machine learning procedures, algorithms learn to model datasets by working through the data in repeated steps. In each step, the computer reads every data point before updating and improving its performance in the following step. When analyzing images, for example, the computer must process each pixel in each step in order to reach the closest approximation of the actual data.
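To see what a full pass looks like in practice, here is a minimal sketch of standard full-batch training on a least-squares model; the model, function name and learning rate are illustrative assumptions, not the researchers' setup. The point is simply that every data point is read on every step.

```python
import numpy as np

def full_batch_gradient_descent(X, y, steps=100, lr=0.1):
    """Illustrative full-pass training on a least-squares model."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        # Every one of the n data points is read on every step --
        # exactly the cost that sampling-based methods try to reduce.
        grad = X.T @ (X @ w - y) / n
        w = w - lr * grad
    return w
```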
To evaluate how accurately an algorithm models the dataset, loss functions are used to measure the error the model makes on each data point. The average of these losses needs to be minimized to ensure the machine learning model mirrors the actual dataset accurately.
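In optimization terms, training boils down to minimizing an average of per-example losses. A standard way to write this finite-sum objective (with symbols chosen here for illustration) is:

```latex
\min_{x \in \mathbb{R}^d} \; f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x)
```

where each f_i measures the loss on the i-th data point and n is the number of training examples.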
“That’s an astronomical number of data points for the computer to process. It’s also extremely costly and takes a lot of time,” says Richtárik.
With his colleague Xun Qian, Richtárik applied the arbitrary sampling paradigm to stochastic gradient descent (SGD), a widely used machine learning algorithm that samples the data uniformly and takes repeated steps along the steepest path to optimal performance.
Using arbitrary sampling, the researchers estimated the steepest direction from small subsets of the data, calculating a rough approximation of the correct direction at a fraction of the cost. As a result, they were able to train the algorithm more efficiently than with the standard approach of processing the entire dataset at every step.
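The mechanical idea can be sketched as importance-weighted SGD: each data point is drawn with its own probability, and its gradient is rescaled so that, on average, the step still points in the correct direction. The least-squares loss and the norm-based probabilities below are illustrative assumptions, not the exact scheme from the papers.

```python
import numpy as np

def importance_sampled_sgd(X, y, probs, steps=1000, lr=0.05):
    """SGD sketch where point i is drawn with probability probs[i]."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        i = np.random.choice(n, p=probs)
        # Per-example gradient of a least-squares loss, rescaled by 1/(n * p_i)
        # so that its expectation equals the full-batch gradient.
        g_i = X[i] * (X[i] @ w - y[i])
        w = w - lr * g_i / (n * probs[i])
    return w

# Example of a non-uniform choice: sample rows in proportion to their squared norm.
# row_weights = np.linalg.norm(X, axis=1) ** 2
# probs = row_weights / row_weights.sum()
```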
Another problem Qian and Richtárik tackled with arbitrary sampling was finding which subset, or minibatch, size works most efficiently. In most cases, practitioners fix a single subset size for the whole training process, yet efficiency peaks at a certain size and drops off beyond it. Qian and Richtárik were able to cut through this trial and error by pinpointing the subset size that trains fastest.
“With arbitrary sampling, we show that optimizing the minibatch size can improve performance,” says Qian. “This means that larger steps can be applied, leading to faster convergence.”
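A rough sketch of how minibatch size and stepsize interact is below. The linear stepsize scaling is a simplification for illustration only; in the actual analysis, the admissible stepsize depends on problem-specific constants rather than on the batch size alone.

```python
import numpy as np

def minibatch_sgd(X, y, batch_size, steps=500, base_lr=0.01):
    """Minibatch SGD sketch: larger batches average away more gradient noise,
    which is what permits a larger stepsize in the theory."""
    n, d = X.shape
    w = np.zeros(d)
    lr = base_lr * batch_size  # illustrative scaling rule, not the paper's formula
    for _ in range(steps):
        idx = np.random.choice(n, size=batch_size, replace=False)
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch_size
        w = w - lr * grad
    return w
```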
In a second analysis, Qian and Richtárik applied arbitrary sampling to SAGA, a variance-reduced algorithm that stores gradient information acquired from each training step. While SAGA has faster convergence rates than SGD, developing a version that can be adapted to different situations had remained a challenge.
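For intuition, here is a minimal sketch of the basic SAGA update with uniform one-point sampling, the baseline that Qian and Richtárik generalize, again using an illustrative least-squares loss rather than the authors' exact setup.

```python
import numpy as np

def saga(X, y, steps=2000, lr=0.05):
    """SAGA sketch with uniform one-point sampling and a least-squares loss.
    A table of stored per-point gradients cancels most of the variance
    of the plain stochastic gradient."""
    n, d = X.shape
    w = np.zeros(d)
    table = np.zeros((n, d))          # last gradient seen for each data point
    table_mean = table.mean(axis=0)   # running average of the stored gradients
    for _ in range(steps):
        i = np.random.randint(n)
        g_new = X[i] * (X[i] @ w - y[i])
        # Variance-reduced estimate: fresh gradient minus the stored one,
        # plus the average of all stored gradients.
        g = g_new - table[i] + table_mean
        w = w - lr * g
        # Refresh the table entry and its running mean.
        table_mean += (g_new - table[i]) / n
        table[i] = g_new
    return w
```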
With the arbitrary sampling paradigm, Qian and Richtárik were able to speed up the convergence rate and reduce the number of steps in the process by optimizing the minibatch size and choosing the most effective way to sample the data.
“This is the first time the SAGA algorithm has been analyzed under the arbitrary sampling paradigm,” says Richtárik. “Before this approach, the algorithm sampled the data uniformly, one point at a time, at random.”
While arbitrary sampling improved the performance of convex versions of SGD and SAGA, which are relatively quick and simple to train, Richtárik and his Ph.D. student Samuel Horváth wanted to test the approach on nonconvex models, which are commonly used in deep neural networks. These types of algorithms are more powerful than convex models, but more difficult to train.
Using nonconvex versions of SAGA and two other methods, the stochastic variance reduced gradient (SVRG) and the StochAstic Recursive grAdient algoRithm (SARAH), Horváth and Richtárik were able to calculate the optimal sampling approach for each of the algorithms, speeding up the training process by an order of magnitude. The results were presented at the International Conference on Machine Learning earlier this year.
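A common recipe in this line of work is to sample each data point in proportion to how "difficult" it is, as measured by its smoothness constant. The helper below builds such probabilities for a least-squares loss; it is an illustrative construction under that assumption, not the exact formula from the paper.

```python
import numpy as np

def smoothness_proportional_probs(X):
    """Sampling probabilities proportional to per-example smoothness constants.
    For a least-squares loss f_i(w) = 0.5 * (x_i @ w - y_i)**2, the smoothness
    constant of f_i is ||x_i||^2, so 'harder' rows are sampled more often."""
    L = np.linalg.norm(X, axis=1) ** 2
    return L / L.sum()

# These probabilities can be plugged into an importance-sampled method,
# such as the SGD sketch above, in place of uniform sampling.
```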
“We tested our new methods on a few real-world datasets and models, and we were able to show an improvement not just in theory, but in practice,” says Horváth. “Our approach can be used on many variance-reduced models to make the training process faster.”
The next step for Richtárik and his team is to develop a way to mathematically unify arbitrary sampling with different algorithms, including quantized methods.
“We have taken a big step, but it’s only the first,” says Richtárik. “There is a whole world that is even more general than arbitrary sampling to explore in the next few years.”