Benchmarking machine learning frameworks

Mon, Nov 6, 2023

Benchmarking is essential to the successful integration, development, and maintenance of machine learning frameworks. Developers and maintainers should be able to effortlessly gauge how their frameworks perform relative to other implementations, earlier code versions, or different hardware boards, both in runtime performance and in other metrics. Despite this clear need, relatively few attempts have been made to address the issue [1]–[3]. Moreover, existing attempts are generally not well automated, so they carry high maintenance requirements and are of limited utility in everyday software development workflows.

At Collabora, we developed MLBench, a flexible system for benchmarking machine learning frameworks across different backends and boards. Given a set of model implementations, sets of parameters with which to run each model, and a set of datasets, the system automatically gathers benchmarking information for each combination of model, parameters, and dataset, and stores it in a database. This information is not limited to runtimes; it also includes metrics such as power consumption, memory and CPU usage, and precision.
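To make the workflow concrete, here is a minimal sketch of the kind of benchmarking loop described above: iterate over every combination of model, parameter set, and dataset, record runtime and peak memory, and store each result in a database. The names (`benchmark`, `run_model`, the table schema) are illustrative assumptions for this sketch, not MLBench's actual API.

```python
# Hypothetical sketch of a combinatorial benchmarking loop; the function
# and table names are illustrative, not MLBench's real interface.
import sqlite3
import time
import tracemalloc
from itertools import product

def run_model(model, params, dataset):
    # Stand-in for invoking a framework's inference path.
    return [model(x, **params) for x in dataset]

def benchmark(models, param_sets, datasets, db_path=":memory:"):
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS results "
        "(model TEXT, params TEXT, dataset TEXT, runtime_s REAL, peak_mem_kb REAL)"
    )
    # One benchmark run per (model, parameter set, dataset) combination.
    for (mname, model), (pname, params), (dname, data) in product(
        models.items(), param_sets.items(), datasets.items()
    ):
        tracemalloc.start()
        t0 = time.perf_counter()
        run_model(model, params, data)
        runtime = time.perf_counter() - t0
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        con.execute(
            "INSERT INTO results VALUES (?, ?, ?, ?, ?)",
            (mname, pname, dname, runtime, peak / 1024),
        )
    con.commit()
    return con

# Example: one toy "model", two parameter sets, two datasets -> 4 result rows.
con = benchmark(
    models={"scale": lambda x, factor: x * factor},
    param_sets={"p1": {"factor": 2}, "p2": {"factor": 3}},
    datasets={"small": list(range(10)), "large": list(range(1000))},
)
rows = con.execute("SELECT COUNT(*) FROM results").fetchone()[0]
print(rows)  # one row per combination
```

A real system would add collectors for power consumption, CPU usage, and model precision alongside runtime and memory, but the structure, a Cartesian product of configurations feeding a results database, is the same.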

Read the full article here.
