
Do AI-Driven Chemistry Labs Actually Work? New Metrics Promise Answers

Newly proposed guidelines will allow researchers to accurately assess the performance and potential of self-driving labs

by North Carolina State University

The fields of chemistry and materials science are seeing a surge of interest in “self-driving labs,” which use artificial intelligence (AI) and automated systems to expedite research and discovery. Researchers are now proposing a suite of definitions and performance metrics that will allow scientists, non-experts, and future users to better understand both what these new technologies do and how each one performs relative to other self-driving labs. The findings were recently published in Nature Communications.

Self-driving labs hold tremendous promise for accelerating the discovery of new molecules, materials, and manufacturing processes, with applications ranging from electronic devices to pharmaceuticals. While the technologies are still fairly new, some have been shown to reduce the time needed to identify new materials from months or years to days.


“Self-driving labs are garnering a great deal of attention right now, but there are a lot of outstanding questions regarding these technologies,” says Milad Abolhasani, corresponding author of a paper on the new metrics and an associate professor of chemical and biomolecular engineering at North Carolina State University. “This technology is described as being ‘autonomous,’ but different research teams are defining ‘autonomous’ differently. By the same token, different research teams are reporting different elements of their work in different ways. This makes it difficult to compare these technologies to each other, and comparison is important if we want to be able to learn from each other and push the field forward.

“What does Self-Driving Lab A do really well? How could we use that to improve the performance of Self-Driving Lab B? We’re proposing a set of shared definitions and performance metrics, which we hope will be adopted by everyone working in this space. The end goal will be to allow all of us to learn from each other and advance these powerful research acceleration technologies.

“For example, we seem to be seeing some challenges in self-driving labs related to the performance, precision, and robustness of some autonomous systems,” Abolhasani says. “This raises questions about how useful these technologies can be. If we have standardized metrics and reporting of results, we can identify these challenges and better understand how to address them.”

At the core of the new proposal are a clear definition of self-driving labs and seven performance metrics, which researchers would include in any published work related to their self-driving labs:

  • Degree of autonomy: how much guidance does a system need from users?
  • Operational lifetime: how long can the system operate without intervention from users?
  • Throughput: how long does it take the system to run a single experiment?
  • Experimental precision: how reproducible are the system’s results?
  • Material usage: what’s the total amount of materials used by a system for each experiment?
  • Accessible parameter space: to what extent can the system account for all of the variables in each experiment?
  • Optimization efficiency: discussed in more detail below

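As a rough illustration of what standardized reporting might look like in practice, the sketch below collects the seven metrics into a single record. The field names, units, and example values are hypothetical and are not drawn from the Nature Communications paper.

```python
from dataclasses import dataclass, asdict

# Hypothetical reporting record for the seven proposed metrics.
# Field names and units are illustrative only.
@dataclass
class SelfDrivingLabReport:
    degree_of_autonomy: str                 # how much guidance the system needs from users
    operational_lifetime_hours: float       # continuous runtime without user intervention
    throughput_min_per_experiment: float    # wall-clock time to run a single experiment
    experimental_precision_rsd: float       # relative standard deviation of repeat runs
    material_usage_ml_per_experiment: float # total material consumed per experiment
    accessible_parameter_space: str         # variables and ranges the system can explore
    optimization_efficiency: float          # gain over a baseline such as random sampling

# Example entry for a hypothetical platform
report = SelfDrivingLabReport(
    degree_of_autonomy="closed-loop with daily reagent refills",
    operational_lifetime_hours=72.0,
    throughput_min_per_experiment=12.5,
    experimental_precision_rsd=0.03,
    material_usage_ml_per_experiment=0.5,
    accessible_parameter_space="4 continuous + 2 categorical variables",
    optimization_efficiency=5.2,
)
print(asdict(report))
```
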
“Optimization efficiency is one of the most important of these metrics, but it’s also one of the most complex—it doesn’t lend itself to a concise definition,” Abolhasani says. “Essentially, we want researchers to quantitatively analyze the performance of their self-driving lab and its experiment-selection algorithm by benchmarking it against a baseline—for example, random sampling.

“Ultimately, we think having a standardized approach to reporting on self-driving labs will help to ensure that this field is producing trustworthy, reproducible results that make the most of AI programs that capitalize on the large, high-quality data sets produced by self-driving labs,” Abolhasani says.
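
To make the benchmarking idea concrete, here is a minimal sketch of how an experiment-selection strategy could be compared against random sampling on a simulated objective. The toy objective, the naive selection strategy, and the "acceleration factor" used to summarize the comparison are assumptions for illustration, not the definitions used in the paper.

```python
import random

def objective(x: float) -> float:
    """Simulated experiment outcome (higher is better), with measurement noise."""
    return -(x - 0.7) ** 2 + random.gauss(0, 0.01)

def experiments_to_reach(select_next, target: float, budget: int = 200) -> int:
    """Count experiments needed until the best observed value reaches the target."""
    history = []  # list of (x, y) pairs observed so far
    best = float("-inf")
    for n in range(1, budget + 1):
        x = select_next(history)
        y = objective(x)
        history.append((x, y))
        best = max(best, y)
        if best >= target:
            return n
    return budget

def random_sampling(history):
    # Baseline: pick the next experiment uniformly at random.
    return random.random()

def greedy_local_search(history):
    # Naive stand-in for a smarter experiment-selection algorithm:
    # perturb the best point observed so far.
    if not history:
        return random.random()
    best_x, _ = max(history, key=lambda xy: xy[1])
    return min(1.0, max(0.0, best_x + random.gauss(0, 0.1)))

random.seed(0)
target = -0.005
n_random = experiments_to_reach(random_sampling, target)
n_algo = experiments_to_reach(greedy_local_search, target)
print(f"random sampling: {n_random} experiments, algorithm: {n_algo}")
print(f"acceleration factor: {n_random / n_algo:.1f}x")
```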

- This press release was originally published on the North Carolina State University website
