Reproducibility in AI, and What Computing Professionals Should Know for Supporting Researchers

This article was written by Kevin Coakley and Alexandra Andreiu and was published on January 28, 2022, on the GO FAIR US website.


The “r” in FAIR, which stands for reusable, is often confused with reproducibility. Although the two have distinct meanings and definitions, they are closely related. Reproducibility goes beyond the ability to reuse data, software, or other research artifacts: it describes a core tenet of science, the ability to reproduce one’s research in order to validate scientific findings. This has been a persistent challenge for data-driven research, as published work often omits methods needed to recreate an experiment, and access to the data, or to the compute needed to process that data, is frequently missing as well. GO FAIR is especially concerned with making data machine-actionable, notably as a dimension of AI-readiness. Reproducibility becomes that much more complex in the realm of machine learning and deep learning. In his talk titled “Reproducibility in AI, and What Computing Professionals Should Know for Supporting Researchers”, Kevin Coakley discusses this challenge. Kevin Coakley is a Senior Integration Engineer at the San Diego Supercomputer Center whose research focuses on reproducibility and AI.



Within Artificial Intelligence and Machine Learning, reproducibility is even more important because the computing environment in which an experiment was performed can greatly affect the results. Different Linux distributions ship different versions of software, so the same experiment can produce different results depending on the distribution and libraries used. Generally, reproducing an AI experiment requires a description of the experiment along with the code and data; however, a thorough description of the computing environment is often omitted. We must quantify these inter-laboratory and computer-architecture differences, since they can greatly affect the performance of differing AI methodologies. Acknowledging these differences helps determine whether further contextual adjustment is needed or whether certain experiments should be tested in different computing environments.
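One practical way to address the missing environment description is to capture it automatically and archive it alongside the code and data. The sketch below is a minimal illustration of that idea, not a tool from the talk; the function name `capture_environment` and the choice of libraries to probe are assumptions for demonstration purposes.

```python
import json
import platform
import subprocess
import sys


def capture_environment():
    """Collect a snapshot of the computing environment for a reproducibility record."""
    env = {
        "os": platform.platform(),      # e.g. Linux distribution and kernel version
        "machine": platform.machine(),  # CPU architecture, e.g. x86_64 or aarch64
        "python": sys.version.split()[0],
    }
    # Record versions of common numerical libraries, if installed (illustrative list).
    for lib in ("numpy", "torch", "tensorflow"):
        try:
            env[lib] = __import__(lib).__version__
        except ImportError:
            env[lib] = "not installed"
    # "pip freeze" pins every installed package version for exact recreation.
    env["pip_freeze"] = subprocess.run(
        [sys.executable, "-m", "pip", "freeze"],
        capture_output=True, text=True,
    ).stdout.splitlines()
    return env


if __name__ == "__main__":
    # Print the snapshot as JSON so it can be saved with the experiment's outputs.
    print(json.dumps(capture_environment(), indent=2))
```

Publishing a snapshot like this with an experiment lets another lab compare environments directly and spot when a differing distribution, library version, or architecture might explain divergent results.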


