How AI is struggling
Recently, the journal Nature published a damning response to a study from Google Health, written by dozens of scientists who regularly appear in the journal. Google had been promoting a successful new trial of an AI system that detects breast cancer in medical images. The critics' central complaint was that almost no information about the code or the testing methodology had been published, which made the study look more like another flashy Big Tech promotion than a genuine real-world discovery.
A growing number of scientists are pushing back against the lack of transparency in AI research. Science relies on trust, which means disclosing how research is carried out so that other researchers can verify it, build on it, and extend it with new results.
In reality, relatively few studies are ever fully replicated, because researchers are more interested in running new experiments that produce new results. In exact sciences such as computer science, however, researchers are still expected to provide the code and data needed to repeat and rerun the same experiments.
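To make that expectation concrete, here is a minimal sketch of the kind of reproducibility scaffolding critics ask for: fixing every source of randomness and recording the software environment so a training run can be repeated by someone else. The function names and the choice of libraries (NumPy, PyTorch) are illustrative assumptions, not details from the Google Health study.

```python
import json
import platform
import random

import numpy as np
import torch


def set_seed(seed: int = 42) -> None:
    """Fix every source of randomness so a rerun reproduces the same results."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade speed for determinism in cuDNN convolution kernels.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


def record_environment(path: str = "environment.json") -> None:
    """Log the software versions a reviewer would need to replicate the run."""
    info = {
        "python": platform.python_version(),
        "numpy": np.__version__,
        "torch": torch.__version__,
        "cuda": torch.version.cuda,
    }
    with open(path, "w") as f:
        json.dump(info, f, indent=2)


if __name__ == "__main__":
    set_seed(42)
    record_environment()
    # Any training code placed below this point now runs deterministically
    # (up to hardware-level nondeterminism), so others can rerun the experiment.
```

Publishing a script like this alongside the dataset and model weights is a small cost compared with the credibility it buys; its absence was precisely what the Nature letter objected to.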
AI is under fire for a variety of reasons. On the one hand, it is a young technology that exploded in scale over the past decade and became an experimental science only a few years ago.
The lack of transparency prevents new AI models from being properly assessed for bias and consistency. On the other hand, AI is moving rapidly into real-world applications that affect people and their lives. A machine-learning model that works well in the lab may fail in the field and put people's well-being at risk. A stronger foundation for AI progress therefore lies in replicable research.