Is AI-enabled science too good to be true?
Scientific Writers Ltd
Medical and technical writing professionals for scientific communication material, from peer review papers to podcasts.
The promises and risks of AI in scientific research
The processing and analytical power of artificial intelligence (AI) has led researchers to readily adopt the technology in scientific studies. AI capabilities have also inspired scientists' optimism about the technology's potential for more efficient and effective research. However, there are concerns that this optimism can lead researchers to overlook the shortcomings of AI, such as its tendency to hallucinate and produce plausible-sounding results that later prove false.
Illusions of understanding
Lisa Messeri and M. J. Crockett closely examined the risks of using AI in scientific research in their article published in Nature. In this analysis, the authors categorised visions that can render researchers susceptible to illusions of understanding, where they believe that they grasp a concept more thoroughly than they do.
The authors found that individuals who rely on AI tools to overcome cognitive limitations, subjectivity and bias tend to overestimate their understanding of a subject. This reliance can create scientific monocultures, in which certain methods and ideas dominate at the expense of alternative approaches, making scientific research less innovative and more prone to error.
Scientists’ visions of AI
Messeri and Crockett identified four visions of AI-enabled research across the various stages of scientific investigation. In the first two visions, 1) Surrogates and 2) Quants, researchers believe AI tools can collect and analyse data too large and complex for humans to process effectively. In the other two, 3) Oracles and 4) Arbiters, scientists trust AI to search, summarise, analyse and evaluate existing literature and research findings more objectively than humans can.
The authors further identified three categories of illusions of understanding that stem from these four AI visions: illusions of explanatory depth, illusions of exploratory breadth, and illusions of objectivity. For example, a scientist using an AI Oracle to generate new hypotheses from existing research may experience an illusion of exploratory breadth. In this scenario, they might mistakenly believe that they are exploring all testable hypotheses, while in fact, they are only investigating a limited set of hypotheses that the AI tools can test.
Not so fast
Messeri and Crockett focused only on the four visions of AI because they considered them most relevant to the risks of developing illusions of understanding. However, other visions of AI might impact a researcher’s perception of their understanding. Future studies should examine other factors that might affect the understanding of a concept, such as the level of expertise and stage of training.
Why does it matter?
The four-visions framework encourages researchers to think about why they want to use AI in their research and to identify areas where they may lack understanding. The study also highlights the importance of assessing these risks now, while AI tools are only beginning to be applied in scientific research. Addressing them will be far more difficult once AI tools become well established in the research process.
Take-home messages
Guest author: Paul Jones, MSc.
This article was written as part of a series of 'journal club' summaries for Scientific Writers Ltd and is based on the following publication:
Authors: Messeri L. & Crockett M.J.
Journal: Nature
Date online: 6 March 2024
Other references: https://www.nature.com/articles/d41586-024-00639-y#ref-CR2