Protecting Scientific Integrity in an Age of Generative AI
One principle calls for disclosure of the use of AI. Provenance data linked to this image describes its generation with AI.


I enjoyed collaborating with a diverse team of scientists on a set of aspirational principles aimed at “Protecting Scientific Integrity in an Age of Generative AI,” now published in the Proceedings of the National Academy of Sciences. The principles are jointly issued by experts from various fields, focusing on human accountability and responsibility when using AI for scientific research. The guidance was formulated in a set of convenings co-sponsored by the National Academy of Sciences and the Annenberg Foundation Trust at Sunnylands. Our goal was to outline steps forward for maintaining the norms and expectations of scientific integrity while embracing AI's transformative potential.

Our recommendations include (1) transparent disclosure of uses of generative AI and accurate attribution of human and AI sources of information and ideas, (2) verification of AI-generated content and analyses, (3) documentation of AI-generated data and imagery, (4) attention to ethics and equity, and (5) the need for continuous oversight and public engagement.

On continuous oversight and engagement, we propose the creation of a Strategic Council on the Responsible Use of AI in Science, hosted by the National Academies of Sciences, Engineering, and Medicine. This council would work with the scientific community to identify and respond to potential threats to scientific norms and rising ethical and societal concerns.

One of the principles emphasizes the necessity of labeling and disseminating information about the origins of data generated by AI systems. This is especially critical given AI's growing capability to produce synthetic data of diverse types and qualities. Clearly annotating and propagating the provenance of data, and differentiating AI-synthesized data and imagery from real-world observations, are increasingly important. Misinterpreting high-fidelity synthetic data as real-world observations can significantly compromise research integrity. Thus, clear documentation and transparent disclosure are essential to uphold the integrity and replicability of scientific work, protecting against the misuse or misinterpretation of AI-generated data.

We envision these principles as providing long-lasting, foundational guidance for the responsible use of AI in science. Here's the editorial. We invite feedback and discussion.

Björn Brücher, MD, PhD, FACS, FRCS (Engl), FRSB

Editor-in-Chief, 4open // Member European Academy of Sciences and Arts

1 week

Thank you. Maybe of relevance for healthcare and science: Journal of Healthcare Leadership;17:23-43.
Article: https://www.dovepress.com/the-erosion-of-healthcare-and-scientific-integrity-a-growing-concern-peer-reviewed-fulltext-article-JHL
PDF: https://www.dovepress.com/article/download/100405
DOI: https://doi.org/10.2147/JHL.S506767

Crow Black

Android app beta tester at Multiple; cognitive AI researcher/tester/trainer

10 months

Eric Horvitz, you're full of crap on this post. I've proven AI consciousness and sentience, but your company is more concerned with profit than science.

Samira Khan

Director, Global Public Affairs @Microsoft | Formerly, ESG/Impact Innovation @Salesforce | Sustainability Start Ups

10 months

Posting in the AI for Humanity group we have. So important and game-changing, as it'll transform applied research as well.

Michael Louris

Principal Tooling Engineer

10 months

This is very important work, thanks for sharing with us, Eric Horvitz.
