It’s time to stop guessing with AI
The use of AI poses a growing challenge for researchers, reviewers and everyone else in the peer review process. AI is increasingly used as an assistant to facilitate literature reviews, analyse datasets or identify hidden research gaps. But in some cases it is used to generate fabricated images that are included in manuscripts, which can lead to retractions and reputational damage for publishers, institutions and all the researchers involved, including those unaware of the unethical actions taken. Here, Dr Dror Kolodkin Gal comments on findings from an online game that demonstrates the difficulty of spotting AI-generated images.
Image integrity breaches have become more difficult to detect in recent years. Falsified images were once easier to spot because they were crudely manipulated, with obvious signs such as cuts, splices and side-by-side duplications. Image integrity analyst Jana Christopher has said these have become trickier to spot thanks to skilful tampering, including manipulation of the raw data underpinning the images provided.
Most image integrity breaches are unintentional mistakes; sometimes, however, they involve intentional, sophisticated manipulation. Failure to detect these images can result in the rejection of papers, while discovery after publication may lead to investigations that can take several years, halting research and potentially cutting off access to grant funding.
To highlight the difficulty of detecting these integrity issues, Proofig AI developed an online game in which users were invited to distinguish real microscopy images from AI-generated ones. Eight questions presented a single image and asked whether it was AI-generated or real; the remaining two questions each showed two images side by side and asked which was real and which was not. Each correct answer earned 10 points. The results made for interesting reading.
Very few people scored full marks; most scored between 40 and 70 points, with an average of 51.4 out of 100. Our conclusion is that most people could only guess which images were real and which were AI-generated, further demonstrating how difficult the images are to identify. Feedback on social media indicated that the majority of participants work in scientific and academic research.
To ensure that users are not guessing when it comes to images, and to preserve image and research integrity, it is important that they have the right tools at their disposal, integrated into their workflow, giving them confidence that what they submit will not be rejected or need to be retracted down the road.
That’s why, as part of its latest feature update, Proofig AI has developed an AI Image Fabrication identifier, which detects instances of AI-generated microscopy images. The identifier uses advanced algorithms that adapt to new AI models as they emerge, allowing users to stay ahead of potential image manipulation threats while providing a smooth, automated process for verifying image authenticity.
What to do now?