AI Sand Castle Trap
@vaklove


This post starts as a very boring talk about science, but stick with it through the first few lines; we will get to the crazy science.


We live in great times where science is making one breakthrough after another. This is especially true now, when we have AI at our disposal, which creates limitless opportunities.


Recently I was enchanted by the following studies, courtesy of Google Scholar:


The one thing these articles have in common is the use of “vegetative electron microscopy” during the research. Most likely you are not an expert on the latest in electron microscopy. Neither am I. Fortunately, I am a modern person and when in doubt I ask my trusted companion ChatGPT to provide me an answer.


I asked for a detailed explanation which after a long page of text ended with: “Vegetative electron microscopy is a powerful tool in multiple scientific fields, allowing researchers to explore the fine details of vegetative cells, their subcellular structures, and interactions with the environment. It plays a crucial role in advancing plant science, microbiology, mycology, biotechnology, and ecology, ultimately contributing to innovations in medicine, agriculture, and materials science.”


But before you stop reading and go to your next party where you casually drop the term “vegetative electron microscopy” while nibbling on vegetables from a party tray and wondering where the pigs in a blanket are, you should also check the next set of scientific papers.



What’s so special about these two papers?


They, too, have in common the use of “vegetative electron microscopy”, but on top of that, the first paper was retracted!


The retraction note for Photodegradation of ibuprofen laden-wastewater using sea-mud catalyst/H2O2 system: evaluation of sonication modes and energy consumption reads: “The Publisher has retracted this article in agreement with the Editor-in-Chief. An investigation by the publisher found a number of articles, including this one, with a number of concerns, including but not limited to compromised peer review process, inappropriate or irrelevant references, containing nonstandard phrases or not being in scope of the journal. Based on the investigation’s findings the publisher, in consultation with the Editor-in-Chief therefore no longer has confidence in the results and conclusions of this article.”


The second paper had to be corrected, with this note added: “In addition, two phrases in the original publication were not appropriate. The authors would like to change ‘vegetative electron microscopy’ to ‘scanning electron microscopy’ and ‘extracellular cells’ to ‘extracellular membrane’.”


And this is where we have a problem, Houston.


A quick search on Google for “vegetative electron microscopy” reveals a different set of articles:


Pick your explanation: either a mistaken translation from Farsi or a badly scanned PDF document. In either case, this information was used to train AI (ChatGPT, to be specific), and AI is now happily not only repeating it but also providing a definition for it:

“‘Vegetative Electron Microscopy’ refers to the application of electron microscopy (EM) techniques to study vegetative cells, tissues, or organisms at the ultrastructural level. The term “vegetative” in this context typically refers to actively growing, non-reproductive cells in plants, fungi, bacteria, or other microorganisms. Unlike spores or reproductive structures, vegetative cells are metabolically active and often the primary agents of growth, photosynthesis, and nutrient assimilation.”
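The “badly scanned PDF” hypothesis is easy to picture. A minimal sketch (with invented filler text, purely for illustration): if a text extractor reads a two-column page row by row instead of column by column, it stitches each visual row together across the gutter, and two unrelated phrases fuse into one.

```python
# Toy illustration: naive row-wise extraction of a two-column page
# fuses phrases across the column gutter. The column text below is
# invented for demonstration only.

left_column = [
    "the distribution of",
    "mycelia in the vegetative",
    "state was examined",
]
right_column = [
    "sections were prepared for",
    "electron microscopy",
    "as described above",
]

# Correct, column-aware reading order: finish the left column first.
column_aware = " ".join(left_column + right_column)

# Naive row-wise extraction jumps across the gutter mid-sentence.
row_wise = " ".join(
    f"{left} {right}" for left, right in zip(left_column, right_column)
)

# The fused phrase only appears in the row-wise reading.
assert "vegetative electron microscopy" in row_wise
assert "vegetative electron microscopy" not in column_aware
```

Once a scan like this is OCR'd and indexed, the fused phrase looks like an ordinary technical term to anything that ingests the text wholesale.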


What’s worse, the term is now used in scientific papers, and in turn those papers are cited by others. All of these papers are available on the Internet; they were harvested by every AI maker, and now every model contains their contents. And that’s the biggest problem of all. When we talk about AI ethics, the problem of the human in the loop is often discussed, frequently in relation to weapons that kill people at their own discretion.


But with “vegetative electron microscopy”, we have an example where people removed themselves from the loop, with far-reaching consequences. What would once have been limited damage to a handful of people studying ‘ibuprofen laden-wastewater’ is now built into technology presented as ‘Artificial Intelligence’ and promoted for use by everyone.


Unlike Google search, where I found all of the above information, when I ask ChatGPT I get an authoritative answer that sounds good but is completely wrong. Even though one of the articles was retracted and the other corrected, that won’t change the AI model. And this is just one tiny example of the danger of AI: it removes us from the source of the information. It seems redundant to keep repeating that we don’t know how to make the models forget incorrect information. It would appear that no AI company cares about that detail.


ChatGPT is a Large Language Model. Nothing less, nothing more. It is not a knowledge model. Using it for learning, you are building a sand castle.


Despite the promise of AI making our lives easier, new patterns are emerging. With AI: a) it will take more energy to learn new things accurately, and b) it will take less energy to become stupid.

Christopher Hayes

Analytics Leader | Analytics & Data Strategy | Data Science | Certified Chief Data & AI Officer - Carnegie Mellon

1 day ago

LLMs come with the risk of spreading misinformation, but there are ways to reduce that: fact-checking, retrieval-augmented generation, and human oversight, to name a few. That said, if we have to put in all these safeguards just to trust the output, is it really saving time? In some cases, doing the research yourself might just be the better option. The real value of LLMs isn’t in replacing critical thinking, but in helping us ask better questions and get to answers faster. Lots to ponder here, great post Vaclav Vincalek

Always a fun and informative article. Another favorite test I have is to search Google for "baby peacock" (correctly termed peachicks). AI image generators have valiantly and vigorously created images of miniaturized peacocks, which now show up as search results. LLMs gobble up those AI hallucinations, and then it becomes "fact".

Feite Kraay

Author | Speaker | Ecosystem and Channel Sales Leader | IBM Champion | Quantum Enthusiast

1 day ago

We become stupid in two ways: One, believing the AI-spouted nonsense and two, just losing the cognitive ability to think and reason for ourselves. Microsoft has published a paper on that second point, not that it has slowed them down one bit in pushing Clippy Copilot into everything. I'd like to advocate for an AI "Kill switch" to be added to all the productivity apps including MS Office, Teams, Adobe Acrobat, Zoom, all of it. I don't want unsolicited crappy AI summaries, I want to engage with an AI of my choice, on my own terms, and only when I feel like it. Is that too much to ask for?

Ron Newell

Business Development

1 day ago

Vaclav, thank you for turning what could have been a very boring conversation into a stimulating one, and for helping us better understand how to watch for misinformation wrapped up in an AI report.
