Diaries of Confusion with Generative AI: How to Manually Protect Yourself from the Flood of LLM Responses
Image Credit: Generated by Microsoft Designer


In the 43rd edition of this newsletter, entitled “Fail with Generative AI: The Most Probable Instead of the Best Fit,” it was concluded that, due to the associated challenges in terms of “Technical Complexity, Costs, and Significant Efforts/Time,” many organizations struggle to design and implement “Customized Generative AI” systems that are built upon the “Best-Fit Foundation Models” and can turn the “Enterprise Data Assets” into “Actionable Knowledge” through the “Fine-Tuning” and “Retrieval-Augmented Generation (RAG)” strategies. These enterprises usually follow the shortest and easiest path to leverage the power of generative AI by subscribing to the already existing “General-Purpose Large Language Models” services. The most dangerous effect of this shortest and most straightforward approach is that it gives a “False Impression of Ultimate Accuracy” about the inaccurate knowledge these AI systems generate. General-purpose generative AI services usually compose a response containing the “Most Probable Tactics and Strategies”; this is a typical plan to “Fail with Generative AI,” which is a costly failure. These “Most Probable Tactics and Strategies” cannot be considered successful replacements or substitutes for the “Best-Fit Tactics and Strategies,” which can only be found by human intelligence, whether unaided or augmented and assisted by “Customized Generative AI” systems.

In this edition of the newsletter, the focus will be on how to protect yourself from becoming the direct victim of these automatically generated responses that contain the “Most Probable Tactics and Strategies,” which form a typical plan to “Fail with Generative AI.” Unfortunately, many industry reports show that most of the workforce currently uses general-purpose, public generative AI services without taking even a minimum level of precautions. However, even if “Data Privacy and Security Measures” are implemented, the risks associated with using the “Most Probable” outputs of generative AI cannot be neglected.

Focusing the discussion on decision-makers within enterprises: nowadays, when they ask their teams for a proposal for any initiative, it has become evident that the team’s first resort is to consult these general-purpose, public generative AI services. This trend is expected to keep growing. From the decision-maker’s point of view, this is simply a waste of time. For example, if decision-makers ask for a proposal for a strategic initiative, they do not want to receive a generic AI-generated response. Any one of their team members can generate such a response in less than two minutes, but it will lack the human touch and personalized insight that decision-makers are looking for, as well as the depth of human intelligence that should guide a two-year strategic initiative. It may be catastrophic to base a two-year strategic initiative on a two-minute effort with no human-intelligence intervention. That intervention is what identifies the best-fit tactics and strategies.

Those decision-makers can either use a specialized “AI Detector” tool that analyzes the received proposals and returns the percentage of “AI-Generated Text,” or use their “Intuitive Intelligence” to detect the AI-generated text themselves. The decision-makers must then determine the best course of action based on the results. “Manually and Easily,” and without support from tools, this AI-generated text is easy to detect: it usually appears too intuitive, describes common sense, covers only general cases, has a broad scope, lacks the required depth, or seems shallow, especially in specialized fields. Even the formatting and the heavy use of bullet points can give it away. With regular exposure to AI-generated text, it becomes easier to catch the formula.
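The surface-level cues listed above (heavy bullet-point formatting, stock generic phrasing) can be sketched as a crude screening heuristic. The function below is a minimal illustration only, not a validated AI detector; the phrase list and the signals it computes are arbitrary assumptions made for this example, and real “AI Detector” tools use far more sophisticated statistical methods.

```python
# A naive, illustrative screen for two surface cues often associated with
# generic LLM output: bullet-point density and stock filler phrases.
# The phrase list and signals here are hypothetical examples, not a real detector.
GENERIC_PHRASES = [
    "it is important to note",
    "in today's fast-paced world",
    "in conclusion",
    "furthermore",
]

def naive_ai_text_signals(text: str) -> dict:
    """Return crude surface signals for a piece of text.

    bullet_ratio: fraction of non-empty lines that start with a bullet marker.
    generic_phrase_hits: total occurrences of stock filler phrases.
    """
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    bullet_lines = sum(1 for ln in lines if ln.startswith(("-", "*", "•")))
    lowered = text.lower()
    phrase_hits = sum(lowered.count(p) for p in GENERIC_PHRASES)
    return {
        "bullet_ratio": bullet_lines / max(len(lines), 1),
        "generic_phrase_hits": phrase_hits,
    }
```

Such a score can at best flag a proposal for closer human review; as the article argues, the decisive judgment about depth and best fit still requires human intelligence.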

Hence, to conclude: even if “Data Privacy and Security Measures” are implemented, the risks associated with using the “Most Probable” outputs of generative AI cannot be neglected. Decision-makers can, “Manually and Easily” and without tool support, detect “AI-Generated Text,” because it usually appears too intuitive, describes common sense, covers only general cases, has a broad scope, lacks the required depth, and seems shallow, especially in specialized fields. Being able to manually detect these AI-generated outputs protects decision-makers against the daily “Flooding of the LLMs Responses” and, of course, reduces the “AI-Generated Daily Confusion.”


More articles by Dr. Ahmed S. ELSHEIKH - EDBAs, MBA/MSc
