How much benefit can Large Language Models (LLMs) bring to intelligence analysis?
Image credit: https://www.ujasusi.com/p/the-criminal-use-of-chatgpt-the-dark

In "Geopolitics of the Infosphere," a book I proudly co-authored with Professor Paolo Savona, the topic of Natural Language Processing (NLP) is also examined.

Today's advances in the field are driven by deep learning: neural networks designed to mimic the function of neurons in a human brain. Large Language Models (LLMs) built on these networks extract information from masses of unstructured textual data and express it in human-like communication.

LLMs learn statistical language patterns, such as the probability that specific words will be followed by certain others in a sentence, using training objectives like next-token prediction and "masked" language modeling to generate or complete text. Because neural networks are inherently probabilistic, LLMs have been dubbed "stochastic parrots": they encode neither a cause-and-effect understanding of the world nor the relationships between objects (so-called "pragmatic inference").
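To make these two training objectives concrete, here is a minimal, illustrative sketch using Hugging Face's transformers library. The gpt2 and bert-base-uncased checkpoints are standard public demo models, not anything tied to the subject of this post:

```python
# Minimal sketch of the two objectives mentioned above, using Hugging Face
# pipelines with public demo checkpoints (nothing domain-specific).
from transformers import pipeline

# Next-token (causal) prediction: GPT-2 continues a prompt by repeatedly
# sampling a probable following token.
generator = pipeline("text-generation", model="gpt2")
print(generator("The analyst concluded that", max_new_tokens=10)[0]["generated_text"])

# Masked-language modeling: BERT fills in a hidden token using context
# from both sides of the mask.
filler = pipeline("fill-mask", model="bert-base-uncased")
for candidate in filler("Intelligence analysis requires [MASK] evidence.")[:3]:
    print(candidate["token_str"], round(candidate["score"], 3))
```

Both snippets return ranked probabilities rather than verified facts, which is precisely the "stochastic parrot" point: the model reports what is likely to be said, not what is true.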

Although large language models and AI-enabled generative products are plentiful (OpenAI's ChatGPT, Microsoft's Bing chatbot, or Google's Bard), for operational use (military, law enforcement, intelligence, or diplomatic) it is more advantageous to train a custom LLM on military and government data, ensuring a higher level of domain expertise, accuracy, and relevance in the generated text.
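As a hedged sketch of what "training a custom LLM" can mean in practice, the snippet below fine-tunes a small public causal model on an in-house text file. The corpus name domain_reports.txt and the hyperparameters are placeholder assumptions; a real operational effort would involve far more data, rigorous evaluation, and security controls:

```python
# Illustrative fine-tuning sketch: adapt a small public causal LM to a
# domain corpus. "domain_reports.txt" is a hypothetical file of plain text,
# one document per line.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

corpus = load_dataset("text", data_files={"train": "domain_reports.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-llm", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    # mlm=False selects the causal (next-token) objective rather than masking.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```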

Once trained, the model can be used for intelligence-analysis applications, automatic report generation, or natural-language interfaces for command-and-control systems (for example, letting a drone analyze incoming data automatically, in real time, and generate mission-critical insights).
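Very schematically, the drone example might look like the sketch below: incoming messages are batched and handed to a model for a situation summary. query_model is a hypothetical stand-in for whatever locally hosted, domain-trained LLM endpoint such a system would actually call:

```python
# Illustrative-only sketch of a natural-language interface to streaming data.
def query_model(prompt: str) -> str:
    """Hypothetical call into a domain-trained LLM; replace with a real client."""
    raise NotImplementedError

def summarize_stream(events, window: int = 20):
    """Yield a model-written summary after every `window` incoming messages."""
    buffer: list[str] = []
    for event in events:
        buffer.append(event)
        if len(buffer) == window:
            prompt = ("Summarize the operational picture from these sensor "
                      "messages, flagging anything anomalous:\n"
                      + "\n".join(buffer))
            yield query_model(prompt)
            buffer.clear()
```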

More precisely, to calibrate LLMs to the needs of intelligence analysis, a model must be able to:

  • Reliably explain how it arrived at its conclusions, providing verifiable sources for its claims. GPT and other text-based models crudely encode word relationships without understanding semantic meaning. Interrogating a model's "knowledge" requires identifying which facts it inferred its answers from, why it believes those facts, and what evidence supports or contradicts its conclusions (a minimal retrieval-based sketch of this idea appears after this list).
  • Update in real time as new information arrives. Current base models are trained on a massive corpus over a long period, so the most up-to-date information they hold is locked in at training time.
  • Support lateral and counterfactual reasoning. Hybrid architectures, such as neurosymbolic networks that combine the statistical inference power of neural networks with the logic and interpretability of symbolic processing, offer significant potential here.
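As a toy illustration of the first two requirements, the following sketch grounds answers in an updatable document store and returns them with the sources they came from (retrieval-augmented generation). Everything here is a placeholder assumption: the token-overlap scoring stands in for a real embedding-based retriever, and query_model is again a hypothetical LLM client:

```python
# Hedged sketch of retrieval-augmented answering with source citations.
def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a domain-trained LLM client."""
    raise NotImplementedError

def score(query: str, doc: str) -> int:
    """Crude relevance score: number of lowercase tokens shared with the query."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def answer_with_sources(query: str, store: dict[str, str], top_k: int = 2) -> str:
    """Answer `query` from `store`, instructing the model to cite document IDs."""
    # Because retrieval happens at question time, a document added to the
    # store is usable immediately; no retraining is required.
    ranked = sorted(store.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in ranked[:top_k])
    prompt = (
        "Answer using only the sources below, and cite their IDs for every claim.\n"
        f"{context}\nQuestion: {query}"
    )
    return query_model(prompt)
```

Because the cited IDs come back with the answer, an analyst can check each claim against the underlying report instead of trusting the model's free-form output.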

Naively trusting LLMs exposes analytical rigor to misinformation and to so-called "model hallucinations" (fabricated or falsified information), which can produce harmful content. As we point out in our book, "Machines are not yet conscious of themselves and cannot judge whether their actions or decisions are correct or logical."

According to recent scientific literature, it seems entirely possible that within the next decade, even without human-level artificial general intelligence, we may have systems with levels of "consciousness": endowed with senses, embodiment capabilities, and knowledge of the relationship between models of the world and models of the self.

If so, future improvements to LLMs will contribute far more to understanding the context of the information being processed than to merely predicting, probabilistically, what the next word might be.

Stuart Poole-Robb

“We look at the World differently.”

1 year ago

I agree; it is highly possible that within the next few years we will have systems with a level of "consciousness," endowed with senses and knowledge of the relationship between models of the world and models of self, albeit a worrying and disturbing prospect.

Fabio Vanorio

Metaverse, AI, Technology for National Security Expert

1 year ago

I read many articles before writing this post. If anyone is interested in learning more about the topic, I can recommend them.
