Synthetic vs. Authentic General Artificial Intelligence: How Star Trek Handled it
Star Trek TNG - Commander Data & Professor Moriarty


Does life imitate art, is it the other way around, or is something else imitating everything we perceive to be reality? When we tackle the topic of Artificial General Intelligence (AGI), things can get very philosophical quite quickly. There are many reasons for that, not the least of which is a large and growing body of excellent fiction (going back to the 1800s) dedicated to this very topic. Before that fiction began, variations of the topic were inherent in a number of crucial philosophical themes and debates stretching back thousands of years. Whether we’re talking about the Mind versus Body question, Boltzmann Brains, Descartes and his ‘cogito ergo sum,’ or Star Trek, we tend to find direct parallels to the questions now arising around the emergence of Artificial General Intelligence (AGI).

So, we’ve been talking about this issue basically forever, but now – quite unexpectedly (because most of us thought it would take longer) – we’ve found ourselves close to reaching the ‘event horizon’ (pardon the mixed metaphor) of mankind’s entrance into the black hole that is this issue. Perhaps it isn’t such a mixed metaphor after all, though, given that the term “Singularity” now also applies to an AGI reaching some sort of superintelligence boundary (which hasn’t really been defined yet – see my previous article).

Star Trek & AGI

What does any of this have to do with Star Trek, and in particular Star Trek: The Next Generation (TNG)? TNG premiered in 1987 and ran through 1994, eventually spinning off several more series that ran through the 1990s. 1987 is now 37 years in the past, and there may be a number of folks who’ve never seen the show. There has also been a lot of very good science fiction addressing AGI since it came out (including many other Star Trek series), so why focus on this particular show to highlight the question of whether there is a difference between Synthetic intelligence and Authentic artificial intelligence?

As with any good show (or book), it comes down to the characters and the plots, of course. TNG chose to build the question of AGI into the entire series through the character of Commander Data (an android). At first, this just seemed like a more direct route to perpetuate the successful recurring theme that helped drive the original Star Trek series – Spock’s drive to become more human. Spock, of course, wasn’t a computer, just an alien who behaved like one. Thus, the introduction of Commander Data (in the same role Spock had been given, Science Officer, which might be equivalent to Chief Data Scientist these days) allowed the topic of artificial sentience to be addressed directly and over time (across the seven seasons of the series). So, throughout the entire series, Data’s subplot progressed as he became ever more human. This gave the writers many opportunities to tackle the question of AGI from a number of different perspectives through this one character and even come to some interesting conclusions. This, of course, wasn’t the only character or plot mechanism they employed to address AGI. We’ll come back to that in a minute.

Star Trek as Driver for Cultural & Technical Change

Before we go any further, it’s worthwhile to step back and consider why talking about a decades-old television show isn’t just some strange exercise in trivia. This relates to the opening statement of the article – life imitating art. Across the history of science fiction, that’s happened a lot, including everything from Arthur C. Clarke conceiving of the geosynchronous communications satellite to predictions of moon landings, computers and much more. Star Trek TNG was a particularly rich source of inspiration for the innovative minds of the ’80s, ’90s and 2000s. Here’s a brief list of some of the technology TNG inspired:

  1. Tablets and Smartphones
  2. Augmented Reality environments (Holodecks)
  3. UX design principles
  4. 3D printers (a step towards replicators – both original series and NG)
  5. Universal Translators (both original series and NG)

The character of Commander Data represents another one of those technology inspirations – even as we speak, dozens of companies are pursuing (or planning to pursue) the use of humanoid robots controlled by AI chips. Admittedly, we’re at the beginning of that particular effort, but it is underway. All of this demonstrates clearly – as if we needed to be reminded – that humans imagine a thing first, before we build it. It is, in fact, part of the creative process of innovation. Science fiction often provides us with a collective – cultural – imagination, which in turn can spawn many individual interpretations of the common theme actualized into reality.

BTW – if you’re still not convinced that a show like Star Trek can change our reality in the present, just take a look at the US Space Force logo side by side with the (Star Trek) Federation logo.

This makes me wonder, what if Space Force had an academy…

What is Simulated Intelligence?

Now that we’ve established that Star Trek TNG is actually a valid place to examine society’s thinking on complex technological topics, and AGI in particular, let’s take a look at how the show handled the question of Synthetic Intelligence. Before we define what this actually means, though, it’s worth noting that in Star Trek (and in many other good sci-fi works), AGI was viewed as many different things existing on different levels. The TNG series itself began with the plot of a hostile encounter with a god-like superintelligence – Q. Q is neither human nor machine; we don’t actually know what he is or where he came from – he seems beyond human comprehension. On the other end of the scale is the ship’s computer, which employs the voice of a somewhat whimsical character and serves as a ‘cheap-laughs’ plot device. Data falls somewhere in between these levels. There is another character, though, who is introduced across two episodes several years apart: Moriarty, a Holodeck version of Sherlock Holmes’ arch-nemesis. We’re going to focus on the dichotomy between Data’s character and Moriarty’s.

Coming back to the definition – what is “Synthetic Intelligence”? There isn’t any standard industry definition for this, so I will take a stab at defining it in the context of AGI.

Synthetic Intelligence represents a level of AGI wherein the entity in question possesses human-like characteristics and abilities but is still limited in some fashion. In other words, due to the nature of its architecture or training, it evolves to the point of matching or surpassing humans in most or all categories, yet lacks some elements of human-like cognition. This limitation can be accidental or deliberately built in. We might assign this an acronym of SGI – Synthetic General Intelligence (apologies to Stargate fans for any perceived similarity).

This definition immediately raises the question, however, of whether the distinction even matters at all. As noted in some current industry thinking (for example, the DeepMind AGI taxonomy paper), AGI capabilities are the only important consideration, as opposed to the cognitive processes that got the system there.
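To make that capability-first stance concrete, here is a rough sketch (in Python) of how the DeepMind taxonomy grades systems. The level names follow the paper’s published scheme, but the thresholds below are paraphrased from memory and simplified, so treat this as an approximation rather than the paper’s formal definition:

```python
# A rough sketch of the capability-based grading in DeepMind's "Levels of
# AGI" taxonomy. Systems are rated purely on measured performance versus
# skilled humans; thresholds here are approximate paraphrases.

def agi_level(percentile_vs_skilled_humans: float) -> str:
    """Map performance (percentile of skilled adults outperformed) to a level."""
    if percentile_vs_skilled_humans >= 100: return "Level 5: Superhuman"
    if percentile_vs_skilled_humans >= 99:  return "Level 4: Virtuoso"
    if percentile_vs_skilled_humans >= 90:  return "Level 3: Expert"
    if percentile_vs_skilled_humans >= 50:  return "Level 2: Competent"
    if percentile_vs_skilled_humans > 0:    return "Level 1: Emerging"
    return "Level 0: No AI"

# Note what is absent: there is no parameter for *how* the system achieves
# its score -- architecture, sentience and cognition do not appear at all.
print(agi_level(55))  # "Level 2: Competent"
```

The telling detail is what the function does not take as input: under this view, a Moriarty and a Data with identical benchmark scores would land on the same level.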

Interestingly, Star Trek TNG addressed this exact question using the Moriarty character. In that plot, Moriarty is 1) given humanlike AGI capabilities, 2) seemingly sentient and conscious, but 3) also given severe architectural constraints – he cannot leave the Holodeck and exist as “real matter.” Given that all of this action takes place in a simulated reality (the Enterprise Holodeck), with a simulation of a character from Sir Arthur Conan Doyle’s fictional works (Sherlock Holmes), we’re given several constraints or limitations on the AGI from the get-go. The TNG writers clearly fashioned Moriarty as a counterpoint to Data’s character. They are, in fact, positing that Moriarty is a Synthetic (General) Intelligence. Here’s how that plays out:

  1. While Moriarty does try to escape his situation, he ultimately can’t – he is outfoxed and locked up again, without even knowing that he has been fooled.
  2. While he is portrayed with some sympathy, he is also viewed by the humans (and even Data) as a menacing character. This is partly because he was “written that way,” but it runs deeper than that: the simulated Holodeck character written as a villain is the only one that ever transcends its bounds and tries to make a break for it. Moriarty is the menacing AI.
  3. No one in Starfleet can figure out why he became sentient (a common sci-fi theme) or what to do with him. When asked how it happened, Moriarty states that the character was created with the express intent of being able to defeat Commander Data as an opponent (sounds familiar in relation to GANs, doesn’t it?). Ultimately, Moriarty – as an entity – isn’t considered suitable (yet) to be granted full rights.
  4. Moriarty is not placed on a path where his character will continue to grow and eventually achieve full autonomy – instead he is contained in a memory device (never to be released again).
  5. Also worth noting here is the implication of architecture as a deciding factor in how to view an AGI. The Moriarty character lives in virtual memory (in a computer, the cloud or a device) that is not tied to a specific anthropomorphic form nor necessarily limited in its resources. In other words, he is analogous to the limitless, cloud-based AGI model currently being pursued in the near term.

This is where Professor Moriarty ends up for an eternity (for him)

The writers from TNG made a very clear choice in how to perceive or quantify a Synthetic intelligence versus an Authentic AGI. This Synthetic General Intelligence became the cautionary AI tale on TNG. Let’s look at how that situation contrasts with the depiction of Commander Data.

Commander Data & Authentic Humanlike General Intelligence

The Data character starts out very robotlike at the outset of the series. There is no question that Data is a non-human entity, and at this point he may or may not count as a Synthetic intelligence. But Commander Data doesn’t stay that way. Over the years of the series, in dozens of episodes, his character grows more and more human in nature. This involves not just adapting to gestures, humor and cultural awareness, but also to emotions (which at one point are handled by a special chip, implying a defined architecture in contrast to the mystery of Moriarty). By the end of the series, Data is viewed as having the same rights as every other crew member on the ship. He is accepted (despite some challenges, including court cases) and continues to develop in what is viewed as a non-threatening evolution. There is of course an evil twin of Data, but that isn’t particularly meaningful to this discussion. Let’s look at why Star Trek TNG chose to perceive Commander Data differently from Moriarty:

  1. Data is created in humanoid form – he is an anthropomorphic AGI.
  2. Data is guided the entire time in his evolution by humans (his learning process is not “unsupervised,” it’s more collaborative in nature).
  3. Data does not have access to unlimited resources; his AGI “architecture” by nature limits him from potentially going the singularity route. Neither is he motivated to go out and obtain those limitless resources (thus he exercises self-control with regard to reaching singularity).
  4. Data does not seem as constrained by his “initial conditions”; in other words, he was created not to be a character or to do one thing, but rather to be a real AGI from day one. This did not seem to be the case for Moriarty, who could accurately be classified as an ‘accidental AGI’ – somewhat analogous to what’s happening now with multi-modal LLMs.
  5. Data has empathy; Moriarty does not. This seems like a fine point, but it ultimately appears to be the deciding factor as to why the one character (or AGI) was accepted and the other wasn’t.

Back to the Real World

What does any of this exploration into science fiction have to do with the current debates surrounding Artificial General Intelligence? Is it pertinent to that debate? I think it is. While many and perhaps most of us in the IT field assumed that we’d be having these debates on AGI someday in the future, few of us believed it was going to happen near-term (say, within the next 10 years). With the introduction of ChatGPT and roughly a hundred other Generative AI (GenAI) products over the past year, we have suddenly been thrust into a new situation (see the AI revolution article). This new reality for AI (which includes some capabilities we thought would only be associated with AGI) is also forcing us to reexamine our expectations and timelines for AGI. Nearly every large organization on the planet is invested one way or another in making AGI happen in the near future. From OpenAI and Sam Altman’s $7 trillion funding pitch to every major nation-state staking a claim to be the leader in AI tech, the race for AGI is most definitely on.

Now that we’re here (whether we had anticipated being here now, had been looking forward to it, or not), we must face the questions associated with AGI in a real-world rather than fictional context. But could that mountain of fiction related to AI and AGI be useful as something other than entertainment or philosophical musing? When it comes to the questions surrounding the definition, classification and governance of AGI, I think it certainly could be. The starting point for exploiting those concepts and the associated debate is the question of whether or not human-like AGI can be subdivided into ‘synthetic’ (simulated) versus ‘authentic’ categories.

If we were to follow the premise laid out by Star Trek TNG when designating and developing new AGI capability, it might imply the following principles or guidelines:

  1. Synthetic AGI should include more clearly defined boundaries (both in terms of resources and oversight).
  2. Authentic AGI is something that is intended to more closely mimic human intelligence and development. It is thus less focused on built-in boundaries and more focused on guided (as opposed to unsupervised) evolution. It is also more likely to follow an architecture that more closely approximates human biological intelligence.
  3. Authentic AGI is built with resource limitations from the beginning, which would allow it to achieve true autonomy with less risk. Use of Synthetic AGIs for autonomous functions implies higher risk.
  4. Synthetic intelligence will likely not work the same way that human cognition does, which has the potential for restricting its use cases. Authentic AGI would not have these constraints; it could be applied to any task that a human could handle (plus of course many that humans could not).
  5. Neither AGI type ought to be allowed to evolve to singularity – but each type will have different mechanisms for exercising this restriction.
  6. The immediate context for these principles is the development activity associated with trying to achieve AGI. While some vendors are now claiming that they’ve already made that breakthrough using Large Language Models (LLMs), they really haven’t. LLMs have of course proven more successful than originally anticipated, but they don’t represent a general intelligence even in the narrow sense of the term. The immediate focus for bridging the gap between the current crop of LLMs and some form of AGI is a ‘multi-modal’ approach, which essentially represents a brute-force conglomeration of different existing models and massive amounts of infrastructure (hence the $7 trillion funding estimate from Mr. Altman). It is likely that on the current trajectory we’re going to reach a form of Synthetic intelligence (SGI) first. When that will occur is hard to predict, but within the next 10 years now seems quite feasible. If we choose to apply the AGI principles gleaned from our TNG example, it would seem to imply some rather significant design patterns and governance expectations for AGI that clearly have not been advocated or adopted yet (let alone enforced).
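If one wanted to operationalize the principles above, they could be encoded as an explicit governance policy attached to each system. The sketch below is purely hypothetical – the class names, fields and defaults are my own invention, not any existing standard or framework – but it shows how the Synthetic/Authentic distinction could translate into machine-checkable deployment rules:

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical encoding of the two AGI categories described above as a
# governance-policy data structure. All names and values are illustrative.

class AGIKind(Enum):
    SYNTHETIC = auto()   # capability-first, architecture-limited (Moriarty)
    AUTHENTIC = auto()   # human-like cognition, guided evolution (Data)

@dataclass(frozen=True)
class AGIPolicy:
    kind: AGIKind
    resource_cap: bool           # hard limits on compute/memory from day one
    supervised_evolution: bool   # guided (collaborative) rather than unsupervised growth
    autonomy_eligible: bool      # may be granted autonomous operation
    singularity_permitted: bool  # never True under these principles

def default_policy(kind: AGIKind) -> AGIPolicy:
    """Derive a policy from the TNG-inspired principles listed above."""
    if kind is AGIKind.SYNTHETIC:
        # Principles 1 & 3: tighter boundaries, higher risk if made autonomous
        return AGIPolicy(kind, resource_cap=True,
                         supervised_evolution=False,
                         autonomy_eligible=False,
                         singularity_permitted=False)
    # Principles 2-4: built-in limits plus guided growth enable safer autonomy
    return AGIPolicy(kind, resource_cap=True,
                     supervised_evolution=True,
                     autonomy_eligible=True,
                     singularity_permitted=False)

print(default_policy(AGIKind.AUTHENTIC).autonomy_eligible)  # True
```

The point of such a structure is that principle 5 (no path to singularity) becomes an invariant holding for both categories, while autonomy eligibility is what actually differs between them.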

Regardless of the examples or metaphors we choose to use as our basis for making such decisions, we’re now at the point where such decisions need to be made. This is not something that can be left entirely to “the market” to sort out. There needs to be a more deliberate process for guiding AI evolution into AGI. AGI will likely be the defining characteristic of civilization within 20 years or so – and we need to figure all of this out well before that comes to pass if we wish to have any chance at influencing how it turns out.

Reference: Article Series on AI

Reference: Article Series on AGI


Copyright 2024, Stephen Lahanas

