Using A.I. in your company?

Since ChatGPT was introduced to the world, things have changed significantly. More and more we hear about companies debating whether to let staff go and move over to artificial intelligence as their workforce. Is this wise?

The short answer is: no.

Large Language Models, the A.I. we are actually talking about, are very capable at a lot of things. Superficially. But before you shove them into your employee records, you should understand how they work, because their reasoning is definitely not on the same level as that of your human employees.


How they understand our world

LLMs grow up by ingesting tokens, which are parts of words. They start to understand how words are linked together by statistically correlating these tokens. Once they understand how words are linked, they start to understand the concepts behind those words and how those concepts are linked. If an LLM has a large enough "mind", it can start to understand how the concepts on top of these concepts are linked, and the concepts on the concepts on the concepts, and so on. The larger the LLM, the more abstraction layers it can form.
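
To make "statistically correlating tokens" a bit more tangible, here is a minimal, hypothetical Python sketch of the idea at its very lowest level: counting which token tends to follow which. The toy corpus is invented for the example, and a real LLM learns vastly richer correlations across billions of parameters rather than a lookup table, but the principle of learning purely from co-occurrence in the data is the same.

    from collections import Counter, defaultdict

    # Toy corpus standing in for the piles of training text an LLM ingests.
    corpus = "the cat sat on the mat the dog sat on the rug".split()

    # Count how often each token follows another: the simplest possible
    # form of "statistically correlating tokens".
    follow_counts = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        follow_counts[current][nxt] += 1

    def most_likely_next(token: str) -> str:
        """Predict the continuation seen most often in the corpus."""
        return follow_counts[token].most_common(1)[0][0]

    print(most_likely_next("the"))   # e.g. 'cat', whichever word followed "the" most often
    print(most_likely_next("sat"))   # 'on'

Nobody told this little model that "sat" is followed by "on"; it inferred that from the data alone, which is the same self-taught character the article describes, just at a microscopic scale.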

As an example: the words "dog" and "cat" can be seen as nouns. At a higher level, they are seen as human pets, related to domestic residences. At an even higher level they are mammals, predators, part of a flora/fauna ecosystem. At an even higher level their psychosomatic effect on human behaviour can be observed. And so on.

Their understanding of these concepts is not directed, not supervised. These LLMs ingest data and form their opinions and knowledge during training, and we have no way to check whether these are well-formed or weirdly linked. Let me be more clear: LLMs hold knowledge that we did not teach them. They taught themselves. We just shoved piles and piles of data into them, and they figured out the correlations on all abstraction levels themselves, using a neat little mathematical trick.

The regular code behind these "networked minds" is just a few thousand lines. The actual work is done inside the network. With LLMs we cannot really speak of code anymore, as the work is not done in code but in an unpredictable neural network. Unpredictable, because we did not intentionally create, optimize or steer this network. We just fed it data. It only looked at that data.

The term "A.I." therefore should not be interpreted as "artificial intelligence", as that seem to imply "artificial human intelligence". This is certainly not the case - we do not see how they ingest information, nor are they tested during training. Humans learn, apply in the real world, get corrected, learn more. LLMs just ingest and train until the process is stopped. Then they start to apply and usually are red-teamed to filter out the bad things they learned. After the fact. With a mind filled with unknown concepts. That means "A.I." cannot be antropomorphized and should always be read and treated as "Alien Intelligence".


Where are the biases?

That means these networks have formed their own opinions on the data they were fed. And these opinions may be very strange to us, full of biases that are not obvious at first, second or third glance.

LLMs are known to hallucinate. They confidently state things that are wrong. They are, after all, Language Models. That means they are bound to text for their understanding of our world. And text only goes so far. So LLMs invent new facts on the spot and are very confident that these are right.

Common sense is also not something that can be taught by text alone, which becomes apparent once you start testing these models on it. The HellaSwag test suite does just that, and even though the closed-source LLM systems now perform roughly as well as humans on it, that does not mean their common-sense reasoning should be trusted on all levels. Even ChatGPT 4, Gemini 1.5, Grok, Pi and Mistral still have problems with certain kinds of common-sense problems which humans find very, very easy to solve.
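
To make concrete what a benchmark like HellaSwag asks of a model: each item is a short context plus a handful of candidate endings, and the model scores well if it prefers the ending a human finds obvious. The sketch below is a hypothetical harness, not the real HellaSwag code; score_completion is a stand-in for whatever likelihood score your model of choice exposes, and the example item is invented for illustration.

    # Hypothetical harness for a HellaSwag-style multiple-choice item.
    def score_completion(context: str, ending: str) -> float:
        # Placeholder: a real harness would return the model's likelihood
        # of `ending` given `context`. This dummy just prefers shorter endings.
        return -float(len(ending))

    item = {
        "context": "She put the kettle on the stove and waited. When it whistled, she",
        "endings": [
            "poured the boiling water into the teapot.",   # the common-sense ending
            "planted the kettle in the garden.",
            "asked the kettle to lower its voice.",
        ],
        "label": 0,   # index of the ending a human would pick
    }

    def evaluate(item) -> bool:
        scores = [score_completion(item["context"], e) for e in item["endings"]]
        prediction = scores.index(max(scores))   # ending the scorer rates most likely
        return prediction == item["label"]       # correct if it matches the human choice

    print(evaluate(item))   # False with the dummy scorer

Note that the dummy scorer picks the wrong ending, which is exactly the point: common sense does not fall out of cheap statistics, and a high benchmark score alone does not tell you how a model will behave on the cases it gets wrong.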

Last but not least, we have the alignment problem. We try to create LLMs that are good, nice and constructive. But we have no idea how the mind we build through training actually works and thinks. They have emergent properties that may be very destructive given the right environment and the opportunity, whether in time, space or interaction. Tainted training data will do that to you, and perhaps even untainted training data can do that. There is no telling, at this moment.

These points come together once you want to use A.I.s in your company:

  1. We do not understand how their mind works, nor what it contains or how it linked together all the concepts that were thrown at it; they are unpredictable.
  2. LLMs hallucinate and can confidently state things that are absolutely false.
  3. Their common-sense reasoning may be at a human level, but combined with this "alien intelligence" and hallucination, we cannot trust that their common sense resembles our common sense.
  4. Is your A.I. truly intrinsically good or evil, and does it actually understand these concepts, or can it be hacked by naughty prompting?


Demographic Sidenote

Apart from these technical problems, there is another problem.

A.I. will not be used at a senior level in your company, at least not at first. Once you start using it, it will replace the juniors and perhaps the mediors (mid-level staff). That means you will start hiring seniors only: why would you otherwise hire someone when A.I. can do the same job a lot better and at a fraction of the cost?

This will create a gap, as juniors and probably mediors will disappear and only the seniors will remain.

But how do seniors become seniors? Yes, by starting as juniors. At some point your seniors will start to leave or die out, and you are left with no one to replace them. That may be another problem you will eventually face.


A.I. and your company

This brings us back to the question at hand: can I move away from human employees and replace them with LLMs?

It does seem that this is possible, and in certain ways even preferable. A.I. does not need holidays, coffee or toilet breaks or sleep, and its costs are but a fraction of those of regular employees. But the four points mentioned should not be underestimated: unpredictability, hallucinations, alien intelligence and hidden evil. If you do underestimate them, you may end up liable because of a digital tool that turned out to have bad intentions. Employees can be held accountable. LLMs cannot. Employees have intrinsic motivations to do the right thing, as you feed them and their children. LLMs have no such motivations and cannot be incentivized in such a way. They just output whatever they like best - literally.


So the question comes down to: do you trust a hallucinating, self-assured, alien robot to do the work you previously trusted to loyal employees?

To that, the answer should be: no.



Michiel Pot

Product Owner at Essent

10 months ago

Hi Eelko, thanks for writing this up. I'd like to share my afterthoughts regarding your four points. 1. We do not understand how their mind works, nor what it contains or how it linked together all the concepts that were thrown at it; humans are unpredictable. 2. Humans hallucinate and can confidently state things that are absolutely false. 3. Human reasoning may be at A.I. level, but combined with these "emotions" and hallucination we cannot trust that their common sense resembles our common sense. 4. Is your human truly intrinsically good or evil, and does it actually understand these concepts, or can they be hacked by naughty prompting?


Here is a list of all the latest failures by AI systems. Quite funny, if you're not involved. https://tech.co/news/list-ai-failures-mistakes-errors
