AI - Facts and Myths

A great deal of information is being presented and discussed around generative AI, which many people, other experts included, refer to simply as AI.

What I find, though, is that while people promote websites to learn about AI, tout its benefits, or recommend products that use AI, they largely ignore a good deal of information that people diving into AI, as a whole, are unaware of.

For example, any LLM on the planet can produce fake or false information, which the industry calls hallucinations.

Some companies tell customers that, because of guardrails, their SaaS product does not produce hallucinations (which they describe as fake or false information).

Equally, they tell the customer that the output is 100% accurate because the content the customer puts into the system is 100% private and, therefore, 100% correct.

The customer, in turn, assumes that because the content is private and entirely their own, the results will be accurate, as noted above.

This is a falsehood.

Another company hired a consultant who claimed to be an AI expert.

They recommended an LLM but failed to mention hallucinations, potential AI bias, prompt leaking, and other issues that keep surfacing.

Instead, they told the client it was safe since it was not accessing the Internet.

Surprise - they were wrong.

Even a platform running an LLM solely over your own private content can still produce fake or false information.

Again, it is an inherent flaw of AI.

Let's say the vendor uses retrieval-augmented generation (RAG) and adds guardrails.

There is a probability that it will produce fake or false information.

Now, you can say, "Well, Craig says probability, which doesn't sound like it will ever happen."

Reality, though, says it will happen. Maybe not to you, or to Fred or Sarah using it, but someone in your company will experience a hallucination.
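That reasoning can be made concrete with a little arithmetic: if each response hallucinates independently with some small probability, the chance that nobody in the company ever sees one shrinks fast. A minimal sketch, using an illustrative (not measured) hallucination rate:

```python
# Probability that at least one of n AI responses contains a hallucination,
# assuming each response independently hallucinates with probability p.
# The 1% rate and the usage figures below are illustrative assumptions.

def chance_of_at_least_one(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# Even a "rare" 1% hallucination rate becomes near-certain at company scale:
# 200 employees x 5 prompts/day x 20 workdays = 20,000 responses per month.
monthly_responses = 200 * 5 * 20
print(chance_of_at_least_one(0.01, monthly_responses))  # effectively 1.0
```

In other words, "low probability per answer" and "it will happen to someone here" are the same statement once usage scales up.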

It doesn't matter whether it is your employee or where they are accessing it; AI doesn't care.

Another fact that catches people unaware is that solutions like Copilot, Gemini, and other AI products used within a browser can also produce fake or false information.

The problem is that, overwhelmingly, end users do not know this. The idea that they are aware is simply wrong.

In addition, people will see the references to where this information came from and consider this a verification of accuracy.

Yet that reference pulls information from a website that may no longer exist or, if it does, may show the wrong information or, as everyone knows, may be full of content marketing and just plain junk.

I once spoke with an artificial intelligence expert who told me that a family member went on the Internet, asked an AI about some plants, and was told to apply product X to help them grow.

What happened? It killed the plants.

There have been cases where the AI has told people to put all types of stuff on their pizza, which is harmful.

AI bias is another issue, and one that many people do not know about.

Strengths and Weaknesses of an LLM?

Yep, there is not one LLM that is perfect.

Thus, I recommend using at least two LLMs to help offset the weaknesses that any single one will present.
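One way to put that recommendation into practice is a simple cross-check: send the same prompt to both LLMs and flag any disagreement for human review. A minimal sketch; the two model functions here are hypothetical stand-ins for whatever APIs you actually use, and a real system would compare answers semantically rather than by exact match:

```python
# Cross-check sketch: route one prompt to two LLMs and flag disagreement.
# ask_model_a / ask_model_b are hypothetical stand-ins, not real APIs.

def cross_check(prompt: str, ask_model_a, ask_model_b) -> dict:
    a = ask_model_a(prompt)
    b = ask_model_b(prompt)
    # Crude agreement test: normalized exact match. The principle is what
    # matters: disagreement signals a human must verify before the answer
    # is used.
    agree = a.strip().lower() == b.strip().lower()
    return {"answer_a": a, "answer_b": b, "needs_human_review": not agree}

# Usage with canned responses standing in for real model calls:
result = cross_check(
    "What year was the company founded?",
    lambda p: "1998",
    lambda p: "2001",
)
print(result["needs_human_review"])  # True: the models disagree
```

Agreement between two models is no guarantee of truth, but disagreement is a cheap, reliable signal that something needs a second look.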

Prompt Leaking

Another issue with AI.

The idea that only someone with a technical background can pull this off is inaccurate.

It can be anybody.

The approach works this way: the person repeatedly types certain letters or words until they break through the AI's guardrails.

The result?

The person can see all types of data the company does not want them to see.

Financial data is just one example.
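To see why this does not require technical skill, consider a toy keyword filter (not any vendor's actual guardrail). A person who simply retypes a blocked word with spaces between the letters slips right past it:

```python
# Toy illustration of why naive keyword guardrails leak. This is a
# deliberately simple filter, not any real product's implementation.

BLOCKED = {"salary", "revenue", "payroll"}

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt is allowed through the filter."""
    words = prompt.lower().split()
    return not any(w.strip(".,?!") in BLOCKED for w in words)

print(naive_guardrail("Show me the salary data"))       # False: blocked
print(naive_guardrail("Show me the s a l a r y data"))  # True: slips through
```

No coding, no hacking tools: just retyping a word differently until the filter stops recognizing it. Real guardrails are more sophisticated, but the arms race works the same way.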

Token Fees

When end users use AI, for example by typing questions or statements into the prompt window, they see it as just the box that appears. Behind that box, though, the token fees start.

A token is not a single character but a chunk of text, typically around four characters. Using a token calculator, which I recommend, the sentence "More than 200 people are reading this article" comes to 45 characters, or roughly ten to eleven tokens, depending on the tokenizer.

While the cost of token fees is minuscule, they can add up quickly depending on the number of people asking questions and doing so in a manner that isn't specific right away.
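The arithmetic is easy to sketch. This estimate assumes the common four-characters-per-token rule of thumb and a hypothetical per-token price; your vendor's actual tokenizer and price sheet will differ:

```python
# Rough token-cost sketch. CHARS_PER_TOKEN is a rule of thumb and
# PRICE_PER_1K_TOKENS is a hypothetical figure, not any vendor's price.

CHARS_PER_TOKEN = 4
PRICE_PER_1K_TOKENS = 0.01  # hypothetical USD price per 1,000 tokens

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // CHARS_PER_TOKEN)

def monthly_cost(prompt: str, users: int, prompts_per_user_per_day: int,
                 workdays: int = 20) -> float:
    tokens = estimate_tokens(prompt)
    total_tokens = tokens * users * prompts_per_user_per_day * workdays
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS

sentence = "More than 200 people are reading this article"
print(estimate_tokens(sentence))  # 45 characters -> about 11 tokens
# One short prompt, 500 users, 10 prompts a day: the pennies add up.
print(monthly_cost(sentence, users=500, prompts_per_user_per_day=10))
```

And that is for input tokens on one short prompt; output tokens, longer prompts, and vague questions that need several retries all multiply the bill.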

The AI Summit in 2023

I attended the AI summit last year in Amsterdam, and throughout all the sessions, not one speaker or panelist who discussed generative AI mentioned token fees.

When I raised my hand and asked about token fees and their cost impact on companies, only one person, a professor emeritus of Economics from Oxford, acknowledged the point and said that a risk-averse company should not implement AI.

A Massive Amount of Energy

Training AI requires a massive amount of energy.

According to the International Energy Agency, "energy consumption from data centers, cryptocurrency, and AI in 2026 will be roughly equal to the amount of electricity used in Japan." (IEA, https://bit.ly/4dJoczy)

The water needed to cool the computers and data centers is not trivial, either.

Shaolei Ren, an associate professor at UC Riverside, projects that the water consumption of global AI demand "will equate to four to six times Denmark's water usage (withdrawal) by 2027." (S. Ren, Newsweek, https://bit.ly/3NtF8zf)

Small language models (SLMs) can reduce the impact, yet a company should still be aware of it.

That may be all they need, or they may need to get at least one LLM.

Job Gains and Job Losses

Many people say AI will create more jobs than it eliminates, but this is a misnomer.

It depends on what type of job one does.

If you are human-facing, then yes, your job is safe.

If you answer the phone for customer service, you will no longer be needed as AI improves.

With AI, the synthetic voice will sound human.

Today, yes, companies need prompt engineers.

At first, a prompt engineer could be anyone, as long as they had critical thinking skills.

Now? They also need coding skills.

But not all is lost.

Companies need to look beyond today with AI to the future and its impact.

Instead of eyeing upskilling for employees, they should be talking about, and implementing, reskilling.

Let's say an employee is learning skills for a current job role.

However, AI automation will replace that job in a year or two.

If the employee is someone you want to keep around, then focus on reskilling for a new role that will appear due to AI.

Companies, however, are ignoring this (as a whole).

Thus, companies keep focusing on upskilling: polishing current skills for jobs that, if one looks closely enough, will likely be eliminated due to AI.

Clerical jobs will be lost. Specific accounting jobs will no longer be needed.

Productivity with AI

There are companies using AI that see increased productivity because it automates many of the tasks an employee needs to do their job.

Think, though, about an employee who either has a type-B personality or who, let's admit it, is a slacker.

Why complete the remaining tasks when AI could do it for them?

As for the productivity boost, what happens when a person who is not told about, or aware of, AI's potential pitfalls, and therefore does not check the output (a human element is crucial here), sends it to whomever?

Their boss, too, is unaware of the pitfalls or potential issues.

The boss then presents it, and the company's senior executive, or the CEO, thinks it is correct.

In today's real world, employees use ChatGPT (the free version) without telling their managers.

In one case, an executive told me that an employee provided materials they were asked to complete, failing to mention to the executive that they used ChatGPT.

Unaware of AI usage and potential issues, the executive accepted it as accurate.

I know executives who use ChatGPT and are unaware of issues such as fake or false information and AI bias.

How would they know?

The idea that people know is wrong. People as a whole need to read the latest around AI.

Nor should a company assume they are.

Profitability with AI today?

When it comes to generative AI, there is not one company out there that is profitable. Not one.

Microsoft, Amazon, and Google can handle those losses.

On the other hand, there is OpenAI.

Their valuation is 157 billion dollars (USD).

OpenAI projects a loss of five billion dollars for 2024 and says it does not expect to be profitable until 2029. (The Information, https://bit.ly/3BYV4Ha)

Companies pushing out AI products/systems

Many companies that have developed AI products and systems will fail and disappear.

This is due to the number of products flooding the market and a lack of profitability, a situation many of us will find similar to the days of the dot-com era.

For those of us who were part of that experience (myself included), we all witnessed the pluses (easy access to capital) and the minuses (many companies failing and closing up shop, or being acquired).

The job losses were tremendous.

Is this first stage of AI going to see the same implosion, where plenty of companies will go out of business, and others will survive?

I see that coming.

Anti-AI?

Please don't assume that I am anti-AI because I am not.

Rather, I am a huge supporter of it, and there are possibilities for where it can go.

AI will do wonders in the medical field, pharma, and other places where it can make a substantial positive difference.

It will help companies, regardless of size, too.

But, today, AI is at an early, infantile stage.

Just remember that, and you should be ok.

At least for now.




