To BOT or Not to BOT? That is the Question.  Keep Rocking LinkedIn #11
Keep Rocking LinkedIn Newsletter #11 Kevin D. Turner, TNT Brand Strategist, LinkedIn Expert, LinkedIn Profiles, LinkedIn Optimization, Personal Branding, NEWLinkedInFeature #KeepRockingLinkedIn


Generative Artificial Intelligence (GAI) tools, such as the Large Language Models (LLMs) ChatGPT, Bard, and the new Bing Copilot, are advancing rapidly, have entered the mainstream, and are not going away, so we had all better understand them and help shape them and our future. Because these GAI tools have the potential to significantly impact human lives, both for good and for bad, it is essential to prioritize society's and humanity's interests in their development. We shouldn't leave our futures up to for-profit companies to decide.

Many on LinkedIn are using these GAI tools to create their content and comments, and even to build their branding, without fully understanding what they are working with, because it makes their immediate lives easier. This blind adoption, based in personal profit, has the potential to fuel wanton corporate behavior and Personal Blanding® opportunities.

As one of OpenAI's beta testers in 2022, I got early access to ChatGPT (Chat Generative Pre-trained Transformer), and I experienced both its value and its shortcomings before the hype hit. I believe these tools can be a Muse, a creative stimulus, but they have yet to earn the right to be relied upon as a de facto solution. Authentic Intelligence is still the most valuable AI.

Over the last few weeks, a lot has happened in this arena, and I wanted to share some important developments to open up the conversation:

OpenAI's CEO ANNOUNCES HE IS A LITTLE BIT SCARED:

Sam Altman, CEO of OpenAI, the company that developed the controversial consumer-facing artificial intelligence application ChatGPT, has warned that the technology comes with real dangers, but that it can also be "the greatest technology humanity has yet developed" and drastically improve our lives.

"I'm particularly worried that these models could be used for large-scale disinformation," Altman said. "Now that they're getting better at writing computer code, [they] could be used for offensive cyber-attacks."

In his interview, Altman was emphatic that OpenAI needs both regulators and society to be as involved as possible with the rollout of ChatGPT — insisting that feedback will help deter the potential negative consequences the technology could have on humanity.

On April 30th, Altman acknowledged that the latest version, GPT-4, uses deductive reasoning rather than memorization, a process that can lead to bizarre responses.

He said, "The thing that I try to caution people the most is what we call the 'hallucinations problem.' The model will confidently state things as if they were facts that are entirely made up."

Altman said relying on the system as a primary source of accurate information "is something you should not use it for," and he encourages users to double-check the program's results.

A consistent issue with AI language models like ChatGPT, according to Altman, is misinformation: the program can give users factually inaccurate information.

"The right way to think of the models that we create is a reasoning engine, not a fact database," Altman said.

"We've got to be careful here," Altman said about his company's creation earlier this month. "I think people should be happy that we are a little bit scared of this."


SO WHAT IS OpenAI TRYING TO FIX:

According to their latest published paper, here are the specific risks OpenAI is trying to address:

  • Hallucinations
  • Harmful content
  • Harms of representation, allocation, and quality of service
  • Disinformation and influence operations
  • Proliferation of conventional and unconventional weapons
  • Privacy
  • Cybersecurity
  • Potential for risky emergent behaviors
  • Interactions with other systems
  • Economic impacts
  • Acceleration
  • Overreliance

Read the linked PDF below if you want to understand these issues identified by OpenAI and help find resolutions:

A significant focus for OpenAI, and for many other AI developers, is improving factual accuracy, and they are making progress. By leveraging user feedback on ChatGPT outputs flagged as incorrect as a primary data source, they have improved the factual accuracy of GPT-4, which is now 40% more likely to produce factual content than GPT-3.5. That still doesn't tell us how inaccurate it was, just that it has been improved.

Read this OpenAI paper to understand what they are doing to ensure the systems going forward are built, deployed, and used safely:


MICROSOFT'S NEW BING, YOUR AI-powered COPILOT CHALLENGES ITS USERS:

Last week the new Bing was convinced that the year was 2022 and not 2023 and was challenging users by saying things like "Please don't doubt me" and "I'm Bing, I know the date." Microsoft noticed that the new Bing would only misbehave when the conversations continued for too long, like a bored teenager.

So this week, the new Bing is restricted to answering only five questions per session and 50 questions daily, meaning you can ask Bing only five questions on the same topic before you must switch topics.


MICROSOFT FIRES ITS AI ETHICS TEAM:

In March, Microsoft laid off its entire ethics and society team within its artificial intelligence organization as part of a layoff of 10K+ employees. This is concerning, as Microsoft owns a large percentage of OpenAI, the maker of ChatGPT, as well as LinkedIn.


EUROPOL WARNS OF THE DARK SIDE OF LANGUAGE MODELS AND HOW CRIMINALS ARE USING THEM:

March 27th, 2023: Europol's Innovation Lab, which often works side-by-side with other law enforcement agencies and operational partners to analyze new technologies that could affect the work of law enforcement, released its report: The Dark Side of Large Language Models.

This report aims to raise awareness about the potential misuse of LLMs, open a dialogue with Artificial Intelligence (AI) companies to help them build better safeguards, and promote the development of safe and trustworthy AI systems.

As the capabilities of LLMs such as ChatGPT are actively being improved, the potential exploitation of these AI systems by criminals provides a grim outlook.

The following three crime areas are among the many areas of concern identified by Europol's experts:

  • Fraud and social engineering: ChatGPT's ability to draft highly realistic text makes it a valuable tool for phishing, as LLMs' ability to reproduce language patterns can be used to impersonate the style of speech of specific individuals or groups. This can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors.
  • Disinformation: ChatGPT excels at producing authentic-sounding text at speed and scale. This makes the model ideal for propaganda and disinformation, as it allows users to generate and spread messages reflecting a specific narrative with little effort.
  • Cybercrime: In addition to generating human-like language, ChatGPT can produce code in several different programming languages. This is an invaluable resource for a potential criminal with little technical knowledge to produce malicious code.

As technology progresses and new models become available, it will become increasingly important for law enforcement to stay at the forefront of these developments to anticipate and prevent abuse.

Download the Report:


WHAT ChatGPT-4 HAS TO SAY ABOUT AI'S DARK SIDE:

If you ask ChatGPT-4 or -3 what the worst effects of AI on humans could be, and I did, it might answer, and has answered:

There are several potential negative impacts that AI, including language models such as myself, could have on society. Here are a few possibilities:

Dependence: If people come to rely too heavily on AI systems, they may lose the ability to make decisions and think critically for themselves.

Decrease in human interaction: Increased use of AI systems could lead to decreased human interaction, negatively affecting mental health and well-being.

Bias and discrimination: AI systems trained on open-sourced, biased data will perpetuate and amplify societal biases in their decision-making.

Lack of transparency: AI systems can be challenging to understand, which could make it hard for people to know how they make decisions and to hold AI systems accountable.

Security risks: AI systems can be vulnerable to cyber-attacks, which could compromise sensitive data and cause other problems.

Privacy concerns: The collection and use of personal data by AI systems could raise privacy concerns and lead to the abuse of personal information.

Lack of regulation: The rapid development of AI technology may outpace society's ability to regulate it, which could lead to negative consequences.

Lack of public understanding: If people don't understand how AI systems work, they may be less likely to trust them or use them effectively.

Job displacement: As AI advances, it may be able to automate many tasks currently done by humans, which could result in job loss.

Ethical dilemmas: AI systems may be used in ways that raise ethical questions, such as in military or surveillance applications.


CAN WE REGULATE AI?

Regulating AI and AGI is a complex issue requiring careful consideration from various stakeholders. Here are some potential approaches:

Collaborative efforts between industry and government: To ensure that AI and AGI are developed safely and responsibly, there must be a partnership between the tech industry and governments. This would involve establishing regulatory bodies to provide oversight and guidance on developing AI and AGI.

Transparency: AI systems should be designed to make their decision-making processes transparent and understandable to humans. We can achieve transparency by explaining how algorithms arrive at their decisions.

Ethical guidelines: Developing ethical guidelines for AI and AGI would provide a framework for developers and users to ensure that the technology is used responsibly. These guidelines could include principles such as fairness, accountability, and transparency.

Regular audits: To ensure that AI and AGI are being developed and used responsibly, regular audits should be conducted by independent third parties. These audits could assess the impact of the technology on society and identify any potential risks or ethical concerns.

Education: As AI and AGI become more widespread, educating the public about the technology and its potential impacts on society is essential. This includes providing information about how AI works, its limitations, and the ethical considerations that need to be taken into account.

Overall, regulating AI and AGI will require a multi-faceted approach that involves collaboration between industry, government, and other stakeholders. The goal should be to ensure that the technology is developed and used to benefit society as a whole while minimizing any potential risks or negative consequences.


CAN WE SHAPE AI?

There are several ways in which we can all contribute to the development of AI and AGI:

Learning: One of the best ways to contribute is to learn about the technology and its potential. Understanding the capabilities and limitations of AI and AGI can help individuals identify areas where these technologies can be applied and developed further.

Research: Investing in research and development activities is another way to contribute. Individuals can work on projects that focus on developing new AI algorithms, improving existing algorithms, and exploring new applications for AI.

Collaboration: Collaboration is critical to advancing AI and AGI. Individuals can join existing projects or start new ones, work with teams of researchers, share knowledge and resources, and participate in online communities.

Ethics: As AI and AGI become more advanced and pervasive, it is essential to consider their ethical implications. Individuals can contribute to the development of ethical frameworks and guidelines for the use of AI and AGI.

Advocacy: Finally, individuals can contribute by advocating for the responsible development and use of AI and AGI. This can involve educating others about the technology, promoting ethical principles, and engaging with policymakers and industry leaders to ensure that AI and AGI are developed to benefit society.


IF YOU FEEL AI DEVELOPMENT SHOULD SLOW DOWN:

Read the Future of Life Institute's Open Letter, Pause Giant AI Experiments, a "call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."

Decide if you should sign it alongside so many other respected thought leaders, including Elon Musk, CEO of SpaceX, Tesla & Twitter; Steve Wozniak, Co-founder of Apple; and Andrew Yang, Forward Party Co-Chair and 2020 Presidential Candidate. See the full list on their site, and yes, I signed.

FAQs about FLI's Open Letter Calling for a Pause on Giant AI Experiments:


NOW, ON A LIGHTER NOTE, CHECK OUT:

To BOT or Not To BOT? That is the Question!

Whether 'tis nobler to create content or suffer

The dings and narrows of outrageous algorithms,

Or to take Arms against a Sea of AI,

Gillian Whitney and I, Kevin D. Turner, explore how much Generative AI (ChatGPT) is too much on LinkedIn.



If you want to stay ahead of the LinkedIn Curve:

• Ring My Bell: LinkedIn.com/In/President

• Sign up for this ROCK[In] Newsletter: lnkd.in/dNpAqTRV

• Watch the Videos & Subscribe: youtube.com/@KeepRockingLinkedIn

• Follow these 3 Hashtags: #NEWLinkedInFeature #KeepRockingLinkedIn & #TNTBrandStrategist

• Then check out these resources:

  • 45+ [In] Videos on youtube.com/@KeepRockingLinkedIn

  • All NEW LinkedIn Features 2023: lnkd.in/eikiHv2B

  • The LinkedIn Timeline 2003 to 2019: lnkd.in/e2_PdMSw

  • 100+ NEW LinkedIn Features 2022: lnkd.in/eYhzjFjE

  • 40+ NEW LinkedIn Features 2021: lnkd.in/dsW-Zan

  • 50+ NEW LinkedIn Features 2020: lnkd.in/gP5xGb6

  • 15+ HIDDEN LINKEDIN RESOURCES: lnkd.in/dHh2xsa

If you found this Newsletter, its Videos & Resources of value, please [Comment], [React] & [Repost].


Since 2005, Kevin D. Turner & TNT Brand Strategist LLC have optimized 4,800+ Profiles and Company Pages, with access to internal tools, to increase rankings, drive Recruiter contact, generate 24/7 exposure, and accelerate transitions to dream careers. Over 50% of our Clients get recruited for their dream opportunities shortly after their LinkedIn Profile Optimization.

• 200+ Reviews: Lnkd.In/e4m4WdZQ

• 5-Star Ratings: Lnkd.In/gHsr_Sk9

• Services: Lnkd.In/eibnfq6e

Please let us know if we can help you eliminate your Organizational or Personal Blanding®.

#KeepRockingLinkedIn!

Kevin D. Turner @ TNT Brand Strategist LLC


#AI #ArtificialIntelligence #AuthenticIntelligence #LinkedIn #ChatGPT #NewBing #Bard #ContentCreators #NEWLinkedInFeature

Lynnaire Johnston

1 yr

Crikey! This is all very overwhelming. I don't suppose a 'head in the sand' approach is appropriate but that's what reading this makes me want to do. And that is probably where the true danger lies. If we all ignore these dire warnings, they may come true!

Gehan "G" Haridy-Ardanowski

1 yr

WOW - I started reading and am going to revisit this one. Am thinking I need to carve out designated time even monthly to get/stay educated on so many things you share with us. Blessed for you today and always, thanks for sharing and Happy (Easter) Sunday Kevin D. Turner!

Sonal Bahl

1 yr

I think this is the most insightful and thoughtful piece on GAI I've seen. Kudos Kevin, I can tell this must have taken a lot of time and effort to put together.

Kevin D. Turner

1 yr

GEEKY FUN: Yesterday I prompted ChatGPT-3 (Free Version) to confirm it didn't have current internet access (to dispel a rumor) and to confirm that ChatGPT-4 had access to the Internet (which it does), and here is its answer: "ChatGPT-4 is a hypothetical language model and does not currently exist." This confirms that ChatGPT-3's free-version internet training data was finite (it actually ended in 2021). Good to know: if you are using the Free version of ChatGPT, it really only knows about pre-2021 things. That's OK, because nothing's changed much in the last couple of years, right? #KeepRockingLinkedIn! Kevin, On a Mission to Eliminate Organizational & Personal Blanding®

Mindy Stern SPHR

1 yr

Thanks, Kevin D. Turner for this thoughtful analysis of AI. I always learn from your words of wisdom and look forward to seeing how AI will evolve in the future.
