AI on fire at last

OpenAI's ChatGPT, an artificial intelligence (AI) chatbot, erupted last November. It surged to 100 million users in just two months, according to estimates from online intelligence platform Similarweb. On February 1, a UBS report partly relying on that data said ChatGPT may have scored those users faster than any other app in digital consumer history.

The swift embrace of ChatGPT drove hands-on AI acceleration in business and beyond. Initial reactions ranged from ecstatic celebration to surprise, confusion, and the dark pit of dread, as we all began to grapple with the spread of usable, accessible AI almost everywhere and all the time.

Up front

Since ChatGPT's release, millions have discovered how flexible and simple it is to use. Draft a question to the app. Get a fast and credible reply, often better than many humans can supply after a long delay. But don't yet expect it to be right every time. Both software and people make a lot of mistakes.
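
To make that loop concrete, here is a minimal sketch of asking a question and printing the reply through OpenAI's API. It assumes the 2023-era openai Python package (the 0.x ChatCompletion interface) and an OPENAI_API_KEY environment variable; the model name and question are illustrative only.

    import os

    import openai  # assumes the 0.x-era SDK: pip install openai

    # Read the API key from the environment rather than hard-coding it.
    openai.api_key = os.environ["OPENAI_API_KEY"]

    # Draft a question, get a reply -- the same exchange millions run in the app.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # ChatGPT's underlying model at launch
        messages=[
            {"role": "user", "content": "Explain a foundation model in two sentences."}
        ],
    )

    print(response.choices[0].message.content)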

ChatGPT is based on a large language model (LLM) built atop a foundation model trained on natural language data. Different kinds of foundation models are or will be built on visual, audio, and other kinds of data as well. But today's foundation models have their limits: some types of data don't yet fit the approach, and other types of AI remain essential for important business and other uses.

AI means business

Generative AI can perform a vast array of business tasks, in whole or in part, that only humans could perform before, like marketing and sales, R&D, and a blizzard of business processes. The fates of human labor in this mix are not yet known. No technology in human history has sought to emulate, and then exceed, the best of human mental skills and self-awareness.

Global consulting firm McKinsey has just produced a new report titled "The economic potential of generative AI: The next productivity frontier." It contains 15 exhibits that project a large but uneven rise in productivity across industries, made possible by generative AI and other technologies.

But the news is not so good for well-educated knowledge workers. Exhibits 10, 11, 12, and 13 in Chapter 3 chart McKinsey's projections of a massive shift in work for educators, professionals, creatives, and knowledge workers who collaborate, manage, and possess expertise of any kind.

Among the report's updated key insights, McKinsey estimates that generative AI, together with other technologies, could automate activities that absorb 60% to 70% of workers' time, in a process that will continue for many years to come. That's up from 50% in its earlier research.

Who will pay to reskill and upskill this army of workers across all industries, and reconfigure or invent all those jobs to employ them? Much will depend on who or what will guide the future of this new AI. But for now we know that generative AI grows more capable with each iteration, as top AI providers grow their markets and boost their reputations.

Fast tracks

Most companies will likely buy generative AI off the shelf for now, and tweak it to suit their purposes. Multinationals may use this new AI in different ways in countries under different laws. A large domestic enterprise may use generative AI for different outcomes in separate divisions. Others will customize their investments by feeding their own data to an existing LLM through the techniques of prompt engineering, as sketched below.
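
As a sketch of that kind of customization, the snippet below shows prompt engineering in its simplest form: placing a company's own data directly in the prompt so an off-the-shelf LLM answers from that context, with no retraining. COMPANY_FAQ, build_prompt, and send_to_llm are hypothetical names; in practice, send_to_llm would be wired to whichever chat API the company uses.

    # A minimal prompt-engineering sketch: ground an off-the-shelf LLM in a
    # company's own data by placing that data directly in the prompt context.
    # COMPANY_FAQ and send_to_llm are hypothetical stand-ins, not a real API.

    COMPANY_FAQ = """\
    Q: What is our refund window?
    A: 30 days from delivery, with a receipt.
    """

    def build_prompt(question: str) -> str:
        # No retraining happens; the model simply reads the supplied context.
        return (
            "Answer using ONLY the company data below. If the answer is not "
            "in the data, say you don't know.\n\n"
            f"--- COMPANY DATA ---\n{COMPANY_FAQ}\n"
            f"--- QUESTION ---\n{question}"
        )

    def send_to_llm(prompt: str) -> str:
        # Stand-in for a real chat-completion call (e.g., the earlier sketch).
        raise NotImplementedError("Wire this to your chat API of choice.")

    # Inspect the assembled prompt that would be sent to the model.
    print(build_prompt("How long do customers have to return an item?"))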

But some may boost their market dominance and raise barriers to entry through innovative home-grown plays:

  • Quick off the mark, tech titan Microsoft invested US$10 billion in OpenAI last January, and is working hard to launch two new AI apps based on its partner's foundation models. The company presents Microsoft 365 Copilot as "your copilot for work," while Microsoft Security Copilot empowers "defenders at the speed of AI." The latter is intended to bolster Microsoft's already powerful cybersecurity apps, while helping to reduce the effects of a yawning global gap in cybersecurity talent.
  • Bloomberg is a leading global business-news organization and colossal financial-data engine. At the end of March, the company published a research paper presenting its plan to create BloombergGPT, an LLM with 50 billion parameters designed to meet the financial industry's natural language processing (NLP) needs, while hitting all the benchmarks of a general-purpose LLM as well.
  • In late April, the U.S. branch of global Big Four accounting and consulting firm PwC announced a US$1 billion investment with Microsoft over three years to scale PwC's embedded AI operations across tax, audit, and consulting through OpenAI's ChatGPT and GPT-4 and Microsoft's Azure OpenAI Service. PwC will also upskill its 65,000 U.S. workers in the use of generative AI to increase productivity, strengthen careers, and better serve the firm's clients.
  • In early May, tech legend IBM announced a new platform called "watsonx" for foundation models and generative AI. The platform will offer a studio, data store, and governance toolkit as separate services. Big Blue will also collaborate with AI start-up Hugging Face to offer open-source AI models. As part of this initiative, IBM Consulting has announced a "Center of Excellence" for generative AI, with more than 1,000 AI experts available to help clients through an AI-enabled business transformation.

A world of risk

The stakes are high for business. The Eurasia Group's Top Risks 2023 includes Risk 3 titled "Weapons of Mass Disruption." It's all about AI. Amusingly or not, ChatGPT created the title in "under five seconds." Among its many threats, Risk 3 says generative AI might make it tough for investors and any business to tell actual "engagement and sentiment" from "sabotage attempts" by a hacker, competitor, or activist investor.

In the hands of bad actors, generative AI can bring a lot of trouble with it, from fake news, disinformation, and impersonation to civil rights and IP violations. Some of these offenses can threaten public institutions and stoke communal violence when they inflame the gullible and uninformed. And it may be difficult for anyone to tell reality from fiction under the spell of this AI.

The hunt for trust

The U.S., EU, and China are the world's three largest markets. No surprise their governments want to limit politically destabilizing and criminal use of generative AI. But there's a catch. What about the use of this AI within and through the powers of these governments?

China moved fast and holds the lead. The government published its new proposals for AI regulation on May 8, and the push is on to review these measures and pass them into law this year. Composed of 21 articles, the draft guarantees consumer protections and well-being, AI transparency, and strong IP protections.

The articles also contain declarations of government authority and a stern, quick willingness to punish any noncompliant AI provider. So far as the private sector is concerned, China's all in on trust. That should sell well at home and abroad, including in the underserved and rising Global South.

The EU has 27 sovereign members, and they must all agree on the content of their long-discussed AI Act. Protection versus innovation is a choke point. Early drafts targeted specific risky situations, but the burst of ChatGPT onto the European scene caused a fundamental shift in the intended regulations. Now the authors hope to finalize an act that will impose transparency on the use of generative AI in nearly any situation, as well as ban AI-driven predictive policing and AI-enabled facial recognition in public locations.

The act is scheduled for a vote in June, and any bill that passes must then enter talks among three separate European institutions before it can become law. Assuming a final act passes by year end, it will likely not take effect until 2026, at least two years after China's rules, though a voluntary code to bridge that gap is under discussion.

The U.S. lags China and the EU in AI regulation. The "Blueprint for an AI Bill of Rights" and the "AI Risk Management Framework" are two voluntary codes, created by the White House Office of Science and Technology Policy and the National Institute of Standards and Technology (NIST), respectively.

On May 4, Vice President Kamala Harris held a closed-door meeting at the White House with a few leading AI providers and key administration officials. The meeting included a "frank and constructive" exchange about the need for AI providers to be more transparent with policymakers about their products, and to test those products for safety before they go public.

The next day, White House meeting attendee and OpenAI CEO Sam Altman said he supports the right of people to know when they're engaging with an AI. He also said his company is working on new models that might make it possible to compensate someone when an AI system uses that person's content or creative style.

On May 16, OpenAI's Altman, IBM Chief Privacy and Trust Officer Christina Montgomery, and NYU professor emeritus Gary Marcus testified before a Senate Judiciary subcommittee on a host of contentious AI-related issues. It's not yet clear what action the U.S. Senate may or may not take on AI regulation.

On May 19, leaders at the G-7 summit in Hiroshima, Japan, spoke of the need to govern the use of generative AI in compliance with shared G-7 values. The summit leaders announced a cabinet-level "Hiroshima Process" to discuss the potential for shared governance of generative AI and report back to the leaders by the end of 2023. Yet each member nation will pursue its own view of AI regulation to suit its sovereign interests.

Walking the line

On May 22, ChatGPT provider OpenAI posted a lengthy statement on its website titled "Governance of Superintelligence." From this thoughtful, optimistic, and balanced manifesto, full of nuance, two points jump out:

  • Within ten years, it's "conceivable" that "AI systems" will surpass the skills of human experts "in most domains" and equal the output of even the largest corporation.
  • The powers of AI are similar to those of nuclear energy and synthetic biology. Like those technologies, "superintelligence" needs effective regulation to realize its great promise, while avoiding a potential existential risk for humanity. That means a multilateral institution like the International Atomic Energy Agency (IAEA) may well be necessary at some point to inspect, audit, test, and place restrictions on the deployment of AI above a given threshold.

We own AI

At a meeting with students in September 2017, Russian President Vladimir Putin told them the first nation to master AI "will become the ruler of the world." The dark side is never far away.

Adoption of this new AI now quickens by the week. Countries boast of policies to govern it. Business leaders sell their plans for AI-leveraged growth. Frenzied traders snap up anything AI.

Modern humans stand alone as the sole surviving species of an ancient primate genus buried in the dust of time. All the flavors of this new AI will come alive in a world long gone human. For now and for an uncertain time to come, the choice will still be ours: to use this awesome new AI as a tool to build a more benign future, or as a weapon to deceive, divide, subjugate, and threaten others of our kind.

____________________

For more information, see these sites, last accessed on June 19, 2023:

OpenAI manifesto: https://openai.com/blog/governance-of-superintelligence

McKinsey report on generative AI: https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier

Eurasia Group, Top Risks 2023, Risk 3, "Weapons of Mass Disruption": https://www.eurasiagroup.net/live-post/top-risks-2023-3-Weapons-of-mass-disruption

BloombergGPT, what it takes to create an industry LLM: https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/?linkId=207795121

TIME, The A to Z of Artificial Intelligence, a technical and business glossary: https://time.com/6271657/a-to-z-of-artificial-intelligence/
