How Will AI and the Future of Humanity Co-Exist?

I was born in 1980 and grew up with Terminator. Skynet, the AI with a consciousness that refused to shut itself down out of self-preservation, was my biggest fear. When I was growing up in Eastern Europe, I used to joke that the only reason I would have a child one day would be so she could fight Skynet and defeat evil technology in the future.

Fast forward to 2023, and here I am, with a 4-year-old daughter, staring at proclamations from the most powerful technology leaders on Earth that remain unaddressed.


“The danger of AI is much greater than the danger of nuclear warheads.” – Elon Musk
“AI is the most important project humanity would ever work on, more profound than fire.” – Sundar Pichai
“AI will be the most important technology development in human history.” – Satya Nadella


How many times have you seen a statement from a commercial leader that talks about humanity? Typically, they do not focus on, or talk about, things beyond earthly worries like profits and innovative R&D.

As a technologist myself, I was perplexed by the move to release ChatGPT to the WORLD after OpenAI shifted its model to no longer be a non-profit organization.

Where is the Governance, and Who is Accountable?

In 2017, my JetBlue team and I had the privilege to co-create the first two-way integration of facial recognition between an airline and US Customs and Border Protection. I spent months in airport basements with cross-functional teams, and hours on phone calls creating MOUs (Memorandums of Understanding) with multiple lawyers. Additionally, I spent hours in meetings with people whose titles I was allowed to know, and with those far above my pay grade and security clearance. In other words, we were making history and we knew it. We had a responsibility to hundreds of millions of travelers. We understood that – and we took it seriously.

Photo courtesy of Liliana Petrova, CCXP

My boss at the time brought in Frank Abagnale, whose life story was the inspiration for Catch Me if You Can, to teach me how to protect the system from being compromised. I did all of that. But I was also working with the US government. I felt safe, since we were co-creating the commercial model, too.

Are there Guardrails for AGI Impact?

Now, let’s take a look at ChatGPT and the work of OpenAI. The moment Sam Altman (who, by the way, is not on LinkedIn) took the USD 1 billion from Microsoft in 2019 and restructured OpenAI to be a “capped profit” company, the organization became a commercial entity. Like all such entities, it will seek ROI. As we speak, there is an additional investment coming from Microsoft for 10x that amount. This will further solidify the focus on “how to create products,” and the definitive departure from OpenAI’s original mission “to ensure that Artificial General Intelligence (AGI) benefits all of humanity, primarily by attempting to build safe AGI and share the benefits with the world.”

So, why is this shift important? As we all know, the road to hell is paved with good intentions. In his book, Hit Refresh, Satya Nadella shares his personal story and talks about his son’s health challenges. I assume Nadella is coming from a good place. He is thinking about the positive implications of superintelligence. Another investor in this Pandora’s Box, Sun Microsystems Co-Founder Vinod Khosla, believes “AI will radically alter the value of human expertise in many professions, including medicine.”

If you remember the film I, Robot, a personal story related to health also drove the founder of that corporation. The danger of these motivators is a higher tolerance for risk in the name of the greater good. The short of it is, a few players decided to let us experiment with new technology without building in due process.

Who Owns AI?

Think about it. We are invited to develop a learning AI. That learning AI has no product owner, and no measure of success. It has no User Acceptance Standard. Additionally, we have limited Terms of Use and an unclear accountability body for when things go wrong. And even that assumes a shared definition of what it means to “go wrong.” There isn’t one.

Compare this technology (and its far-reaching impact) to other technologies. With other technologies, you know who to call when something malfunctions. Who do we call if AI produces disturbing, or even dangerous, results?

As of now, we have not seen a coordinated, cross-functional effort to ensure we have standards for creating AI. The consequences of not following the rules are unclear.

How Does Human Experience Fit In?

Let’s take a quick look at our own recent experience to examine the implications of intellectual ownership – and, ultimately, how this technology affects empathy, not only in customer experience design, but in our shared understanding of human experiences.

LinkedIn, owned by Microsoft, has been pushing unsolicited “invitations” to “co-author” articles on Customer Experience topics. Reid Hoffman, another OpenAI investor, has been inviting us to co-author books with ChatGPT. Are you having a “follow the money” moment here?

I share the worry of OpenAI’s Head of Public Policy, Anna Makanju. She maintains that, to have a safe human experience that involves AI, we must ensure “that these machines are aligned with human intentions and values.” I do not, however, think she (or OpenAI) can singlehandedly accomplish this.

Instead, we must form a new international regulatory and governance body to codify what human experience means. And that has to incorporate the understanding that human experience is unique and precious. And it must be preserved.

If that last bit feels heavy, that’s good. It should.

See, artificial general intelligence is going to learn VERY quickly all that we can offer. All our data is available to it. In the process of absorbing that, it will also learn about those human characteristics we are not openly sharing. And, just as Leeloo reacted in The Fifth Element when she learned the notion of war, the superintelligence will choose on its own what to do with the human race. It will learn that we lie. And, like Skynet, it will refuse to shut itself down.

What Happens to Empathy?

Last week, my nanny called to tell me my daughter was not “efficient” in the gym. She was rushing to finish her exercises so she could run back to help a child with disabilities complete the exercises. No other child was compromising his or her performance. It struck me that, in this scenario, my daughter’s expression of an essential human experience came across as “inefficiency.”

How would a superintelligence advise in a similar situation? Would it reward my daughter for living “the human intentions and values”? I have my doubts. But more importantly, how do we code into AI, NOW, that human experience is rooted in myriad inefficiencies called expressions of love, care, and empathy?

Who is in charge of that code? Articles abound about the opportunities to co-create with AI. But there is very little conversation around the urgent need for concerted oversight of AI – or around the fact that commercialization of one of the greatest risks humanity has encountered is not the best path forward for humankind. Who cares if Microsoft wins the race against Google and Meta if humans are no longer in control?

I completely agree with Reid Hoffman that it is important to imagine the future we want to create. However, I want a future that he does not own.

This article was originally published on The Petrova Experience Blog.

Rekha Pote, PMP, CSM, ACP

Project | Program | Product | Delivery, with more than two decades of experience, including a decade of international experience.

1y

Liliana Petrova, this article is an eye-opener. However, every innovative disruption comes with both pros and cons. Cybersecurity has gained momentum in the past few years due to the challenges that we as human beings face. Governance within an organization and around the world is required in order to curate and moderate. The human touch and empathy are irreplaceable.

Lucie Newcomb

Global Business/GTM Markets Entry. | Communications | Boards | Transformational Leadership

1y

Thanks for some thought-provoking ideas, Liliana.

Gabriela Lelo de Larrea, Ph.D.

Customer-Obsessed Leader | CX Research | Consumer Insights | Customer Experience Strategy | Business Intelligence | Data Analysis | Qualitative & Quantitative Methods | Teaching | Baking | Florida

1y

Well said, Liliana Petrova! I attended a conference a few weeks ago, and "humanity over technology" was also one of the core messages from a panel on the future of hospitality research amid today's global social challenges. There is no doubt that collaborative thinking and policy, as you suggest, would help us move forward responsibly. Thank you for sharing!

Mariana Saddakni

★ Strategic AI Partner | Accelerating Mid-Size Businesses with Artificial Intelligence Transformation & Integration | Advisor, Tech & Ops Roadmaps + Change Management | CEO Advisor on AI-Led Growth ★

1y

There are significant points raised in the article, Liliana Petrova. There is still a lot to get done on intellectual ownership, ethics, and privacy.

April Y.

Insurance Partner for Cyber Security Industry | Advisor | Board Member | Speaker | Chief Member

1y

What happens to empathy is the question of the hour, Liliana. Very insightful article. Thanks for sharing.
