The ethics of Artificial Intelligence with Suki Fuller, competitive strategic advisor and analytical storyteller with Group Of Humans
Tell us about your background and how you got into tech security…
It’s been a roller coaster ride! I started out in chemical engineering but didn’t like being alone in a lab eight hours a day. Through my academic studies I came across a program for intelligence studies, got accepted, and loved it. I got trained to be an analyst in theory and practice, worked on national security projects in law enforcement and the corporate world, and at the same time I was in the US Army Reserves.
So when I left academia I walked straight into a role with the Department of Defense and a certain three-letter agency, working in technology intelligence, which is where the seeds of my career in technology were planted.
What are your views on the recent rapid growth and adoption of AI?
It’s exciting but also scary. I look at what’s happening with AI and see it as an allegory of life, because AI has grown from the input of everyone, just as a life grows from the input of its experiences. But right now AI is growing really rapidly, and that’s the scary part. Humans have the ability to stop, review what we’ve done – good and bad – reassess and change direction. But AI isn’t really learning from past behaviour; right now it’s just a bunch of prompts responding to closed questions, which is a problem. Humans have open-ended conversations, ambiguity and nuance based on ethics and interpretation. What’s worrisome is that at the very core of AI, there’s no ethical framework in place.
Is technological development outpacing the ethics and frameworks AI should be built on?
Oh yeah. It’s outpacing all legislation. If you look at privacy and security, these have crept along, getting more and more legislated as they’ve developed, but AI has just overshot that – and that’s because everything in AI includes what you have in privacy and security. So everybody’s saying ‘Oh, it doesn’t really matter, that’s already covered for AI’. But the thing is, AI includes everything else, every aspect of our lives, because that’s what we’re putting into it – so it’s collecting people’s data without authorisation; not hard data like a date of birth, but the data that’s in our brains.
What we needed at the very beginning was to lay the groundwork of principles for what you can and cannot do with AI. But we’re past that point and can’t go back. So the question now is how to set up a framework going forward, but that’s very hard to put in place for something that’s already running out of control.
Do you think it’s possible to create an ethical framework for AI now?
I have good days and bad days! I think of it like gun control in the US – there are people with semi-automatic and military grade weapons already, and the government wants to create legislation to say you’re not allowed to have these any more. Realistically, how are you going to be able to enforce that? And it’s not just AI – it’s the same with any technology: once it’s been released, how do we contain it if we live in a free world society rather than a closed gate society?
Certainly the answer isn’t to ban it, as is happening in countries like Russia, China and North Korea. There are some frameworks in place around the ethical use of technology already, like the EU’s GDPR, but what I find frustrating is that there isn’t a single body governing AI’s use, just as there isn’t for the world wide web. Instead you have different countries with competing agendas creating their own legislation that can’t be applied globally, so it gets bypassed. And I just don’t see that changing.
But do you see AI being used in a positive way, to promote diversity and find solutions to the big social and environmental problems of our age?
Absolutely. There’s so much positive possibility in what AI can do and I’m excited by that, but right now my concerns are winning out because the positive agenda to be ethical and inclusive is often seen as a restriction on freedom of expression. It’s a reiteration of how we’ve built other technologies, most of which were not built inclusively. But hey, I do think my background in intelligence means I’m always looking first for threats rather than opportunities!
But there are always people who will strive to constantly find opportunities to build for the greater good. Unfortunately, I feel there’s an imbalance between them and people more interested in self gain. We’re a liberal society and although the dream is that we should all be working together with the same core agenda, the simple truth is we all want different things and we can’t please all the people all the time.
In large corporations it’s getting worse though. Microsoft just fired its whole AI ethics and society team! I mean – these are the people you need, the people asking questions, challenging how and where we use technology, and making sure ethical principles manifest in product design. Instead, companies are fighting to see how far they can advance their tech and profits, and that’s coming at the expense of accountability.
So is this creating a foundational gap in the holistic design of AI products and services?
I think AI is like a reflection of how the world is right now: it’s getting more connected and more out of control. A lack of accountability at a foundational level means a lack of protections and oversight for everybody using AI. And no one has actually gone back to the foundations AI was built on, to the core legacy systems, and looked for that failure point – instead they’ve just kept building on it and hoped the problems would get ironed out along the way, which of course hasn’t happened.
When you look at AI services now – at ChatGPT, at Midjourney and so on – and you look at their capabilities, you can see we’re already past the point of no return. Now it’s just a matter of people realising how much they’ve given away already, of being aware of when, where and how that data is being used, and trying to gain some control of that. That’s one reason why I love being part of GROUP OF HUMANS. Our diversity and range of expertise and experiences means there are lots of viewpoints and perspectives on AI that we can tap into to provide a more holistic view of how it can and should be used. We find commonalities, problem solve collectively, create new approaches, and as part of that I get to drive the ethical side forwards, which feels pretty good.
WINNER 2024 Powerlist: Meta - Tech, AI and Innovation Award | Intelligence Fellow @ The Council of Competitive Intelligence Fellows | Speaker, Host, Moderator | Board Advisor Tech London Advocates & Global Tech Advocates
1 year ago: Thank you GROUP OF HUMANS for allowing me to share my thoughts and hopefully just make people more aware than frightened!
Ethical product leader, Fractional CEO, CPO, Investor, and NED. 28 years start-up and scale-up experience in the EU and US. Proven track record in year-on-year business growth and M&A.
1 year ago: Suki, good high-level discussion. A couple of key points, and the reason (for me) why it’s scary. Is emotion a good thing? Humans created ethical frameworks for the good of humans, not for other key issues like the health of the planet and the survival of other species that keep the ecosystem in equilibrium. Would a cold, binary, non-emotional AI make important decisions for the benefit of the entire ecosystem, rather than for our own selfish species? Or would it take a Thanos click-of-the-fingers approach? After all, wars, greed, inequity, etc. are all caused by human emotions (not the good ones). The second point: it’s hard to see how AI does not completely flatten the entire ‘thinking’ workforce, leaving only ‘doing’ jobs. Put it this way: I am not sure our children should be blindly heading off to university to study a subject that is already obsolete. What should they focus on instead? That’s the big dilemma as a parent. Could making, farming, feeding and fixing be the only jobs left? I think that AI will force humanity back to the skills we have left behind. Time to pick up real tools again and make things people want, things that are valuable. It will be a fascinating few years… thanks for posting.
Strategic Creative Director | B Corp Founder | Positive-Impact Focused | Purpose-Driven Leader
1 year ago: Thanks for sharing your professional opinion, Suki Fuller. Whilst AI can be useful within the creative industry as an efficiency tool, I’ve personally got no idea of the potential issues and pitfalls.
Scholar researching creativity in business ? Artist ? Circular Economy Advocate ? Conduit for Positive Impact ? Hult France Chapter Lead ? #madetodo #standwithUkraine #notowar
1 year ago: Superb read with such clarity. Thanks!
ALL OF US GROUP
1 year ago: Brilliant.