AI’s role in future space exploration and how we need to manage rather than restrict new technologies with Terry Virts, Astro Human at Group Of Humans
Photo taken from the ISS showing the Northern Lights. Copyright, Terry Virts.

Tell us a bit about your background and how you came to realise such a huge dream and become an astronaut…

Colonel Terry Virts: International Space Station Commander, Space Shuttle Pilot, Test/Fighter Pilot, Human

I grew up really, really wanting to fly. As a kid I read a book called ‘The Right Stuff’ by Tom Wolfe, and it had a big motivational effect on me. It showed me the path to becoming an astronaut: first become a fighter pilot, then a test pilot – so that was the path I took. I figured I may as well try, so I just went out and did the things I needed to do, not to check boxes but because it was my passion.

Eventually I ended up getting picked and became the youngest shuttle pilot at NASA. I was very fortunate but it taught me a valuable lesson to never tell yourself “no”. You never know if something’s going to happen or not, but if you tell yourself no, it definitely won’t. It doesn’t matter what your dream is, that first step is super important.

Then I had to wait a long time – almost a decade at NASA before I finally flew – but it was worth the wait: I spent more than seven months in space. While there I worked on the film A Beautiful Planet and really fell in love with filmmaking, so after I left NASA I continued with it and started writing books as well.

Technology is obviously a huge part of space exploration, was early AI ever part of your experience?

It was – on my last space flight! I had a task list that included some science payloads, one of which was an AI experiment using software to help astronauts make certain decisions. This was obviously very limited in scope – it dealt with the water reclamation system, which is usually run and maintained by mission control from the ground.

It’s a lot cheaper to hire people on Earth than it is to use astronauts in space, so it’s logical that NASA wants mission control to do as much as possible. That’s great if you’re in Earth orbit, or even at the moon, where communications are only a couple of seconds away – but when you go to Mars you may be minutes or tens of minutes away.

If there’s some sudden decision that needs to be made on board, you can’t wait all that time for help from Houston. So this system I was testing was designed to help us make decisions in those situations. I’d call it AI-ish.

But I’d also already been introduced to that when I was a fighter pilot. The F-35 I flew used a computer to integrate all the different sensor data onto one screen and prioritise what to show – you know, these planes in green are friendly, these in red aren’t… which is great. If it’s correct.

That’s the big ‘if’ with AI isn’t it… How big a role do you think it will play in the future of space exploration?

All of aerospace is starting to use AI for things like this. And it’s great; it helps. But if we finally get to the point where the programme itself starts to learn deep in the bowels of the code, that’s when I think it gets pretty scary.

At the moment, to a certain extent AI is just lots of memory; enough that it can study sufficient data to recognise patterns. ChatGPT for example has access to almost all human literature (hopefully it even read my books…) but it’s not intelligent, it’s just pattern reading from that data – artificial plagiarism, not artificial intelligence.

In space exploration that’s useful. AI can study the 30-year history of the space station and know better than us what’s broken or what might break.

We once had an incident on the space station when the computer flagged a suspected ammonia leak – something that, had it been real, would have killed the astronauts on board. Mission control almost drained all of our ammonia out into space, which would have been a disaster for the space station programme. It turned out the computer was actually spitting out bad ones and zeros because of a radiation problem – but if we’d had AI, it might have recognised the bad data immediately rather than taking us hours to figure it out.
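The idea of recognising bad data before acting on it can be illustrated with a toy plausibility check. This is a minimal sketch, nothing like the actual ISS software: the sensor values, the function name and the threshold are all invented for illustration. The intuition is that a radiation-induced bit flip produces a wild outlier, while a real leak shows up as a trend the recent history gradually follows.

```python
# Hypothetical sketch: sanity-checking a telemetry reading against recent
# history before raising an alarm. All names and numbers are invented.
from statistics import median

def is_plausible(history, reading, max_jump=5.0):
    """Reject readings that jump implausibly far from the recent median."""
    if len(history) < 3:
        return True  # not enough context to judge
    baseline = median(history[-10:])  # rolling median resists outliers
    return abs(reading - baseline) <= max_jump

# Simulated ammonia-pressure telemetry (arbitrary units)
history = [101.2, 101.3, 101.1, 101.4, 101.2]
print(is_plausible(history, 101.5))  # ordinary reading: plausible
print(is_plausible(history, 487.0))  # bit-flip-style outlier: rejected
```

A real system would of course cross-check multiple sensors and model the physics, but even a crude filter like this captures the “is this data believable?” step that took the crew hours to do by hand.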

So I think smarter AI can help you fly your spaceship and know if a problem is real or not, and there are two pillars to that. We can send data back to Earth to be crunched by AI there – what does this planet’s geology mean? Is that a sign of life? Where is interesting for us to explore?

And we can use it on board for more immediate decisions. Do I need to jump in the escape pod now? Do we want to land in that crater? Is that area dangerous?

How reliant do you feel we should be on new technologies that can have such a big impact on our lives?

If I were President, I would create what I call the SSAR cabinet position. I think the human race needs some thoughtful discussion about, and serious regulation of, social media, surveillance, AI and robotics – and the damage they can do and are doing.

I think they’re all a real threat. Social media can overthrow governments. The surveillance state we live in today would make Orwell jealous. Robots like drones are already killing machines.

There are sensors all around us all the time gathering data, and we’re being tracked – or are capable of being tracked – 24/7. That’s a really scary thing if you live in, say, China, where you can’t get a job unless you’re considered morally pure. Just because we’re excited by the capabilities of all these technologies and what they can do doesn’t mean we should use them.

For many reasons, a lot of people are saying that AI is the bigger threat. But honestly I think that’s overhyped right now; it’s going to take a lot longer than people think, and we should be more concerned with the more immediate problems. We need something like Isaac Asimov’s ‘Three Laws of Robotics’: that a robot can’t hurt humans or allow them to be hurt, that it must obey human instructions, and that it must avoid situations that could cause it harm.

There’s no United Nations treaty, no international organisation, no standardised procedures or policies for any of them. What happens, for example, if an AI ‘virus’ leaks from a lab, as covid may have done? What are the containment and safety procedures? What could it do? How do we stop humans weaponising AI before it becomes a danger in and of itself? I don’t think we’ve thought through the potential consequences of any of this enough.

So what can we do?

SSAR is something I think I can work with Group of Humans to promote and highlight through our networks by connecting people.

Unfortunately, organisations like the United Nations are wholly ineffective in helping against the threats – both political and human – that SSAR poses. Surveillance is enabling a totalitarian hell we need to work to avoid. People worry AI and robotics will take their jobs. But if we can manage these technologies as we have in the past, they can create new jobs and increase productivity and efficiency. After all, we’re better off because we have cars rather than horses, right?

Suki Fuller

WINNER 2023 Most Influential Women in UK Technology | Intelligence Fellow @ The Council of Competitive Intelligence Fellows | Speaker, Host, and Moderator | Board Advisor @ Tech London Advocates & Global Tech Advocates

1y

Love this phrase... “artificial plagiarism, not artificial intelligence”. I am so stealing this to use in my next discussion, Terry.
