CSIRO’s Stefan Hajkowicz on the ethical future of artificial intelligence

OCT 24, 2019 | 14 MIN

Stefan Hajkowicz leads CSIRO Futures, a team of researchers, analysts and consultants working on strategy and foresight projects to help government and industry organisations adapt to a rapidly changing world. At CIO Edge, Hajkowicz spoke to ADAPT’s Partner Matthew Hanley about the future of AI, its ethics, how to lead it, and the collaboration happening around it.

Matthew Hanley:

I’m with Stefan Hajkowicz from Data61 and the CSIRO. Welcome back.

Stefan Hajkowicz:

Thank you very much.

Matthew Hanley:

You’re a familiar face and well-received by our audience. Artificial intelligence is again creeping higher and higher as a top priority for CIOs. You were presenting a really interesting topic today. For the audience who missed your session, would you be able to elaborate on what you were speaking about and some of the key takeaways?

Stefan Hajkowicz:

So really today was about what artificial intelligence is and why it matters, what the sub-branches of the science are, where it might go in the future and what it’s going to be capable of. So the first part of the presentation was focused on the power of this technology.

“I would say it’s like when humanity was inventing fire or when we were inventing electricity, we’re now inventing artificial intelligence.”

Machine learning, which is a subfield within artificial intelligence, is about giving the machine the ability to problem-solve on its own without explicit guidance from a human being, and with deep neural nets and various types of deep learning we’ve seen some pretty significant breakthroughs, and there’s every expectation that that continues. The platform of technological capability is really high today and we are likely to see machine learning continue to be unleashed into the world. So it’s exciting, and most of the presentation was about that. But then we got onto the ethical implications, the practical implications for your business and your career, and what you need to do. I do believe that by 2030 AI will have got into every career and every industry and every job and really reshaped how we’re doing things. And if we do it well, it’s excellent, it solves a lot of the biggest problems for us.

Matthew Hanley:

I’m a big movie buff, sci-fi buff. I know in the 1980s we saw a lot of films which were predicting what AI was going to be. In recent years the public probably has seen Google’s AlphaGo and how intelligent AI is getting. How much of an impact is it going to have in let’s just say the next 11 years then, from 2019 to 2030? What are your predictions there?


Stefan Hajkowicz:

So our analysis revealed there’s no pathway to artificial general intelligence, the sort of science fiction interpretation where artificial intelligence is acting of its own motivation. We can’t find that. It’s interesting to think about and shouldn’t be dismissed, and a lot of research in AI sort of breaks into these two categories. One is narrow artificial intelligence, which is solving a particular problem, like the strawberry-harvesting robot that knows it’s got to reach out and pick a strawberry. A complex task for a machine, but it still has a set of objectives.

“Artificial general intelligence is a robot with no clear objective that is sorting out its own objectives, which scares us all to death.”

However, there is no way we can sort of see that happening from where we are at the moment. It’s a lot more mundane but incredibly potent. So, boring artificial intelligence is still very powerful.

Matthew Hanley:

And there’s this whole dynamic people have, which is that the robots will steal our jobs, but I think maybe it’s just going to automate those jobs which were so mundane to do anyway.

Stefan Hajkowicz:

Right. So accountants didn’t lose their jobs to spreadsheets, smart accountants learned how to use spreadsheets and got better jobs. For a lot of professions, it’s like this. The most recent analysis by the OECD suggests a small portion, about 14% of jobs, train drivers, checkout operators, might be at high risk, because you can see how the entire job disappears when the system is automated. Those people can still transition into different jobs, but that job might go. But for the bulk of professions, the job doesn’t disappear, it changes the nature of the job. As a scientist, I need to learn Python, I need to learn TensorFlow, I need to learn data science, because by 2025 I’m not that useful if I can’t do these things, or rather, they significantly improve my ability to do science. All science is kind of becoming data science and artificial intelligence-enabled too, so it’ll increase my usefulness. So it changes the skillset and the skills profile. But it’s about transition, it’s about learning to plug your skills into what the machine does, and ironically, your humanness is what gets more valuable in this world.

“Being a human in a world of robots is good because you’re differentiated.”

So your emotional intelligence, your reasoning, your judgement, your ability to handle ambiguity and complexity, robots are hopeless at this sort of stuff and are going to be for a very long time.

Matthew Hanley:

How are the ethics of AI being created at the moment? How does that structure come together?

Stefan Hajkowicz:

I want to say I think AI is going to improve the ethical performance of humanity a lot. AI can be better than us if we train it the right way. Feed the wrong data into AI and it can be unethical. The Microsoft chatbot that went onto Twitter turned racist and misogynist within minutes and had to be unplugged. That was because it had read all that stuff online and had learned to behave that way. Had they fed it something better or created rules and structures, it would have been better. But I think, ultimately, a lot of miscarriages of justice happen in the criminal justice system due to human error and human bias, and AI can help eliminate some of that.

“AI is not something that is going to make ethics worse. It will improve our ability to be ethical but we need to be sure we’re doing it the right way.”

Privacy gets really important. AI can compromise people’s privacy more than ever before. So we need to be acutely aware of the risks to your privacy if a piece of AI is mashing data from one source with another source, the possibility it inadvertently or purposefully releases information about you that it shouldn’t, and we need to stop that. Privacy is a human right and we need to make sure it’s upheld, and that can be done. We need to ensure that there is transparency and explainability. For an AI algorithm that makes a significant decision about you, one that impacts your life, we want to make sure that you have the ability to ask why. If you get knocked back from a job you applied for because a piece of AI screened you out, you want to know why and how, and I think that’s fair enough. Contestability is another principle we think is important, that you have the ability to contest the decision of an AI.

So AI done well has a lot of these attributes; it’s humanised, a very human-centred approach to AI. And I am highly optimistic that AI is about improving the criminal justice system, improving the fairness of job selection processes, improving fairness and equity in all sorts of environments, if done well.


Matthew Hanley:

And I know we were speaking earlier about the recession in 2008 when the global financial crisis was happening and how the rest of the world probably had to tighten up and become more innovative, and Australia was kind of really leaning back on iron ore and oil and gas and mining. What needs to happen now with AI in Australia?

Stefan Hajkowicz:

So during the 2008-09 financial crisis years, the rest of the world was getting smashed compared to us. We did really well. Employment was good, GDP growth rates didn’t go negative, we went through that very well. But then the mineral and agricultural commodities we were exporting, particularly mineral commodities like iron ore and coal, had very strong prices. That has since fallen away. But I think there is a risk of complacency. Technology is about to really alter the landscape and there’s no ‘it should be all right, mate’ here. This is something we’ve got to keep pace with because the rest of the world is moving super fast. Billions and billions of dollars have been invested in AI in the last three years that we can count, and we can see so much capability development, advances in research and development, occurring right across the world. The Asia-Pacific region especially has become a new champion and strong contender in AI capability. South Korea, Taiwan, China and Singapore are all doing incredible stuff with AI. So these capabilities will effectively come to fruition in a decade or so, and Australian industry and Australian workers need the same sort of capabilities to compete in this real-world market that we’re moving towards.

Matthew Hanley:

What collaboration is happening on the ground here then? What groups are starting to form to really get this up to government level and to make a change?

Stefan Hajkowicz:

There’s a lot going on in Australia and we’ve had 10 years’ worth of R&D in various aspects of AI here. We’ve got deep capability in various fields of AI, too. The Robotic Vision Centre at QUT is fantastic, it’s world-class in what it’s doing, and there’s the Australian Centre for Field Robotics. I think our analysis would point to three core areas of expertise that Australia could capitalise on. One is AI for the great outdoors. Agriculture, mining, digging, searching, rescuing, all sorts of outdoor activities with AI, we do really well, especially mining. A lot of the Pilbara, in Western Australia, is fully automated. Another is aged care and healthcare. Australia developed the bionic eye, which can provide vision to people who are blind with certain types of conditions. It’s amazing. This technology uses AI, and other technologies as well, but it’s incredible in what it’s capable of doing. We can look to the fields of health and health sciences and find that AI is something we’ve been developing there for a long time, and will be for a long time to come.

“The CSIRO has worked on using AI and machine learning to predict how many people are going to come into a hospital at certain times so resources can be scheduled.”

We’re good in that space. Cities and infrastructure is something else we’re pretty good at. The Sydney traffic light scheduling system, I think it’s called SCATS, I can’t remember exactly what the acronym stands for, but that’s decades-old technology out of Australia that’s been sold around the world and makes traffic around the world flow a lot more efficiently. And then there’s vehicle-to-vehicle communication. We’re not going to do everything in automated cars, probably Germany and the United States are leading the charge. But there are parts of that marketplace, that massive $100 billion-plus global market, that we’re really good at. Vehicle-to-vehicle communication is one of those. We’ve got startups in Australia that are leading the way on that sort of work. So there are spaces where we’re really strong; I think the challenge now is a concerted national effort to give it profile. The sort of things that need to happen are workforce up-skilling and repositioning. People need to be transitioning from current jobs, well, current tasks and skillsets, into new ones, from the individual level to the whole workforce. Industry enablement. And in R&D, I think we want to pick up some of these challenges ourselves. We’re not just an AI taker, we’re an AI maker as well. In some spaces we can buy AI, but we’re a maker. There is also a bit of an issue around sovereign risk. You can’t just buy AI from somewhere unless you know what’s in it, especially if it’s running the traffic grid or scheduling aircraft or operating the weather forecasting system, and we can’t put mission-critical applications into AI we don’t understand.

“We’ve got to absolutely know what’s under the hood or we’ve got to build it ourselves.”

So there’s this sovereign capability requirement in there as well. But there are a lot of things that we can be doing. We’ve got the capability, and I think the next step is concentrating that at the national level into something with scale and visibility that really has an impact.


Matthew Hanley:

What’s stopping Australia from being the leader in AI? I look at things like Wi-Fi and other technologies which have been developed in this country. Is now not the time where we grab something like this and become a nation that leads in this area? I’m sure there are other countries doing it well, but is there anything holding Australia back there?

Stefan Hajkowicz:

There shouldn’t be. I think it’s a fair question, but let’s watch this space over the coming years and start to see how it takes shape.

“I absolutely believe there are some spaces where Australia leads the globe in terms of AI development, building new industries to solve big problems at home, such as aged care or the social challenges with drought in the agricultural sector.”

AI actually comes into play to help solve these national dilemmas here. And as we find the solutions with AI, we can build new industries which sell those solutions to the rest of the world. We’re not the only ones dealing with drought, we’re not the only ones dealing with aged care, we’re not the only ones requiring smarter city infrastructure. Cleverly selected, my belief is that we should actually pursue a specialisation strategy in technology. Some people would say you’ve got to be agnostic and let the marketplace choose, and that’s fairly true, but I think we actually do pick winners. We try and select some areas and we invest, and we’re ready to change and adapt as we go along. In actual fact, Singapore built a shipping port before Asia knew it needed one, and it worked out well. They built an airport before Asia knew it needed one. They’re currently building a fintech sector which puts them at the centre of fintech. Singapore is pretty damn strong. Its GDP per capita is above ours. It’s looking really good. So I actually am a believer in smart, adaptive technological specialisation, and I think Australia can absolutely do this. We can be an AI maker and seller in certain spaces, and in other areas we’re a fast follower and a quick adapter. But there’s no unplug option here, there’s no will it or won’t it happen. It’s happening and we’re in the middle of it, and we need to ride this wave.
