Human Intelligence (and Values) first!

Artificial Intelligence (#AI) technologies have already impacted our society, in the way we consume news, plan our day, shop, and interact with our family, friends and colleagues. In ways large and small, almost without us noticing, artificial intelligence has become an integral part of our day-to-day lives. It powers the apps that show us the fastest way to get from place to place, lets video and music streaming services predict what we might want to watch or listen to, and enables spam filters to detect junk email and credit card companies to prevent fraud.

But this is just the start

AI-based solutions are beginning to transform people’s lives in ways that would have been difficult to imagine just a few years ago. Two decades from now, what will our world look like? And how do we ensure that AI systems are designed and used responsibly, to the benefit of our planet? We are only in the early stages of understanding what artificial intelligence systems will be capable of. What we do know is that, beyond touching our personal lives, AI will enable breakthrough advances. Rapid improvements in technologies such as neural networks and voice and visual recognition have brought us to the brink of a new era in which computers can perceive, learn, reason, and make recommendations. The impact of this shift is so far-reaching that I believe we are in the early stages of the largest transformation humankind has ever experienced.

However, AI is only as good as the data that it learns from

I’m excited about the limitless opportunities that AI provides us. But I also recognize that it is important to build an ethical foundation to guide the cross-disciplinary development and use of artificial intelligence. As this technology gets more sophisticated and starts to play a larger role in society, it is imperative for companies and society to develop and adopt clear principles that guide its use and prevent bias from creeping in.

At Microsoft, for example, we’ve developed an internal advisory committee to help us ensure our products adhere to principles such as fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. These guiding principles will help ensure the AI tools and services we create assist humanity and augment its capabilities.

We have adopted these six principles that we think will serve as a clear ethical framework to guide our ongoing work to develop and deploy AI:

·     Fairness: When AI systems make decisions about medical treatment or employment, for example, they should make the same recommendations for everyone with similar symptoms or qualifications. To ensure fairness, we must understand how bias can affect AI systems.

·     Reliability and safety: AI systems must be designed to operate within clear parameters and undergo rigorous testing to ensure that they respond safely to unanticipated situations and do not evolve in ways that are inconsistent with original expectations. People should play a critical role in making decisions about how and when AI systems are deployed.

·     Privacy and security: Like other cloud technologies, AI systems must be secure and comply with privacy laws that regulate data collection, use, and storage, ensuring that personal information is used in accordance with privacy standards and protected from theft.

·     Inclusiveness: AI solutions must address a broad range of human needs and experiences through inclusive design practices that anticipate potential barriers that can unintentionally exclude people.

·     Transparency: As AI increasingly impacts people’s lives, we must provide contextual information about how AI systems operate so that people understand how decisions are made and can more easily identify potential bias, errors, and unintended outcomes.

·     Accountability: People who design and deploy AI systems must be accountable for how their systems operate. Accountability norms for AI should draw on the experience and practices of other areas, such as healthcare and privacy, and be observed during system design and as systems operate in the world.
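The fairness principle above can be made concrete with a simple measurement. The following is an illustrative sketch only: the data, group names, and the choice of demographic parity as the metric are my own assumptions, not Microsoft’s actual tooling or methodology. It compares the rate of positive recommendations a hypothetical hiring model gives to two groups of candidates:

```python
# Illustrative sketch (invented data): a minimal demographic-parity check
# for a hypothetical hiring model's yes/no recommendations.
# Each entry pairs a candidate's group with the model's outcome (1 = recommend).
predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def selection_rates(preds):
    """Return the fraction of positive outcomes for each group."""
    totals, positives = {}, {}
    for group, outcome in preds:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    return {g: positives[g] / totals[g] for g in totals}

rates = selection_rates(predictions)

# Demographic parity gap: difference between the highest and lowest
# selection rates. A large gap is a signal to investigate the model
# and its training data for bias, not proof of unfairness by itself.
parity_gap = max(rates.values()) - min(rates.values())
print(rates)       # {'group_a': 0.75, 'group_b': 0.25}
print(parity_gap)  # 0.5
```

A check like this is only one lens on fairness; in practice, teams combine several metrics with domain review, since equal selection rates alone do not guarantee that similar candidates are treated similarly.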

We also believe that governments, industry, and civil society need to work together to create frameworks that build trust in AI

This must be a priority (and an opportunity, as well as a challenge) for our entire society. While we trust AI will help solve big social and planetary issues, we must look toward the future with a critical eye. There will be challenges as well as opportunities, and that is why it is crucial for our society’s future that, as AI systems become more mainstream, technologists work closely with governments, academia, businesses, civil society, and other stakeholders to reach a consensus on the values that should govern AI development and use.

I am passionate about how Artificial Intelligence is transforming our world. We are just beginning to see a glimpse of the possibilities of what people and AI can achieve together. But we, as a society, need to act today with a sense of shared responsibility because AI won’t be created by the tech sector alone – the AI of tomorrow relies on us and the ethical foundation we create around it!

We must all work together on Artificial Intelligence, leveraging the best of our Human Intelligence… and our Human Values


Fabio Moioli

Executive Search Consultant and Director of the Board at Spencer Stuart; Forbes Technology Council Member; Faculty on AI at Harvard BR, SingularityU, PoliMi GSoM, UniMi; TEDx; ex Microsoft, Capgemini, McKinsey, Ericsson

