My personal reflections on OpenAI events and AI governance.

It’s late November 2023, and for nearly a full week I have been closely monitoring the unfolding events at OpenAI, the organization behind ChatGPT. In just a few days, the company has experienced significant leadership changes at both the board and executive levels. These rapid transitions, with so many twists and turns, raise critical questions about OpenAI's future direction and priorities. Amid rapidly growing commercial interest in AI, many worry that business interests might steer the company away from developing AI systems responsibly.

The recent events at OpenAI underscore the urgency of addressing the broader, profound questions surrounding the governance of emerging technologies. These developments bring to mind the insightful queries raised in Fred Guterl's 2014 Scientific American article, "What Impact Will Emerging Technologies Have on Society?", which calls for a deeper examination of the issue.

The article posed five critical questions about the future of technology:

  1. "Will perfection 'on demand' turn us off?" This question prompts us to consider if instant self-enhancement could erode our motivation and sense of purpose. If perfection becomes easily attainable, we might find ourselves in an existential crisis, questioning what drives us.
  2. "Will we all be the same?" Building on the first query, Guterl's second question delves into the implications of personal genomics and its ability to "design" babies with chosen physical and intellectual attributes. This possibility raises concerns about a decline in human diversity and a possible stagnation in societal progress. More troubling is the prospect of a new divide between those who are genetically enhanced and those who are not; the article asks what will happen "if not everyone has access to these technologies or if some decide to 'opt out'."
  3. "Will we give up our bodies as our last private space?" This question contemplates the implications of technology's integration with our biology. It raises a critical concern: as we advance technologically, are we at risk of sacrificing our final stronghold of personal privacy? We find ourselves weighing the advantages of these technological advancements against the profound cost of losing such an intimate aspect of our lives.
  4. "Will computers replace our brains, hearts, and souls?" This question warns of the risk of technology making complex decisions for us, perhaps overshadowing human intuition and values. It urges us to safeguard our intrinsic qualities like individuality and freedom of choice.
  5. "Who will write the code?" Finally, as we become more dependent on software and data, this question raises the issue of accountability and control. It leads to concerns about entrusting too much power to a digital ecosystem overseen by a few.

Upon reading the article once more, I realized that nearly a decade later, we are still on a quest to find the answers to these enduring questions.

This realization prompted me to suggest high-level governance recommendations directly linked to the five profound questions highlighted in Guterl's article. It becomes evident that AI, rich in potential, simultaneously presents significant challenges that demand careful and ethical governance. Let's revisit each of the questions:

Will perfection 'on demand' turn us off?

Recommendation: Establish an Ethical AI Development Policy prioritizing ethical considerations over technical perfection.

Example: A leading software development firm adopts a policy where AI tools are designed with a 'human-in-the-loop' architecture. This ensures AI systems offer data-driven recommendations, while final decisions are made by human experts, maintaining ethical standards and human values in decision-making.
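A 'human-in-the-loop' architecture like the one described can be sketched in a few lines: the AI only proposes, and a human decision function always makes the final call. This is a minimal illustration, not any firm's actual system; the scoring logic and the names `ai_recommend` and `human_in_the_loop` are hypothetical placeholders for a real trained model and review workflow.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    label: str
    confidence: float

def ai_recommend(features: dict) -> Recommendation:
    # Placeholder scoring: a real system would call a trained model here.
    score = min(1.0, sum(features.values()) / max(len(features), 1))
    return Recommendation(label="approve" if score > 0.5 else "review",
                          confidence=score)

def human_in_the_loop(features: dict, human_decide) -> str:
    """The AI proposes; a human expert always makes the final decision."""
    proposal = ai_recommend(features)
    return human_decide(proposal)

# Usage: the reviewer accepts high-confidence proposals, escalates the rest.
final = human_in_the_loop(
    {"risk": 0.2, "quality": 0.9},
    human_decide=lambda p: p.label if p.confidence > 0.8 else "escalate",
)
```

The key design point is that `human_decide` sits on the only path to a final outcome, so the system cannot act on a recommendation without a person in the loop.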

Will we all be the same?

Recommendation: Implement Diversity and Inclusion Standards for AI datasets and development teams.

Example: A legal analytics firm enforces diversity and inclusion standards, ensuring their AI systems are trained on varied legal data to prevent biases and enhance inclusivity in legal insights and predictions.
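A first step toward such standards is simply measuring how groups are represented in the training data. The sketch below, with hypothetical names and thresholds, reports each group's share of a dataset and flags those below a chosen floor; a real standard would define the attributes and thresholds per use case.

```python
from collections import Counter

def representation_report(records, attribute):
    """Compute each group's share of the dataset for one attribute."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def flag_underrepresented(report, threshold=0.2):
    """Return groups whose share falls below the agreed minimum."""
    return [group for group, share in report.items() if share < threshold]

# Usage: check how jurisdictions are represented in a legal corpus.
records = ([{"jurisdiction": "EU"}] * 7
           + [{"jurisdiction": "US"}] * 2
           + [{"jurisdiction": "APAC"}] * 1)
report = representation_report(records, "jurisdiction")
flags = flag_underrepresented(report)
```

Flagged groups would then trigger targeted data collection or re-weighting before the model is retrained.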

Will we give up our bodies as our last private space?

Recommendation: Develop strong Privacy Protection Protocols for personal data in AI and biometrics.

Example: A wearable tech company introduces advanced privacy protocols in health-monitoring devices, featuring end-to-end encryption and anonymization of biometric data, safeguarding individual health information.
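One common building block of such protocols is pseudonymization: replacing raw identifiers with a keyed hash before biometric records leave the device, so readings can be linked over time without exposing who they belong to. This is a minimal sketch of that one technique, assuming a secret key held by the device owner; it is not a complete privacy architecture (encryption in transit, key management, and consent are out of scope here).

```python
import hashlib
import hmac

def pseudonymize(user_id: str, secret_key: bytes) -> str:
    """Replace a raw identifier with a keyed hash (HMAC-SHA256)."""
    return hmac.new(secret_key, user_id.encode(), hashlib.sha256).hexdigest()

def anonymize_record(record: dict, secret_key: bytes) -> dict:
    """Return a copy of a biometric record with the identifier pseudonymized."""
    out = dict(record)
    out["user_id"] = pseudonymize(record["user_id"], secret_key)
    return out

# Usage: the heart-rate reading survives; the raw identity does not.
safe = anonymize_record({"user_id": "alice", "heart_rate": 72}, b"device-key")
```

Because the hash is keyed, the same user maps to the same pseudonym under one key, while anyone without the key cannot reverse or even recompute the mapping.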

Will computers replace our brains, hearts, and souls?

Recommendation: Create Human-Centric AI Guidelines that complement human intelligence and emotional capabilities.

Example: A financial analysis firm uses AI for market data interpretation, but human experts make final decisions, balancing AI insights with human experience and intuition.

Who will write the code?

Recommendation: Implement Transparent and Accountable AI Processes for decision-making in AI development.

Example: A global recruitment firm introduces a transparent AI system for candidate screening, sharing AI criteria and algorithms used, and establishing a review committee for regular audits, thus balancing automation with ethical and regulatory compliance.
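Transparency and auditability can be made concrete by publishing the screening criteria as data and logging every factor behind each decision. The sketch below assumes invented criteria and field names purely for illustration; the point is the shape: an explicit criteria record and an audit log the review committee can inspect.

```python
import datetime

# Published criteria: candidates and auditors can see exactly what is checked.
SCREENING_CRITERIA = {
    "min_years_experience": 3,
    "required_skills": {"python", "sql"},
}

audit_log = []  # every decision, with its reasons, lands here for review

def screen_candidate(candidate: dict) -> dict:
    """Apply the published criteria and record every factor in the outcome."""
    reasons = []
    if candidate["years_experience"] < SCREENING_CRITERIA["min_years_experience"]:
        reasons.append("insufficient experience")
    missing = SCREENING_CRITERIA["required_skills"] - set(candidate["skills"])
    if missing:
        reasons.append(f"missing skills: {sorted(missing)}")
    entry = {
        "candidate_id": candidate["id"],
        "decision": "reject" if reasons else "advance",
        "reasons": reasons,
        "criteria": {k: sorted(v) if isinstance(v, set) else v
                     for k, v in SCREENING_CRITERIA.items()},
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    audit_log.append(entry)
    return entry
```

Because each log entry carries the criteria in force at decision time, regular audits can verify both individual outcomes and any drift in the rules themselves.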

These recommendations are practical and could be implemented to ensure that AI development is responsible, ethical, and beneficial for all.

Recently, as part of my work at Thoughtworks, a leading global technology consultancy that integrates strategy, design and software engineering to enable enterprises and technology disruptors to thrive, I had the opportunity to contribute to developing the United Nations' Responsible Tech Playbook. This vital initiative aligns technology use with the UN's mission, ensuring that digital tools and practices actively contribute to global welfare, human rights, and sustainability. The playbook is a dynamic, continuously evolving guide designed to steer UN teams towards the implementation of technology practices that are inclusive, conscious of biases, transparent, and grounded in ethical principles.

This work is a real example of organizations being intentional in their commitment to harnessing technology for the greater good of society.

In closing, Steve Jobs' profound insight resonates deeply with the themes we've explored:

"Technology is nothing. What's important is that you have faith in people, that they're basically good and smart -- and if you give them tools, they'll do wonderful things with them. Tools are just tools. They either work, or they don't work."

Jobs' belief underscores the essence of our discussion. To truly revolutionize and create impactful changes in the world, we must focus on the confluence of technology and the humanities. It's at this intersection that the most innovative and transformative ideas are born, ideas that harness technology not as an end in itself, but as a means to amplify our inherent goodness and intelligence.

I hope this article prompts us to reflect on the choices we have at our disposal: to utilize technology as a powerful instrument for driving positive and meaningful change in our world.






