My personal reflections on OpenAI events and AI governance.
It is nearly the end of November 2023, and for almost a full week I have been closely following the unfolding events at OpenAI, the organization behind ChatGPT. In just a few days, the company has gone through significant leadership changes at both the board and the executive level. These rapid transitions, with so many twists and turns, have raised critical questions about OpenAI's future direction and priorities. Amid rapidly growing commercial interest in AI, the changes have fueled concern that business interests might steer the company away from developing AI systems responsibly.
The recent events at OpenAI underscore the urgency of the broader, profound questions surrounding the governance of emerging technologies. They bring to mind Fred Guterl's 2014 Scientific American article, "What Impact Will Emerging Technologies Have on Society?", which called for exactly this kind of deeper examination.
The article posed five critical questions about the future of technology:
Will perfection 'on demand' turn us off?
Will we all be the same?
Will we give up our bodies as our last private space?
Will computers replace our brains, hearts, and souls?
Who will write the code?
Upon reading the article once more, I realized that nearly a decade later, we are still on a quest to find the answers to these enduring questions.
This realization prompted me to propose high-level governance recommendations tied directly to the five questions highlighted in Guterl's article. AI, for all its potential, presents significant challenges that call for careful, ethical governance. Let's walk through each of the questions once more:
Will perfection 'on demand' turn us off?
Recommendation: Establish an Ethical AI Development Policy prioritizing ethical considerations over technical perfection.
Example: A leading software development firm adopts a policy where AI tools are designed with a 'human-in-the-loop' architecture. This ensures AI systems offer data-driven recommendations, while final decisions are made by human experts, maintaining ethical standards and human values in decision-making.
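As a rough illustration of what such a 'human-in-the-loop' gate might look like in practice, here is a minimal Python sketch; the class and function names are hypothetical and not tied to any particular firm's system.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An AI-generated suggestion, never an automatic decision."""
    action: str
    confidence: float   # the model's own confidence, 0.0 to 1.0
    rationale: str      # plain-language explanation shown to the reviewer


def decide(recommendation: Recommendation, reviewer_approves) -> str:
    """Return the final action only after a human reviewer signs off.

    `reviewer_approves` is a callable that presents the recommendation
    (action, confidence, rationale) to a human expert and returns True
    only if they explicitly accept it.
    """
    if reviewer_approves(recommendation):
        return recommendation.action
    return "escalate_to_human_decision"   # the human's own judgment prevails


# Example: the reviewer rejects low-confidence suggestions by default.
if __name__ == "__main__":
    rec = Recommendation(action="approve_loan", confidence=0.62,
                         rationale="Income and credit history meet policy thresholds.")
    final = decide(rec, reviewer_approves=lambda r: r.confidence >= 0.9)
    print(final)   # -> "escalate_to_human_decision"
```

The point of the sketch is that the AI output is typed as a recommendation, not a decision: nothing reaches production without an explicit human sign-off step.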
Will we all be the same?
Recommendation: Implement Diversity and Inclusion Standards for AI datasets and development teams.
Example: A legal analytics firm enforces diversity and inclusion standards, ensuring their AI systems are trained on varied legal data to prevent biases and enhance inclusivity in legal insights and predictions.
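One concrete, if simplified, way to enforce such a standard is an automated representation check run before training. The sketch below is only illustrative; the group field, thresholds, and toy corpus are assumptions, not a recommended policy.

```python
from collections import Counter

def check_representation(records, group_field, min_share=0.10):
    """Flag groups whose share of the training data falls below a minimum.

    `records` is a list of dicts; `group_field` names the attribute whose
    distribution we audit (e.g. jurisdiction or case type). The 10% floor
    is an illustrative policy choice, not a prescribed value.
    """
    counts = Counter(r[group_field] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()
            if n / total < min_share}


# Example: audit a toy corpus of legal documents by jurisdiction.
corpus = ([{"jurisdiction": "federal"}] * 90
          + [{"jurisdiction": "state"}] * 10
          + [{"jurisdiction": "municipal"}] * 2)
under_represented = check_representation(corpus, "jurisdiction")
if under_represented:
    print("Under-represented groups:", under_represented)
```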
Will we give up our bodies as our last private space?
Recommendation: Develop strong Privacy Protection Protocols for personal data in AI and biometrics.
Example: A wearable tech company introduces advanced privacy protocols in health-monitoring devices, featuring end-to-end encryption and anonymization of biometric data, safeguarding individual health information.
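A minimal sketch of what such a protocol might involve at the data layer is shown below, assuming the third-party `cryptography` package is available; the identifiers, salt, and field names are hypothetical.

```python
import hashlib
import json
from cryptography.fernet import Fernet  # pip install cryptography

# In a real device, the key would live in a hardware-backed keystore, not in code.
KEY = Fernet.generate_key()
fernet = Fernet(KEY)

def pseudonymize(user_id: str, salt: str = "device-specific-salt") -> str:
    """Replace the raw identifier with a salted hash before storage."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()

def protect_reading(user_id: str, heart_rate: int) -> bytes:
    """Encrypt a biometric reading tied only to a pseudonymous ID."""
    record = {"subject": pseudonymize(user_id), "heart_rate": heart_rate}
    return fernet.encrypt(json.dumps(record).encode())

# Example: the ciphertext, not the raw reading, is what leaves the device.
token = protect_reading("user-42", heart_rate=71)
print(fernet.decrypt(token).decode())  # only a key holder can read it back
```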
Will computers replace our brains, hearts, and souls?
Recommendation: Create Human-Centric AI Guidelines that complement human intelligence and emotional capabilities.
Example: A financial analysis firm uses AI for market data interpretation, but human experts make final decisions, balancing AI insights with human experience and intuition.
Who will write the code?
Recommendation: Implement Transparent and Accountable AI Processes for decision-making in AI development.
Example: A global recruitment firm introduces a transparent AI system for candidate screening, sharing AI criteria and algorithms used, and establishing a review committee for regular audits, thus balancing automation with ethical and regulatory compliance.
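As a simplified illustration of what 'transparent and accountable' can mean at the code level, the sketch below records every screening decision together with the published criteria and the resulting score, so a review committee can audit them later; all names, weights, and thresholds are hypothetical.

```python
import json
from datetime import datetime, timezone

SCREENING_CRITERIA = {"years_experience": 0.6, "skills_match": 0.4}  # published weights

def screen_candidate(candidate: dict, threshold: float = 0.7,
                     audit_log: str = "screening_audit.jsonl") -> bool:
    """Score a candidate against published criteria and log the full rationale."""
    score = sum(weight * candidate.get(feature, 0.0)
                for feature, weight in SCREENING_CRITERIA.items())
    decision = score >= threshold
    # Append an audit record the review committee can inspect at any time.
    with open(audit_log, "a") as log:
        log.write(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "candidate_id": candidate["id"],
            "criteria": SCREENING_CRITERIA,
            "score": round(score, 3),
            "threshold": threshold,
            "advanced": decision,
        }) + "\n")
    return decision

# Example: a candidate's normalized scores are logged whether or not they advance.
print(screen_candidate({"id": "c-017", "years_experience": 0.8, "skills_match": 0.7}))
```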
These recommendations are practical and could be implemented to ensure that AI development is responsible, ethical, and beneficial for all.
Recently, as part of my work at Thoughtworks, a leading global technology consultancy that integrates strategy, design and software engineering to enable enterprises and technology disruptors to thrive, I had the opportunity to contribute to developing the United Nations' Responsible Tech Playbook. This vital initiative aligns technology use with the UN's mission, ensuring that digital tools and practices actively contribute to global welfare, human rights, and sustainability. The playbook is a dynamic, continuously evolving guide designed to steer UN teams towards technology practices that are inclusive, conscious of biases, transparent, and grounded in ethical principles.
This work is a real example of organizations being intentional in their commitment to harnessing technology for the greater good of society.
In closing, Steve Jobs' profound insight resonates deeply with the themes we've explored:
"Technology is nothing. What's important is that you have faith in people, that they're basically good and smart -- and if you give them tools, they'll do wonderful things with them. Tools are just tools. They either work, or they don't work."
Jobs' belief captures the essence of our discussion. To create truly transformative change in the world, we must focus on the confluence of technology and the humanities. It is at this intersection that the most innovative and impactful ideas are born: ideas that treat technology not as an end in itself, but as a means to amplify our inherent goodness and intelligence.
I hope this article prompts us to reflect on the choice at our disposal: to use technology as a powerful instrument for driving positive, meaningful change in our world.