Being A Responsible Citizen
Beena Ammanath
Trustworthy AI book author | Global Deloitte AI Institute leader | Humans For AI founder | AnitaB.org & Centre for Trustworthy Technology board member
#LeadershipInTheAgeOfAI
In this era of humans working with machines, being an effective leader with artificial intelligence (AI) takes a range of skills and activities. In this series, I provide an incisive roadmap for leadership in the age of AI.
For many businesses, AI is attractive because it affords efficiencies and capacity that drive the bottom line. Yet I believe realizing the full potential of AI requires enterprise leaders to account for potentially negative outcomes, not just for the business but for everyone. The trustworthiness and ethics of AI depend in part on how it is used. As business leaders pursue deployments that maximize business value, we also need to be mindful of the cascading impact an AI use case could have on society.
AI's Potential Impact On Society
This is a lesson learned repeatedly in the history of transformative technologies. To borrow from the cinematic gem Jurassic Park, we are sometimes so preoccupied with whether we could that we do not pause to consider whether we should. Within an organization, who asks the question, “What are the poor societal outcomes that could arise if we deploy this model?” While AI stakeholders are found throughout an organization, it is those in leadership who need to establish the importance of considering the broader impact AI can have on society.
Cautionary tales abound. For example, in the early days of the internet, organizations rushed in to capitalize on digital connectivity, sometimes with only a passing glance at the nascent idea of cybersecurity. We still grapple today with technology decisions that were made decades ago.
Now, organizations using AI to remove human bias from hiring decisions have discovered how bias in historical data can perpetuate the inequality the AI seeks to resolve. Facial recognition technology raises questions around fair treatment, privacy, safety and who should be permitted to use those systems and for which use case. AI-enabled chatbots that automate customer and citizen engagement have the potential to generate biased, inaccurate or even antagonistic outputs.
Social media platforms may monetize user data via targeted advertising, and one approach is fine-tuning recommendation algorithms for "attention retention." Today, we know that an unintended result is the potential for social media platforms to create echo chambers and mental health risks.
Asking The Right Questions
Hindsight is 20/20, and some outcomes may not be obvious. Yet I believe the lesson is that when contending with powerful technologies at scale, business leaders should think as much about the near-term gains as they do the long-term impact of their programs. Now more than ever, let's scrutinize the models and use cases that are being developed to anticipate the implications for society.
• Equity And Opportunity: Public-facing tools need to work well for everyone. A public launch of an AI-enabled tool or program necessarily will impact people from different demographics, ethnicities and geographies. When choosing a tool, ask yourself what existing biases this tool could magnify. What inequities exist in different cohorts and communities, and will all people find the same access and value? More than just considering the tool's utility, contemplate the longer-term effects of AI deployments that favor some groups over others.
• Environmental Responsibility: Sophisticated AI models that yield differentiating capabilities for your organization can require significant energy during training and testing and as the deployment scales. What does this mean for the organization's carbon footprint and climate impact? Every enterprise determines its own threshold for environmental responsibility, so with AI, acknowledge and then assess the impact your AI endeavors will have on the planet.
• Truth And Transparency: When hyper-personalized text can be created in an instant with a large language model (LLM), and when realistic images, videos and voices can be created en masse with the power of generative AI, there is the potential that, at least in the digital realm, the broader society may lose confidence in what is true and real. What is your organization's responsibility in ensuring these tools are developed and used in a way that promotes transparency? If AI tools lead to a society that views technology as a source of fiction and deception, not only does that degrade public trust, it may also severely limit the grand potential of AI.
Public responsibility in business has always been important, but with AI, the stakes today are higher than ever. The long-term, unintended consequences of AI will determine whether this is a technology that fulfills our wildest aspirations or one that proves perilous and untrustworthy. To make the right ethical, pro-social choices with AI, take the time to become informed and think through potential societal outcomes. It is possible to do right while doing well, and in the age of AI, leaders have an opportunity to steward this powerful technology toward its most valuable and beneficial use for all.
Originally published at https://www.forbes.com/sites/forbesbusinesscouncil/people/beenaammanath1/?sh=33cfc0b2bd3e