Transparency in AI Adoption
Lanré Oyewole
Architecture | Digital Transformation | Gen-AI | C-Level Executive Support | #Communication | Technology #Simplification | Cloud | IPaaS | TOGAF | ITIL4 | #Leadership | #Mentorship
Transparency about your AI vision, strategy, use and journey is an existential imperative. That is probably more true for corporates than for individuals, although I would say that every person who is AI-aware, including AI sceptics, should have a strategy for engaging with what will soon be a ubiquitous presence. However, the focus here is on corporates, and I advocate that every organisation must begin, or continue, to evolve a vision, strategy and roadmap for AI adoption.
In a recent post, I wrote about the costs and damage that a lack of transparency inflicts on groups, businesses and countries. With AI, opacity is not just harmful, it is dangerous! Why? I will break this into three very broad dimensions: culture, governance, and others. I will dwell mostly on culture, touching lightly on the rest, as I believe culture is foundational.
Culture
The first and most important is identity. What do I mean by that? Let's start with a question. How would you describe your organisation today, in one sentence? As you introduce AI, change to that description is a real possibility. Before you set out, it is essential to know, define and establish guardrails around those key attributes. Without clear and open communication, this will be difficult, if not impossible.
Next is trust. As you embark on the journey of transformation with AI, it is vital that employees and other stakeholders have clarity on the destination, rather than just a drip-feed of last-minute details about where you are in the journey. Otherwise, they may feel uncertain about how AI decisions are made and how those decisions will affect them.
Resistance to change follows on from trust. If the trust of staff has not been cultivated over time, or if that trust has been undermined by the manner in which the AI agenda was introduced, cooperation could be a challenge: people will default to the status quo when faced with uncertainties and risks to their individual bottom lines. Without clear and honest communication about the vision, employees may resist adopting AI technologies, fearing job displacement, role conflation, social erosion or value dilution.
The fears and resistance could be connected with ethical concerns that employees may have about how any HR changes will be administered. For example, will any redundancies, role redefinitions or other changes be managed by humans with contextual knowledge and empathy, or relegated to algorithms in some fancy AI? Another concern could be that leadership might use AI to cover up their biases, or as an excuse to undercut privacy protections while pushing through unpopular agendas. It is important that the ethical underpinnings of your vision and strategy are plain to see, addressing not just external and organisational concerns, but individual ones as well.
Governance
As with a lot of change, buy-in at the board or management level is important. In addition, it is important to ensure alignment and coordination throughout the organisation. There are two key strands: the value proposition for AI and the draft roadmap. I say draft, because the technology is evolving as we speak, so periodic reviews and tweaks may be necessary. However, without a clear vision, strategy and roadmap, the approaches adopted by different units within the organisation may be out of sync, or conflicting, both of which can lead to inefficiencies and wasted resources.
This decision-making process, outside of the sponsors of adoption, will rely on insight into the strategy and the roadmap. Opacity here could result in poor decision-making. Leaders at various levels within the organisation, having visibility only of a vision, however carefully crafted, may not have a sound understanding of AI's role and potential impact, or of the side effects of their decisions. One should not forget that those leaders, depending on how close they are to frontline staff, will also be dealing more directly with the culture challenges outlined earlier. Without the support of colleagues, the anticipated benefits, including innovation, may fail to materialise, as the shroud on direction can translate into a cloud over motivation.
One cost or benefit of transparency, depending on your perspective, is accountability. Where the aspects of AI adoption are clearly communicated, a wide community of stakeholders, within and external to the organisation, can and may hold the leadership to account. This could be as early as the visioning stage, or later as the strategy and roadmap emerge. Some will see this as a major obstacle to the pace of transformation and an invitation to a 'Groundhog Day'. I aver that it helps the organisation to fail fast and early if the foundations are wrong. In addition, these challenges should help to weed out weaknesses and reaffirm strengths. As the dynamics of adoption play out, it also helps to track progress via outcomes that align with the direction of travel.
Others
Generally speaking, AI adoption is not penny-cheap, at least not at the outset, although it should introduce efficiencies and/or increased productivity later on. Key stakeholders, especially sponsors, investors and end customers, will therefore have expectations of this [potentially] expensive change. They will likely desire, or demand, insight into how AI is being used and its impact, for example regarding privacy, bias, employee rights or the environment. These factors can have a huge impact on public perception, affecting both investors' assets and the allure of the company's products and services to the public.
The final piece that I want to touch on is regulatory compliance. Even if your employees are mute, investors are indifferent, and customers couldn't care less, the government, industry, profession, or another body may require certain disclosures from organisations using AI in certain contexts or ways. In that case, organisations may not be able to start out without said disclosures, or they risk serious sanctions later.
In all of these areas, transparency, whether proactive or reactive, ultimately serves the long-term interests of your organisation. No one person, group, organisation or nation has all the answers on AI. Therefore, keeping a relatively open hand is the safer bet, given the huge risks of taking a wrong turn versus the potential benefits of cross-pollination. As I begin to round off a newsletter on Solution Architecture, this marks a first deposit in a shift of focus towards AI and its ethical, responsible and strategic adoption within organisations. This is because I see AI as a reality that we will all have to grapple with, sooner rather than later.
In all of this, be reassured, because we speak business first! Just like the software, applications, infrastructure, etc. that you are already familiar with, we see AI as another enabler and differentiator for your business. This is a great time to have a think about your context: what the options, implications, opportunities and risks are; what a draft vision would look like; and what the journey (organisational and technical) could be.
Let's talk.