The State of AI in 2020 and Beyond
Tulsi Vuppu
Advising businesses to accelerate AI ROI | Data & Innovation | CDO/CIO/CTO Advisory | Head of Data and Architecture | Chief Architect | Enterprise Architect | Architectural Consultancy | Responsible AI | AI Ethics | Industry 4.0
AI adoption and impact
As we all know, COVID-19 has rapidly moved consumers and businesses to digital channels. Throughout the pandemic, we’ve seen global organizations across sectors adopting and scaling AI and analytics much more rapidly than they previously thought possible. Previously, we only had to design products that worked. Now, when designing AI-based products, we have to go a step further and consider human emotional factors in order to make these products more adoptable.
In the past, AI served as a backup tool that helped humans automate simple tasks, meaning many organizations were not utilizing the full potential of AI systems. In many instances, humans still handle roughly 80% of decision-making, with AI assisting on the remaining 20%, often the simplest tasks. In the future, this will reverse: AI will cover 80% of decision-making, with humans handling the remaining 20% that involves more difficult reasoning, higher-stakes decisions, or simply new, never-before-seen situations.
The AI industry has already explored many adoption strategies. One example is explainability. Today, AI faces the “black box” problem: while we can see the results and outputs that AI helps produce, it is often unclear how AI reaches its conclusions. In the future, we’ll see a greater effort to make this “black box” more transparent, giving us more explainable AI that is easier to adopt. Other examples include more empathic, more accountable, and more ethical AI, all trends that will continue to develop to make AI more adoptable in the workplace.
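To make the “black box” idea concrete, here is a minimal, self-contained sketch of one common explainability technique, permutation importance: shuffle one input feature and measure how much the model’s accuracy drops. Everything here is hypothetical and invented for illustration (the `black_box_model`, the loan-approval framing, and the tiny dataset); real deployments would apply tooling such as SHAP or LIME to actual models and data.

```python
import random

# A hypothetical "black box" loan-approval model: we can see its outputs,
# but in a real system we could not see the reasoning inside.
def black_box_model(income, debt, age):
    score = 0.6 * income - 0.4 * debt + 0.0 * age  # age is secretly ignored
    return 1 if score > 50 else 0

# A tiny, made-up applicant dataset: (income, debt, age, true_label).
data = [
    (120, 30, 45, 1), (40, 60, 30, 0), (95, 10, 55, 1),
    (30, 80, 25, 0), (100, 90, 40, 0), (100, 20, 35, 1),
]

def accuracy(dataset):
    correct = sum(
        black_box_model(inc, debt, age) == label
        for inc, debt, age, label in dataset
    )
    return correct / len(dataset)

def permutation_importance(feature_index, seed=0):
    """Shuffle one feature column; the resulting accuracy drop shows
    how much the model actually relies on that feature."""
    rng = random.Random(seed)
    column = [row[feature_index] for row in data]
    rng.shuffle(column)
    shuffled = [
        tuple(column[i] if j == feature_index else row[j] for j in range(4))
        for i, row in enumerate(data)
    ]
    return accuracy(data) - accuracy(shuffled)

for name, idx in [("income", 0), ("debt", 1), ("age", 2)]:
    # Age importance is exactly 0.0 here, because the model ignores age;
    # the probe exposes that fact without opening the "black box".
    print(f"{name}: {permutation_importance(idx):.2f}")
```

Even this toy probe surfaces something a lender or regulator would care about: which inputs the model actually uses when approving or denying applicants, without needing access to its internals.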
One of the most remarkable patterns we see in these findings is the adoption of core practices among companies capturing value from AI. There really is a playbook for success. It’s encouraging to see a larger proportion of organizations this year doing more in foundational areas, but many still are not. We see companies, for example, still spending disproportionate time cleaning and integrating data, not following standard protocols to build AI tools, or running “shiny object” analyses not tied to business value.
It’s also striking that some of the biggest gaps between AI high performers and others aren’t only in technical areas, such as using complex AI-modeling techniques, but also in the human aspects of AI, such as the alignment of senior executives around AI strategy and adoption of standard execution processes to scale AI across an organization.
A lack of model explainability presents a level of risk in nearly every industry. In some areas, like healthcare, the stakes are particularly high when AI may be recommending a course of patient care. In financial services, regulators may need to know why an organization is making particular decisions, on lending for example. But a lack of explainability also presents another risk: low AI adoption, leading to wasted investment and the risk of falling behind the competition.
Finally, many executives now realize that AI solutions typically need to be developed or adapted in close collaboration with business users to address real business needs and enable adoption, scale, and real value creation. In the upcoming year, we can expect enterprises to make a clear push to better address human emotions and interactions as a way to improve AI adoption in business. We will also see humans work more closely with AI systems than before, resulting in more human-like AI that is more explainable, empathic, accountable, and ethical. If enterprises want to tap the full potential of AI, they will need to embrace these changes. In addition to driving business efficiency, these shifts will ultimately provide better customer experiences.
The Takeaway: AI adoption is proceeding rapidly. Most companies that were evaluating or experimenting with AI are now using it in production deployments. It’s still early, but companies need to do more to put their AI efforts on solid ground. Whether it’s controlling for common risk factors (bias in model development, missing or poorly conditioned data, the tendency of models to degrade in production) or establishing the formal data-governance processes many still lack, adopters will have their work cut out for them as they work to establish reliable AI production lines.