Top 3 Difficulties Businesses Face When Adopting Artificial Intelligence

Artificial intelligence is currently the most powerful digital transformation technology for improving business efficiency. Yet when implementing AI tools, companies often run into barriers that not every manager can overcome, owing to fears, stereotypes, and the perceived risks involved.

AI makes it possible to automate more business processes than previously available technologies could. Even hard-to-formalize employee decisions can be automated with smart algorithms. This helps optimize processes within the existing business model, as well as launch qualitatively new ones.

However, companies looking to take advantage of artificial intelligence face a number of barriers.

Resistance from employees

While AI can improve the efficiency and transparency of business processes and solutions, it is not always seen that way by employees.

Automation with the help of AI is widely perceived as a technology that makes people redundant and deprives them of their professions. This is not true. Contrary to the popular belief that AI will surpass humans, in my opinion, and based on my personal experience of more than 50 AI projects, hybrid intelligence is the most likely trajectory for the development of AI and humanity. In this scenario, human and machine work in symbiosis, complementing each other: AI takes over routine, mundane operations, giving humans more opportunities to engage in creative activities. In other words, AI can take care of the boring tasks while making human jobs more interesting and engaging.

An example of such negative behavior is the reluctance to implement BI (Business Intelligence) tools and basic analytics, since this introduces “extra” transparency, which can expose business inefficiencies, and forces management to make decisions based on data rather than on human factors and intuition. The latter can be heavily influenced by “office politics” and employees’ personal motivations rather than the interests of the company.

Recommendations

  1. Management and the HR team must develop and implement an employee motivation system so that employees’ career goals coincide with the goals of the company and its digital transformation; otherwise, the conflict of interest described above may arise
  2. Employees should be able to present their successful projects to a team of top managers in order to receive additional funding and scale these projects within the company
  3. The corporate culture should be conducive to employees’ self-expression and creative thinking, including devising solutions that increase business efficiency, introduce new capabilities, and cut costs
  4. When attractive external solutions are found, the digital leader’s task is to ensure that these solutions are objectively evaluated by teams through A/B testing and other relevant methods (a minimal sketch follows this list)
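
To make the A/B testing recommendation concrete, here is a minimal sketch of how a team might compare an incumbent process against an external AI solution on a simple success metric. The conversion counts, variant names, and the 0.05 significance level are illustrative assumptions, not figures from any real evaluation.

```python
# A minimal A/B test sketch: compare an incumbent process (A) against an
# external AI solution (B) on a success metric such as task completion.
# All numbers below are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

successes = [530, 584]    # completed tasks under variant A and variant B
trials    = [1000, 1000]  # tasks routed to each variant

# Two-proportion z-test: is B's success rate significantly different from A's?
z_stat, p_value = proportions_ztest(count=successes, nobs=trials)

rate_a, rate_b = successes[0] / trials[0], successes[1] / trials[1]
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  p-value: {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant; consider adopting B.")
else:
    print("No significant difference; keep gathering data.")
```

The point of the test is to replace “office politics” with an objective decision rule agreed on before the experiment starts.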

Lack of attention to cybersecurity

Even when AI processes and company culture are perfectly aligned, standards are in place, and employees are motivated, there is always a risk of force majeure, such as cyberattacks and data leaks.

As an example from the financial industry: according to the Identity Theft Resource Center, the number of data breaches in 2018 reached 1,244 and affected 64.4 million credit cards. Among the victims were such well-known companies as Home Depot (2014), Equifax (2017), and Capital One (2019).

First of all, data leaks damage a company’s reputation: according to KPMG’s analysis, after the breach, 19% of Home Depot customers said they would end their relationship with the company. Companies must also pay multimillion-dollar compensation to affected customers, payment systems, and banks, as well as potential regulatory penalties.

As a result, reputational and financial losses can make companies overly careful and conservative, which, in turn, slows down digital transformation in general.

For instance, after a leak, it took one of our clients in the financial sector more than six months to provide access to even minimally useful anonymized data. Another client’s IT department denied data access to our external team of consultants even though no leak had ever occurred, and we were forced to develop analysis algorithms on synthetic data. Also, after a leak, government regulators can impose special restrictions on data access, which complicates the process of forming AI teams, since they generally need data access to properly train AI models.
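
As a rough illustration of the synthetic-data workaround, here is a sketch of how a consulting team might prototype a pipeline when real records are off-limits. The generator parameters, class balance, and model choice are assumptions for illustration; in practice the synthetic schema would mirror the client’s real data dictionary.

```python
# Sketch: prototyping a model on synthetic data when real records are
# inaccessible. The schema (n_features, class balance) is hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Generate a dataset that imitates the shape of the real (unavailable) one.
X, y = make_classification(
    n_samples=10_000, n_features=20, n_informative=8,
    weights=[0.97, 0.03],  # rare positive class, e.g. fraud-like events
    random_state=42,
)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# The pipeline is validated end to end on synthetic data, then handed to
# the client's internal team to retrain on the real, access-controlled data.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("ROC AUC on synthetic hold-out:",
      roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```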

Recommendations

  1. Companies must continually invest in cybersecurity by conducting vulnerability analyses of client products and enterprise systems
  2. Moreover, it is worth introducing standards for data access that limit it according to the principle of “the minimum authority needed to complete the task, without prejudice to the workflow,” and keeping audit logs of data access so that potential culprits can be identified in case of a leak (a minimal sketch follows this list)
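
To show what the least-privilege principle with an audit trail can look like in code, here is a minimal sketch. The role names, dataset names, and log format are illustrative assumptions; a real deployment would enforce this in the data platform itself rather than in application code.

```python
# Sketch of the "minimum authority" principle with an audit trail.
# Role names, dataset names, and the logging format are illustrative.
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="data_access_audit.log", level=logging.INFO)

# Minimal role -> allowed-datasets mapping (least privilege).
ACCESS_POLICY = {
    "analyst":     {"sales_aggregates"},
    "ml_engineer": {"sales_aggregates", "anonymized_transactions"},
}

def read_dataset(user: str, role: str, dataset: str):
    """Grant access only if the role permits it, and log every attempt."""
    allowed = dataset in ACCESS_POLICY.get(role, set())
    logging.info(
        "%s user=%s role=%s dataset=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(), user, role, dataset, allowed,
    )
    if not allowed:
        raise PermissionError(f"{role!r} may not read {dataset!r}")
    return f"<contents of {dataset}>"  # placeholder for the real read

# Usage: denied attempts are recorded too, so access patterns can be
# reconstructed after a leak.
print(read_dataset("alice", "ml_engineer", "anonymized_transactions"))
try:
    read_dataset("bob", "analyst", "anonymized_transactions")
except PermissionError as err:
    print("Denied:", err)
```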

Lack of readiness for the peculiarities of AI projects

Despite its advantages over traditional software, AI implementation presents several challenges that IT departments are not always ready for. For example, QA testing of AI-driven applications is an order of magnitude more difficult than QA of “traditional” ones. In addition to conventional tests, such as unit, system, integration, performance, and security tests, AI testing includes a statistical assessment of the machine learning model on test data, exploratory use-case analysis to identify potential data sampling bias, and interpretation of the decision-making logic.
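
As a sketch of what such a statistical assessment might look like alongside conventional tests, here is a minimal acceptance check that gates a model on pre-agreed metric thresholds. The dataset, model, and threshold values are hypothetical assumptions, not standards from the article.

```python
# Sketch: a statistical acceptance test for an ML model, run alongside
# conventional unit/integration tests. Thresholds and data are hypothetical.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

X, y = make_classification(n_samples=5_000, n_features=15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
pred = model.predict(X_test)

# Acceptance criteria agreed with the business before development started.
THRESHOLDS = {"accuracy": 0.85, "precision": 0.80, "recall": 0.80}
scores = {
    "accuracy":  accuracy_score(y_test, pred),
    "precision": precision_score(y_test, pred),
    "recall":    recall_score(y_test, pred),
}

for metric, minimum in THRESHOLDS.items():
    status = "PASS" if scores[metric] >= minimum else "FAIL"
    print(f"{metric}: {scores[metric]:.3f} (min {minimum}) -> {status}")
```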

Additionally, companies need to think about ways to protect their AI systems from attackers who are constantly trying to find blind spots in the logic of the AI and in the data used to train it. This is particularly relevant to AI-driven algorithms used to identify potential fraud.
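
One simple way to probe for such blind spots before attackers do is a robustness smoke test: perturb inputs slightly and check how often decisions flip. The sketch below is a hypothetical illustration of that idea; the model, noise scale, and flip-rate metric are assumptions, and real adversarial testing goes well beyond random noise.

```python
# Sketch: a basic robustness check, e.g. for a fraud-scoring model.
# Small random perturbations approximate an attacker probing for
# unstable regions of the decision boundary. All parameters are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)
X, y = make_classification(n_samples=2_000, n_features=10, random_state=7)
model = GradientBoostingClassifier(random_state=7).fit(X, y)

baseline = model.predict(X)
flips = 0
for _ in range(5):  # several rounds of small random perturbations
    X_noisy = X + rng.normal(scale=0.05, size=X.shape)
    flips += int(np.sum(model.predict(X_noisy) != baseline))

flip_rate = flips / (5 * len(X))
print(f"Decision flip rate under small noise: {flip_rate:.2%}")
# A high flip rate flags unstable regions an adversary could exploit.
```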

This makes an “AI-capable” QA team one of the most important assets in an organization’s adoption of AI. Unfortunately, few companies truly understand this today. Even Tesla, which has managed to assemble a world-class AI team, has had situations where AI flaws led to negative consequences. As a result, customers become more cautious about the company’s products (in the case of Tesla, notably the “autopilot” features), which hinders business growth and indirectly slows the adoption of AI, as businesses collect less data and feedback from users.

Recommendations

  1. The company must establish clear acceptance criteria and testing standards for intelligent systems so that it understands the scope of AI at any given time and detects defects early in product development
  2. Defects found during testing can be either unacceptable or acceptable (e.g. the AI functions acceptably well, but far from perfectly). In the latter case, it is important to convey these nuances to customers to set baseline expectations (e.g. the autopilot does not fully replace a human driver, so don’t sleep behind the wheel)
  3. Non-critical defects of AI algorithms can be compensated for by special interface solutions, such as showing contextual hints or a confidence percentage for the AI system’s output (see the sketch after this list)
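
To illustrate the third recommendation, here is a minimal sketch of surfacing a model’s confidence in the interface and falling back to a contextual hint when it is low. The confidence floor, messages, and model are illustrative assumptions only.

```python
# Sketch: show a confidence percentage with each prediction and add a
# contextual hint when confidence is low. Threshold and wording are
# illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1_000, n_features=8, random_state=1)
model = LogisticRegression(max_iter=500).fit(X, y)

CONFIDENCE_FLOOR = 0.75  # below this, ask the user to double-check

def present_prediction(sample):
    """Format a prediction for the UI, including a confidence percentage."""
    proba = model.predict_proba([sample])[0]
    label, confidence = int(proba.argmax()), float(proba.max())
    if confidence >= CONFIDENCE_FLOOR:
        return f"Prediction: {label} (confidence {confidence:.0%})"
    return (f"Prediction: {label} (confidence {confidence:.0%}); "
            f"low confidence, please verify manually")

print(present_prediction(X[0]))
```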

To summarize, I would like to add that despite all the difficulties of AI-driven projects, businesses should start testing AI solutions and platforms as early as possible. The difficulties and barriers that arise in the process are typical of most digital transformation projects, and with a correct AI implementation strategy and project portfolio, the benefits that AI brings more than cover these difficulties in the long term.

I hope that the recommendations I have listed will help companies move faster and successfully overcome barriers to innovation.
