Artificial Intelligence, Business and Society: Interview with Dr. Theodoros Evgeniou, Professor at INSEAD and Visiting Professor, MBA International, AUEB.

As AI innovations continue to surprise everyone, questions about these technologies are on people’s minds everywhere: in governments, board rooms, executive meetings, schools, and even casual social settings. We asked our AUEB MBA International candidates to formulate some of the questions on their minds about AI and shared them with Professor Evgeniou for his reactions. We include below his answers to some of the top questions asked by the MBA students.

How does AI work technically? In other words, what are the fundamental mathematical principles behind AI and how do they translate to practical and effective outputs usable for managers and decision-makers?

For the first few decades after the term AI was introduced (for those interested in the history of AI, I have recently written a short article on this topic1), there were roughly two key approaches: one along the lines of so-called symbolic AI, and one based on machine learning and data. Starting in the mid-1990s, AI has been increasingly based on the latter. Briefly, think of the former as, for example, rule-based systems or algorithms that search for an optimal solution. A bank, for instance, may have created a collection of rules, often written by its employees, to determine whether an applicant should be approved for credit. If one automates the process of applying these rules to grant or deny credit, this automation can be seen as an AI system that automatically makes credit decisions – and it is indeed an AI system. Companies widely use such rule-based systems for many applications, and arguably should not shy away from doing so, as this type of AI can create value and is relatively easy to implement.
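
As a concrete illustration of the kind of rule-based system described above, here is a minimal sketch in Python; the rules, field names, and thresholds are hypothetical illustrations, not any actual bank’s criteria.

```python
# A minimal sketch of a rule-based credit system: hand-written rules,
# of the kind a bank's employees might define. All values are hypothetical.

def approve_credit(applicant: dict) -> bool:
    """Apply a fixed collection of hand-written credit rules."""
    if applicant["annual_income"] < 20_000:
        return False  # Rule 1: minimum income
    if applicant["existing_debt"] > 0.5 * applicant["annual_income"]:
        return False  # Rule 2: debt must stay below half of income
    if applicant["years_employed"] < 1:
        return False  # Rule 3: minimum employment history
    return True       # All rules passed: approve

print(approve_credit(
    {"annual_income": 45_000, "existing_debt": 5_000, "years_employed": 3}
))  # True
```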

However, while the rules in the example above may be “intelligent”, the intelligence is arguably in the minds of the bank’s managers who wrote the rules. What if these rules could instead be determined algorithmically, based on past data from previous applicants – some of whom defaulted and so, with hindsight, should perhaps not have been given credit? What if we write an algorithm that analyses past applicant data and identifies the rules that best explain which applicants tend to default? A system that combines such an algorithm with the rules it identifies (“learned from the data”) is arguably more intelligent than the previous one: it not only uses intelligent rules (assuming that is the case) to automate credit decisions, but also “learns” these rules. This is what machine learning based AI achieves. And much like the previous version of AI, not only can it automate – or support – decision making, in this case loan approvals, but it can also continuously improve and adapt to the environment it operates in. Note that, unlike any previous technology we have developed, AI can make – or recommend – increasingly complex decisions in increasingly complex environments. This is largely why AI is used in practice.
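
To make the contrast concrete, here is a minimal sketch of “learning the rules” from past applicant data, assuming scikit-learn is available; the features, data, and labels are invented for illustration.

```python
# A minimal sketch of "learning rules from past data" with a decision tree.
# The synthetic data and feature names are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Past applicants: [annual_income, existing_debt]; label 1 = defaulted
X = np.array([[60_000, 5_000], [25_000, 20_000], [80_000, 10_000],
              [18_000, 9_000], [40_000, 30_000], [55_000, 8_000]])
y = np.array([0, 1, 0, 1, 1, 0])

# The algorithm identifies the income/debt splits that best explain past defaults
model = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(model, feature_names=["annual_income", "existing_debt"]))

# The learned rules can now automate (or support) a decision on a new applicant
print(model.predict([[35_000, 12_000]]))  # [1] = predicted to default
```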

In general, such an algorithm that “learns rules that best explain data” – what is called a machine learning algorithm – can learn not only rules but also more complex mathematical functions that perform more complex tasks. For example, today’s so-called deep learning methods are machine learning algorithms that learn/identify highly nonlinear functions which, given some input data (even very high dimensional data, as with images or text) such as the characteristics of a credit applicant, can decide (or recommend) whether to grant credit based on, say, a prediction of how likely that applicant is to default. This works exactly like the rule-based systems, except that instead of simple rules the decision is based on complex functions of an applicant’s characteristics. Today, machine learning algorithms based on such more “complex functions” are powerful enough to also be used for applications such as classifying images to detect objects, predicting the next most likely word given the previous ones so that AI can generate text that reads as if written by people, generating new synthetic/“deepfake” images, sound, and even video, and so on. It is all about selecting (so-called “learning” or “estimating”) complex functions, based on data, that can perform such sophisticated tasks.
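
In the same spirit, here is a sketch of the “complex functions” case: a small neural network, again with hypothetical data, that learns a nonlinear mapping from applicant characteristics to an estimated default probability. Real deep learning systems are vastly larger, but the principle is the same.

```python
# A minimal sketch: instead of readable rules, learn a nonlinear function
# from applicant characteristics to default probability. Data is hypothetical.
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array([[60_000, 5_000], [25_000, 20_000], [80_000, 10_000],
              [18_000, 9_000], [40_000, 30_000], [55_000, 8_000]], dtype=float)
X /= X.max(axis=0)   # Scale features: neural networks train poorly on raw scales
y = np.array([0, 1, 0, 1, 1, 0])

# Two hidden layers of nonlinear units: a (very small) "deep" model
model = MLPClassifier(hidden_layer_sizes=(8, 8), max_iter=5_000,
                      random_state=0).fit(X, y)
print(model.predict_proba(X)[:, 1])  # Estimated default probability per applicant
```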

There is a lot of hype around AI, but many users of these tools will flag that there is little depth and there are many inaccuracies in the results AI produces – e.g., ChatGPT. These tools can lead to fake news or, even worse, to a world where you will not be able to distinguish truth in what you see, read, or hear. How can we develop tools that are sustainable and not just built for short-term, and sometimes doubtful, gains?

Indeed, it is important to always keep in mind that data driven machine learning AI methods are never 100% accurate. They are statistical tools. This is the case for all machine learning AI, not only ChatGPT, and it will remain the case no matter what we do. This is why it is critical that we put in place processes to continuously monitor the behavior of AI systems, manage so-called AI incidents, and, when it comes to generative AI such as Large Language Models, always be careful not to take what they “say” as correct. In general, it is critical that we focus not only on the AI but on the “AI+user” – or, as some say, “humans+machines” – system2. The question is not how accurate, biased, legal, or robust the output of an AI is, but how the overall system behaves – whether that system is one user and one AI, an AI embedded in a business process involving multiple components and stakeholders, or something else.

It is about managing the entire lifecycle of the humans-and-machines systems we build, from development and testing, to deployment, monitoring, updating, and finally retiring the AI when, for example, our risk management processes identify behaviors that are outside the acceptable range we determine. AI risk management is a core capability that organizations need to build.
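
As a rough illustration of the monitoring part of this lifecycle, here is a minimal sketch of a rolling accuracy monitor that flags when a deployed model’s behavior falls outside an acceptable range; the class, window size, and threshold are hypothetical choices, not a prescribed method.

```python
# A minimal sketch of lifecycle monitoring: track a deployed model's accuracy
# over a rolling window and flag an "AI incident" when it drops below an
# acceptable threshold. Window size and threshold are hypothetical.
from collections import deque

class AccuracyMonitor:
    def __init__(self, window: int = 500, min_accuracy: float = 0.90):
        self.outcomes = deque(maxlen=window)  # 1 = prediction correct, 0 = wrong
        self.min_accuracy = min_accuracy

    def record(self, prediction, actual) -> None:
        self.outcomes.append(1 if prediction == actual else 0)

    def within_acceptable_range(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return True  # Not enough observations yet to judge
        return sum(self.outcomes) / len(self.outcomes) >= self.min_accuracy

# In production one would call monitor.record(model_output, observed_outcome)
# on each decision, and escalate to risk management whenever
# monitor.within_acceptable_range() returns False.
monitor = AccuracyMonitor(window=100, min_accuracy=0.9)
```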

In more practical terms, directly applied to the use of AI in the jobs and responsibilities of managers, how can we ensure that AI systems recognize and use good quality data to deliver optimized results, and are there any strategies to control the integrity of the data used by the AI systems?

This is absolutely critical. As is often said: Garbage In, Garbage Out (GIGO). Moreover, the data capabilities of organizations (and nations) are in general as important as – and often more important than – their broader AI capabilities, since, in addition to AI, one can develop many other so-called data products and services. Even at the level of the European AI strategy, the data related regulations and strategy are arguably central to determining the future of AI in Europe – not only the more widely discussed AI strategy and regulations such as the EU AI Act.

How does one ensure data quality, but also availability, privacy, security, and so on? A key concept widely seen in the corporate world is that of data governance. This is a very rich area that one should learn about. There are also various frameworks that focus specifically on data quality management3, as well as on what one may call the data supply chain4.
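
As a small illustration of what automated checks behind such frameworks might look like, here is a minimal sketch using pandas; the column names, checks, and data are hypothetical.

```python
# A minimal sketch of automated data quality checks of the kind a data
# governance process might run before data reaches an AI system.
import pandas as pd

def quality_report(df: pd.DataFrame) -> dict:
    """Count basic data quality problems; real checks would be far richer."""
    return {
        "missing_values": int(df.isna().sum().sum()),
        "duplicate_rows": int(df.duplicated().sum()),
        "negative_income_rows": int((df["annual_income"] < 0).sum()),
    }

applicants = pd.DataFrame({
    "annual_income": [45_000, -1, 60_000, 60_000],
    "existing_debt": [5_000, 2_000, None, 8_000],
})
print(quality_report(applicants))
# {'missing_values': 1, 'duplicate_rows': 0, 'negative_income_rows': 1}
# Failing records would be quarantined and traced back through the data
# supply chain rather than silently fed to the model.
```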

Related to the previous two questions: in your recent speech and interview at the EmTech Europe conference in Athens, you focused on the need for the regulation of AI by humans instead of self-regulation. Is it correct to assume, as ongoing conversations and media coverage suggest, that everyone agrees on the need for regulation, but we do not agree on how? Is there an ongoing disagreement in the research community? And, more importantly, how do you propose that we move forward collectively – as governments, professional organizations, the EU, or other institutions? Who should be the fiduciary of humanity? What role do the private companies that develop AI play in this context?

There is general agreement that AI risks exist – nobody can deny that. To begin with, there are safety risks with AI embedded products such as autonomous vehicles. Unfortunately, this simple fact is lost in ideological and emotional debates, particularly on social media.

The main question is not whether there are AI risks that need to be managed, but what they are and how to manage them. The good news is that managing risks is a centuries-old challenge, and we do not need to reinvent the wheel. For example, there is a lot to learn about AI risk management from the rich area of financial risk management; we recently wrote a short article on this topic5. Risk management requires multiple so-called “lines of defense”, both inside a company – for example, ensuring that those managing the development of AI (the “management”) follow so-called good machine learning practices6, that there is an independent internal risk management and compliance business unit, and that there are internal and external auditors – and more broadly, with regulators and external oversight in place. There are also multiple instruments, ranging from so-called voluntary codes of conduct to laws.

Which risk management practices are used in each case is a matter of how big the risk may be. One does not need to build heavy processes for all AI applications; that would be detrimental to innovation. In some cases self-regulation is enough; in others it is not. Moreover, it is important to remember that there is no such thing as a “perfect regulation”. What is key is to continuously improve the regulations. The goal is to have regulations that support innovation, which always also requires safety and trust, as otherwise adoption will fail. It is not about risk management or regulations versus innovation; it is about risk management and regulations to support innovations that are widely – and safely – adopted. That is the balance we should aim for.

It is also very important to understand that there are already many “AI regulations” – except they are not called that. All AI embedded products must comply with existing product safety regulations, for example. In fact, many AI applications will probably be regulated not by “AI regulations” such as the upcoming EU AI Act, but by existing industry or product specific regulations. In that sense, the debate about whether to regulate AI is misleading – and simply wrong: AI is already regulated.

Regarding self-regulation of corporations, an important point is that it cannot work unless there is also strong corporate governance. The recent drama at OpenAI, with the removal from the board – at least for now – of even the company’s founders, is a strong reminder of the fragility of corporate governance7. Can or should we trust companies developing technologies with potentially significant impact to self-regulate, when they cannot even properly govern themselves?

How do you see future CEOs and CFOs responding to dynamic technological and economic changes, particularly in terms of AI and data-driven decision-making, and what skills do you believe executives will need to succeed in this environment?

First, it is important to realize that given the speed of technological innovation, executives need to have a mindset of continuous innovation, adaptation, and organizational change. This requires building organizational capabilities that are based, for example, on continuous upskilling of the workforce, agile technological infrastructure, and strong AI governance. The last one is critical when it comes to leveraging AI innovations. AI governance covers, for example, putting in place organizational structures, teams, and processes to identify new AI business opportunities, understanding their potential to create value but also expose the company to new risks, properly prioritizing AI initiatives, managing business risks due to AI, and ensuring compliance and proper governance in general as noted above – and a lot more.

Regarding skills, we recently did a study based on interviews with C-suite executives at more than 30 companies globally, ranging from large “traditional” ones such as Morgan Stanley, General Motors, LVMH, and Danone, to smaller digital natives such as Deliveroo, and larger digital natives such as Microsoft. The main skills highlighted for senior executives – note that this is not the same as for recent MBA graduates – are along the lines of the ability to understand what is possible with AI, prioritize AI investments, communicate the potential of AI to the organization, set up strong AI governance, manage organizational change to ensure successful AI adoption, and also to inspire, keep in touch with the latest AI developments and business innovations, and stay aware of AI risks and regulations as they evolve. Regarding mid-career managers and MBA graduates, we also wrote an article highlighting relevant skills, which was part of a book launched in Davos a couple of years ago8 – perhaps worth reading.

Finally, about two years ago, with a team at the World Economic Forum, we completed a rich and practical toolkit, the World Economic Forum AI C-Suite Toolkit9, which provides further insights not only into what skills are necessary but also into other aspects executives need to consider to best leverage AI – to maximize the value from AI while minimizing the risks.

How do you anticipate the expansion of the use of AI in various industries over the next years and how do you think AI education, such as the elective course in the i-MBA “Business Decision-Making Using AI”, will help professionals prepare for the changing landscape?

The answer to this question is simple: First, AI will be part of increasingly more business processes, products, and services across increasingly more industries. This is driven not by some ideology about AI, but simply because of the enormous potential of AI to create business – and socioeconomic – value.

Second, this elective was developed exactly to better prepare MBAs for this world. Moreover, a particular characteristic of the course is that it is built around the philosophy of connecting MBAs with experienced executives abroad, to exchange views on how AI is best leveraged today in leading global organizations. Sessions feature executive guests who join the class from the US, Europe, and other parts of the world.

Connecting managers and executives abroad with students in Greece is something I wish to do in general. This course is one example of how to do so. I am very excited to teach in this elective and I look forward to supporting in any way I can the development of AI related capabilities in Greece!

How can AI impact the further development of an MBA student’s career? Will AI be able to provide insights on optimal career paths for different business professionals – perhaps evaluate different career paths and scenarios and thus assist in making the best choice?

First, AI will impact careers as it will in general impact the nature of jobs and industries. MBAs need to have a mindset of continuous learning and adaptation. Moreover, they need to learn how to best leverage AI not only for the success of their organizations – for example in terms of AI governance and strategy – but also to improve their own jobs. Research shows how appropriately using AI can in fact significantly boost the productivity and quality of work10.

Regarding leveraging AI to choose optimal career paths, I would expect this is possible much like it is possible to leverage AI for other types of decisions: AI can, for example, help gain insights on markets and trends much like it helps companies today to do so when, say, they develop and launch new products or services. However, when it comes to using generative AI, such as LLMs, for this purpose, one needs to always keep in mind that their “advice” may be simply wrong. It takes some expertise to judge the quality of AI output and to best leverage it.

One should be careful not to be wrongly influenced by AI generated content or even by an AI based analysis of markets. Low tech “methods” – general awareness of markets and trends, exchanging views with experienced people, and good “old” online search – remain indispensable. Nothing can replace awareness of what is happening in the world!

We warmly thank Professor Evgeniou for sharing his expertise and insights, and we again welcome him as co-teacher of the new elective course “Business Decision-Making Using AI” in the MBA International!

References

1 “Part I: A Very Brief Technical History of AI”, LinkedIn article, available online: https://www.dhirubhai.net/pulse/part-i-very-brief-technical-history-ai-theodoros-evgeniou-tipfe/?trackingId=uBsyvmpnRia%2BqTGRi8qVMg%3D%3D

2 “A Better Way to Onboard AI”, Harvard Business Review, July–August 2020, available online: https://hbr.org/2020/07/a-better-way-to-onboard-ai

3 “Data Driven: Profiting from Your Most Important Business Asset”, Thomas Redman, Harvard Business Review Press, 2008.

4 “Your Data Supply Chains Are Probably a Mess. Here’s How to Fix Them”, HBR Digital, June 24, 2021, available online: https://hbr.org/2021/06/data-management-is-a-supply-chain-problem

5 “Managing Systemic Risks in Tech: Lessons from Finance”, INSEAD Knowledge, August 29, 2023, available online: https://knowledge.insead.edu/economics-finance/managing-systemic-risks-tech-lessons-finance

6 “Good Machine Learning Practice for Medical Device Development: Guiding Principles”, U.S. FDA, available online: https://www.fda.gov/medical-devices/software-medical-device-samd/good-machine-learning-practice-medical-device-development-guiding-principles

7 “OpenAI’s Crisis Is Yet Another Wake-Up Call”, INSEAD Knowledge, November 30, 2023, available online: https://knowledge.insead.edu/leadership-organisations/openais-crisis-yet-another-wake-call

8 “New Skills for Augmenting Jobs and Enhancing Performance with AI”, Chapter 5 of “The Global Talent Competitiveness Index 2020”, available online: https://www.insead.edu/sites/insead/files/assets/dept/fr/gtci/GTCI-2020-report.pdf

9 “Artificial Intelligence: What the C-Suite Needs to Know”, WEF Agenda, January 12, 2022, available online: https://www.weforum.org/agenda/2022/01/artificial-intelligence-c-suite-business/

10 “Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality”, SSRN working paper, available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4573321

