AfrAId?
A liberal arts perspective on why we should be more excited about Artificial Intelligence than afraid of it displacing us in the workplace.
Humans have continually innovated ways of thriving, using intellectual advancement to outsmart the constraints that challenged their survival in nature's cradle. As hunter-gatherers, humans worked collectively to forage for natural resources, and cooperating to hunt game was a necessity for surviving each day as it came. Then came the agricultural revolution, ushering in an era in which humans worked with animals and plants. Growing crops, rearing cattle and milking cows provided sustained nutrition to cohesive familial units, fulfilling not just the day's needs but also securing future prospects. The invention of machines paved the way for the industrial revolution, further evolving social dynamics by uniting humans in factories to work on machines. The resulting gains in productivity enabled society to attend to healthcare, education, nutrition and innovation like never before.
Today our society seems to be transitioning into an era of digitization spearheaded by Artificial Intelligence (AI), which is touted to have a similarly revolutionary effect on the progress of our civilization. An era of humans working not on machines but with machines could empower our society to maximize its intellectual potential, making a plethora of problems currently plaguing us a thing of the past. There is much enthusiasm in anticipation of a more comfortable, resourceful, futuristic society. But cutting through the cacophony of cheers are some clearly audible skeptics, voicing concerns that massive social change will lead to job losses. They are afraid that these algorithms will surpass human capabilities and displace workers in the workplace. In the broader scheme of things, is it really justifiable to be worried about job losses?
Expert opinion...
Experts in industry have varying opinions on this matter. A major study across various industries by the McKinsey Global Institute states that many of today's work activities could be automated by 2055; in the United States, these make up 51 percent of activities in the economy, accounting for almost $2.7 trillion in wages[1]. Jack Ma of Alibaba believes that AI will benefit humanity overall, but that the transition into a more digitized economy could be difficult. Conversely, an OECD report from 2016 claims that only 9% of jobs in the US are susceptible to automation[2]. Mark Zuckerberg, Sundar Pichai and Satya Nadella are all excited for what's to come, with the Google chief proposing that AI will be more important to humans than fire and electricity.
In academia, economists aren't particularly pessimistic about AI's impact on the economy either. Given the infancy of the AI industry, there is little empirical evidence at hand for confident statistical estimates, but economic research into predicting AI's evolution in industry takes an elaborate, holistic approach, accounting for technical, social and economic factors. Economists define a job as a specific bundle of activities[3] and argue that, for the foreseeable future, artificial intelligence may displace humans only in the mundane parts of that bundle. Economists of the American Economic Association investigated the impact of automation on each of the roughly one thousand occupations listed in the US Department of Labor's database, analyzing the extent to which AI can fully automate the abilities each occupation requires, and found little evidence that any occupation's full set of abilities could be automated, and hence little evidence of outright job losses[4]. In a separate study, authors using a similar methodology concluded that the wave of automation would transform how jobs are structured within the firm: while specific aspects of a job could be displaced by AI, there is very little chance of any job being completely automated. In order to apply AI, firms will need to reorganize and reengineer the allocation of tasks and processes to workers within the industrial framework[5].
Another factor is the timeline of adoption. Revolutionary technologies that changed society, like electricity, took a few decades to diffuse across all industries and households. If the applications of artificial intelligence are really as ubiquitous as claimed, then it too could take a long time to become mainstream. An AI-centric approach to processes would take time to diffuse across firms and industries, as well as to percolate down from advanced economies to underdeveloped ones. Therefore, any job losses resulting from massive systemic change would be spread over time. For instance, there were about 3.5 million truck, bus and taxi drivers in the United States in 2015, and if autonomous cars were to displace all of them over a period of 15 years, that would mean roughly 19,000 job losses a month, compared with the 5.3 million jobs created by the economy every month, which provides a comfortable cushion[6]. Additionally, the US is suffering from a low supply of truck drivers, with more older drivers retiring than new entrants joining. According to Derek Leathers, CEO of the trucking company Werner Enterprises, truck drivers' wages have risen by 15% over the last two years, affirming the effect of the industry's demographic challenges[7]. Moreover, companies ranging from Hasbro and Amazon to Coke and Kellogg complain of rising trucking costs, which could translate into rising prices. Nearly everything around you, tables, household items, electronics, has at some point been on a truck and so could become more expensive as a result. Thus, autonomous driving is probably what the trucking industry needs to sustain itself and remain the bedrock of domestic trade.
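To see why that cushion is comfortable, here is a back-of-the-envelope check of the figures cited above, assuming (as the comparison implies) that the displacement is spread evenly over the 15 years:

$$
\frac{3{,}500{,}000 \ \text{drivers}}{15 \ \text{years} \times 12 \ \text{months/year}} \;=\; \frac{3{,}500{,}000}{180} \;\approx\; 19{,}400 \ \text{displaced per month}
$$

Set against the 5.3 million jobs created each month, that is under half a percent of monthly job creation.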
Potential Applications...
AI also has vast applications in healthcare, including radiology, where AI models have surpassed human performance in analyzing MRI scans. But even if radiologists are eventually displaced by machines, they would have enough time to adapt their skillsets to the changing economy and contribute to healthcare more substantially than before. For example, the Association of American Medical Colleges projects that by 2025 there will be a shortfall of between 40,000 and 90,000 doctors[8]. Once the activities of radiology are covered by reliable, tested AI algorithms requiring relatively little human effort, policymakers and universities could encourage more students to pursue medical areas in dire need of human intellect, like cancer research. Meanwhile, a radiologist could focus the strength of their 'real intelligence' on groundbreaking research in related areas like nuclear medicine and radiation safety. Maybe someday in the future, that same radiologist could be cured by new techniques in chemotherapy made possible by an empowered healthcare industry, one that made strides in cancer research only after the adoption of artificial intelligence on an enormous scale.
Similarly, in education we can expect the administrative, mundane tasks that eat into teachers' precious time to be automated, freeing them to give more personal attention to their students. Furthermore, Princeton University will be conducting an experiment in which students learn from a video lecture while an MRI scanner analyzes their brains[9]. This will provide data on how students learn and help policymakers better optimize AI techniques to teach future generations at a more personalized level. These advancements are themselves aligned with expected government policy in the face of a technological revolution: providing more applicable skillsets for the future through quality education was one of the priorities advocated by the Council of Economic Advisers in their 2016 report on artificial intelligence and the economy. Using superior AI to enhance human learning could better equip children poised to enter the workforce in a more digitized environment.
The Bigger Picture...
The economy has historically been adept at providing for its stakeholders through technological revolutions; the industrial revolution ultimately benefited even the Luddites, elevating every Englishman's quality of life and employment before its gains dispersed across the world. The real benefits of AI to society therefore lie not in the industrial work it performs better than we do, but in how well these techniques elevate our standards of living within the framework of our socioeconomic values. Private companies in the healthcare industry would push more resources into cancer research once AI algorithms aid radiology, while a responsible government would ensure that public schools and colleges reap the benefits of artificial intelligence in bringing up the next generations, so that none of them needs to fill in as a truck driver and all of them can apply their intelligence to more creative tasks for society.
Massive research efforts to harness the power of artificial intelligence are underway around the globe. The pharmaceutical industry has emerged as a leader in investing its large R&D budgets in predictive AI solutions, which lower R&D costs in the long run, saving millions, chiefly by forecasting clinical trial outcomes. In healthcare, deep learning models showcased at the most recent Google I/O could detect diabetic retinopathy, a disease hard to catch in parts of the world with few doctors. Down the road, similar technology could be applied in engineering, using drone imagery and 3-D-generated models to assess quality-control issues such as defects in execution and to detect critical events early. Artificial intelligence is expected to herald an era of universal access to manufacturing, medical, education and infrastructure facilities, empowering humans like never before. Nevertheless, public outcry stemming from fear of the unknown is understandable. And in order to understand where we are going, we should look back at where we have come from. To place the mysteries of AI in the broader context of the evolution of human civilization, we should learn from our own history.
Where we have come from...
The discovery of fire was an event of technological significance, allowing our ancestors to protect themselves from animals as well as cook raw meat and improve their daily nutrition[10]. Fire freed the human body to devote itself to better nutrition and mental development rather than to hours spent hunting game and chewing raw meat, thereby ushering in the cognitive revolution that gave us our superior intellect. The agricultural revolution was another technological leap. While fire drove neurological development, the effect of agriculture was more sociological: humans began forming complex networks through their superior cognitive skills, laying the foundation of the structured, cohesive and now globalized modern human society. Perhaps we humans yet again find ourselves at a point of inflection, and artificial intelligence could lead us into the next stage of human society's evolutionary journey.
References:
[1] McKinsey Global Institute, "A Future That Works: Automation, Employment, and Productivity," Executive Summary, 2017.
[2] OECD, "The Risk of Automation for Jobs in OECD Countries," 2016.
[3] Brynjolfsson, Mitchell and Rock, "What Can Machines Learn and What Does It Mean for Occupations and the Economy?," 2018.
[4] Felten, Raj and Seamans, "A Method to Link Advances in Artificial Intelligence to Occupational Abilities," 2018.
[5] Brynjolfsson, Mitchell and Rock, "What Can Machines Learn and What Does It Mean for Occupations and the Economy?," 2018.
[6] Goolsbee, Austan, "Public Policy in an AI Economy," University of Chicago Booth School of Business.
[7] "Driver shortage sends truck haulage rates higher," Financial Times, December 2017.
[8] "A Doctor Shortage? Let's Take a Closer Look," The New York Times, November 2016.
[9] "The role of education in AI (and vice versa)," McKinsey & Company, April 2018.
[10] Gibbons, Ann, "Food for Thought: Did the First Cooked Meals Help Fuel the Dramatic Evolutionary Expansion of the Human Brain?," Science.