Artificial Intelligence: Implications and Considerations for Education, L&D, and the Future of Work
Image by Gerd Altmann from Pixabay


The image above is from the special Jeopardy! episode recorded in 2011, when two all-time Jeopardy! champions agreed to face IBM’s supercomputer, known as Watson. Although it was not the first time artificial intelligence had been introduced to the world, it was certainly one of the more memorable moments, because Watson handily defeated both former champions. In his answer to the final Jeopardy! question, contestant Ken Jennings humorously wrote, “I for one welcome our new computer overlords.” The computer was able to answer questions faster and more accurately than either human counterpart. Jennings would later reveal, “I felt obsolete…I felt like a Detroit factory worker in the ‘80s seeing a robot that could now do his job on the assembly line. I felt like ‘Quiz Show Contestant’ was now the first job that had become obsolete under this new regime of thinking computers” (May, 2013).

In the decade since that episode aired in 2011, the use and sophistication of AI have only accelerated, thanks to a number of factors, including the amount of data we all freely share every day through social media, online shopping, virtual dating, and the use of smartphones and other smart devices.

Whether people are aware of it or not, we are living in the age of artificial intelligence (AI). From the driving directions we follow on our phones to the shopping recommendations we get in our browsers or social media feeds, AI is shaping how humans make decisions, including but not limited to what we buy, what we read, whom we date, how we get around in the world, and how we learn. Automation and artificial intelligence are reshaping the world. So, is Ken Jennings right? Will these thinking machines come to be “our new computer overlords”? According to Burning Glass Technologies’ most recent report, “the automated economy rose 19% annually” (Sigelman et al., 2021, 10) over the past 5 years. Furthermore, an April 2020 report by the OECD found that “recent research estimates that 14% of existing jobs could disappear as a result of automation in the next fifteen to twenty years, and another 32% are likely to change radically” (Vincent-Lancrin & van der Vlies, 2020).

By those numbers it seems that in-demand roles are quickly changing due to AI, but it is important to consider that as AI and automation continue to accelerate, they may be replacing jobs and at the same time creating new jobs or reshaping old ones. A recent study by MIT found that more jobs will be changed by automation than created or destroyed (Autor et al., 2021).

As this paper will explore, there are many things that AI is much better at doing than humans. Ultimately, in education and L&D we need to consider what makes us uniquely human and what about human intelligence cannot be automated or replicated by AI. This means that educators in all sectors and industries need to consider new ways to educate, train, reskill, and upskill people throughout their entire lives so that they are prepared to understand and adapt to the ways that AI is changing, and will continue to change, the world of work and the world at large. Therefore, this paper will explore three critical questions: 1) What is artificial intelligence? 2) How is it impacting the world of work, with a specific focus on education and L&D? 3) In what ways can this technology assist L&D and education, what are its potential pitfalls, and how can they be overcome (if at all)?

Humans have imagined automata for centuries. The “earliest human automata in literature are those described in the Iliad, xviii. 417 ff., in connection with the visit of Thetis to Hephaestus concerning the shield of Achilles. It is observed that these ‘handmaidens of gold’ are endowed with intelligence” (Bruce, 512). In this brief passage the handmaidens of gold are described as having a unique intelligence to assist their masters. Although the Iliad is a fictional tale, this vignette demonstrates humanity’s fascination with artificial intelligence and automation long before we had the means to create such things, and, just as important, our desire to create things that would help us but not replace us.

At its most basic level, artificial intelligence arises when one explicitly programs a machine or system to do something, thereby giving it some level of intelligence. The engineer provides a set of rules for the machine to follow; the human must tell the machine what the rules are, and the machine does not deviate from them. Its intelligence is fixed. Take, for example, a modern washing machine with a detect-fill setting. The machine is programmed to be intelligent enough to recognize when it has filled with enough water to begin washing. This is a very basic form of artificial intelligence.
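This kind of fixed, rule-based intelligence can be sketched in a few lines of code. Nothing here reflects any real appliance’s firmware; the function name and the fill threshold are illustrative assumptions, chosen only to show that the rule is written by a human and never changes.

```python
# A minimal sketch of fixed, rule-based "intelligence", loosely modeled on the
# washing machine example. The threshold is a hypothetical value an engineer
# might choose; the machine follows this one rule and never deviates from it.

TARGET_LITERS = 40  # illustrative fill level set by the engineer

def should_start_wash(current_liters: float) -> bool:
    """Return True once the drum has filled to the engineer's target level."""
    return current_liters >= TARGET_LITERS

print(should_start_wash(25))  # False: keep filling
print(should_start_wash(41))  # True: begin washing
```

However much water flows in, the machine’s “intelligence” is exactly this one comparison; it cannot learn a better threshold from experience, which is the gap machine learning fills.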

Taking this a step further, there are many subdomains of AI, and each can be applied in various ways depending on the goal and the task at hand. Those domains include Machine Learning (ML), Deep Learning (DL), Neural Networks, Natural Language Processing, Computer Vision, and Cognitive Computing.

Machine Learning (ML) is designed so that the machine or system can gain intelligence without explicit programming. As opposed to the earlier example of the washing machine with a fixed intelligence, a machine designed with ML is not given a full list of rules for making decisions; rather, the system learns on its own from patterns it finds in data. An example of this can be seen in Amazon’s shopping experience, where AI learns your consumer habits from what you click on. Then, using a huge amount of data, it adapts its suggestions based on what you have clicked on or purchased in the past. Similarly, in education and L&D there has been a steady rise in adaptive learning, which uses “digital adaptive tools…that can respond to a student’s interactions in real-time by automatically providing the student with individual support” (EdSurge, 15). The goal of this technology is to improve the learning experience and outcomes for learners, as well as to help instructors understand the needs of their learners and provide timely intervention for continuous support. The 2021 MarketWatch report revealed that Machine Learning in education will continue to be a top investment priority in educational technology and that global end-use applications will include intelligent tutoring systems, virtual facilitators, content delivery systems, and interactive websites.
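The contrast with fixed rules can be made concrete with a toy recommender. This is not Amazon’s actual system or any adaptive-learning product; the session data and the co-occurrence approach are illustrative assumptions. The point is that no human writes a rule like “suggest a mouse with a laptop”; the pattern is extracted from the click data itself.

```python
from collections import Counter
from itertools import combinations

# Toy click history: each inner list is one shopping session (invented data).
sessions = [
    ["laptop", "mouse", "keyboard"],
    ["laptop", "mouse"],
    ["keyboard", "monitor"],
    ["laptop", "keyboard"],
]

# "Learn" by counting which items appear together, rather than by hand-written rules.
co_occurrence = Counter()
for session in sessions:
    for a, b in combinations(sorted(set(session)), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1

def recommend(item: str, k: int = 2) -> list[str]:
    """Suggest the k items most often seen alongside `item` in past sessions."""
    scores = Counter({b: n for (a, b), n in co_occurrence.items() if a == item})
    return [b for b, _ in scores.most_common(k)]

print(sorted(recommend("laptop")))  # ['keyboard', 'mouse']
```

Feed the same code different sessions and the suggestions change with no reprogramming, which is the essential difference from the washing machine’s fixed rule. Real adaptive-learning systems apply the same idea to interaction data at far larger scale.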

A potential limitation of this technology is that the artificial intelligence depends on predictable behavior from the user. For a student or learner who clicks on items and topics and then uses the adaptive resources that follow, the AI works very well: the user is doing exactly what the system expects. However, if the user starts clicking on items at random rather than following the sequential, structured pattern, the AI can get confused and fail to provide useful adaptive instruction. Therefore, human intervention and instruction will continue to be an important part of the learning experience. A teacher or L&D professional will still need to provide some level of support, instruction, and motivation to ensure the technology is being used appropriately by the learner.

Deep Learning and Neural Networks were born when Machine Learning hit the limits of what it could successfully be programmed to accomplish. In general, ML performs poorly when it comes to classifying audio, images, and unstructured data. Therefore, researchers began to explore how to program a machine to mimic what the human brain can do, meaning it can adapt, learn, and respond to more human inputs such as audio, images, and unstructured data. An example of this is how Google Photos will create suggested folders for organizing your pictures. It can recognize similar traits in your images and suggest that photos of a certain person or event be clustered together under titles such as “family,” “bikes,” or “party.”
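The clustering step can be sketched in miniature. In real photo services, a deep neural network first maps each image to a high-dimensional feature vector; the 2-D vectors, labels, and values below are invented stand-ins for those embeddings, and the nearest-example rule is a deliberately simplified sketch of how similar photos end up grouped together.

```python
import math

# Toy stand-ins for image embeddings: in real systems a deep neural network
# produces these vectors from pixels; here the vectors and labels are invented.
labeled = {
    "family": [(0.9, 0.1), (0.8, 0.2)],
    "bikes":  [(0.1, 0.9), (0.2, 0.8)],
}

def nearest_cluster(vec):
    """Assign a new photo's vector to the group with the closest known example."""
    best_label, best_dist = None, math.inf
    for label, examples in labeled.items():
        for ex in examples:
            d = math.dist(vec, ex)  # Euclidean distance in embedding space
            if d < best_dist:
                best_label, best_dist = label, d
    return best_label

print(nearest_cluster((0.85, 0.15)))  # family
print(nearest_cluster((0.15, 0.85)))  # bikes
```

The crucial point, which the Google Photos incident below makes painfully clear, is that the grouping is only as good as the examples the embedding network was trained on.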

An important anecdote to cite here is that in 2015 Google came under major scrutiny after its artificial intelligence software mislabeled two Black people as “gorillas,” demonstrating that AI is only as good as the developers who create it. In this case, the developers’ unconscious bias led to the AI being trained primarily on White faces and much less frequently on the faces of Black people. “Artificial intelligence expert Vivienne Ming said machine-learning systems often reflect biases in the real world. Some systems struggle to recognize non-white people because they were trained on Internet images which are overwhelmingly white” (Barr, 2015). This bias ended up being baked into Google Photos and led to terrible consequences. As we seek to use and understand AI, this is an important story to keep in mind as we consider a future with responsible and ethical AI design.

While AI can lead to terrible forms of bias and discrimination, as explored in the Google Photos anecdote, it can also lead to incredible innovation and exciting new ways to create a more equitable and accessible society. Natural Language Processing is the science of a machine reading, understanding, and interpreting language. This type of AI can be seen in speech-to-text applications. One educational application of the technology has been in use at Beijing Union University since 2016. They use “an intelligent speech recognition system that simultaneously converts teacher’s spoken language into text subtitles on a large screen. In the classroom, students with disabilities can follow the teaching through a multi-channel and multi-dimensional information input combining sign language, voice port, spoken language subtitle and text handout” (OECD, 9). This example of AI in the classroom demonstrates a scalable technology that could benefit students and professionals who need communication adapted to them in a variety of modalities.

In an educational setting, students with certain disabilities need classroom aides or other human intervention to participate and be successful in the classroom. This can prove challenging if there are not enough aides available for classroom intervention or not enough school funding to provide the appropriate support in each classroom. However, an intelligent speech recognition system has the potential to alleviate some of those human capital challenges and give a larger number of students the support and access they need.

Computer Vision algorithms try to understand an image or video in order to automate visual tasks that humans do. There are several current applications of this in education and L&D, including engagement detection in distance learning, learning management in physical schools, automated proctoring of online exams, and handwriting recognition (Yin, 2019). In the case of automated proctoring, using a webcam, internet connection, and microphone, the automated proctor is able to detect behaviors linked to cheating and flag suspicious activity, which can lead to the AI shutting down the exam.

The use of automated proctoring of online exams has risen drastically since the onset of COVID-19. Universities and professors all needed to shift to online classes and exams during the pandemic, and automated proctoring was thus the logical intervention for many. However, with the rise in use there has also been a rise in concerns about the ethical implications of this technology. The University of California, Santa Barbara, opposed the technology. Citing overall student privacy concerns, and those of undocumented students in particular, the campus’s Faculty Association Board wrote, “We recognize that in our collective race to adapt our coursework and delivery in good faith, there are trade-offs and unfortunate aspects of the migration online that we must accept. This is not one of them” (Flaherty, 2020). While the technology has good intentions, many have concerns over privacy, and some argue that blaming the automated proctoring misses the root problem: how students are traditionally assessed.

Bill Fitzgerald, a privacy researcher at Consumer Reports, argues that “when the evaluation is predicated on the idea that the student is going to cheat and they’re going to be surveilled into compliance, that’s just not an acceptable starting point” (Flaherty, 2020). Rather, educators should adopt more creative ways to assess students’ knowledge. Assessments should focus on students’ higher-order reasoning, testing their ability not just to memorize and regurgitate information but to use the information they are gaining to think critically, solve problems, and apply information to larger concepts, thereby creating new knowledge. In many ways this is the direction education needs to move in, not only because it provides a richer learning experience, but because AI is better at memorizing and regurgitating information than humans are. We can no longer teach humans to be like computers, because computers are winning. We need to consider how to cultivate a uniquely human intelligence, one focused on creativity, emotional intelligence, and working collaboratively with others.

Cognitive Computing attempts to mimic a human brain by analyzing text, speech, images, and objects to produce the desired output. This was most notably seen with IBM’s computer Watson, which beat the human champions on the TV game show Jeopardy!. After seeing this application, a doctor approached IBM about turning Watson into an oncology adviser. Using all the information from Memorial Sloan Kettering Hospital’s clinical trials, they were able to program Watson to respond to oncologists’ questions, thereby driving faster, data-driven treatment plans (Taneri, 2020). Here is yet another example of how AI can assist professionals by analyzing and processing data much more quickly and efficiently than a human can. In this case the technology is used to greatly speed up the rate of care, allowing doctors to focus on the more human side of treatment, such as managing patients’ emotional wellbeing, communicating in an empathic manner, and coordinating with their team to ensure the highest level of care is administered. One can imagine a future where healthcare workers’ time is freed up by the rise of reliable AI-driven treatment plans, allowing time for education that places a greater focus on training healthcare professionals in the critical human skills of empathy and communication.

While there is much positive potential in the utilization of AI, it is important to keep in mind the limitations of this technology and to consider the ethical implications of using it. Of particular importance will be the ongoing focus on “transparency, explainability and accountability of AI systems in education…[and] ensuring that use of AI systems to serve human-centered values in protecting and securing (personal) data” (Vincent-Lancrin & van der Vlies, 2020). Notably, the rise and effectiveness of AI is due to the large amounts of data it now has access to and continues to be fed through multiple private and public channels, including our browsers, cameras and videos, smart devices, security and traffic cameras, and many other devices we use daily. Therefore, it is important to keep a keen eye on who has access to this data and how it is being used. The potential for bad actors and the abuse of private information is enormous.

Consider for a moment the adaptive tools of machine learning explored earlier in this paper. The machine is making decisions about what and how you are going to learn based upon the information it is getting from you. We need to consider questions such as: How long is that data stored? What could it be used for in the future? Who might have access to it? If an AI system is helping people by adapting to how each learner best learns and providing personalized learning experiences, there could be a future where that information is used to make suggestions about continuing education, college admissions, and career paths. At face value this seems a natural extension of how AI can assist us in making decisions; however, it is important that AI remain assistive technology and not become determinative of what we do and how we do it.
Anyone who has used LinkedIn for job searching need look no further than some of the job suggestions LinkedIn sends that seem completely misaligned with our interests, experiences, or values. Likewise, an AI that makes suggestions to young students about career paths or colleges based on previous academic performance or interests may limit a student’s agency in the process of exploration. It is important that students and learners are not directed by AI, but rather continue to see it as a helping agent whose suggestions they can critically consider and ultimately reject if the information does not appear to be relevant or helpful.

Furthermore, there has been emerging research and use of AI in L&D and Higher Ed to create learning pathways based upon previous job roles, skills, interests, and completed courses or certifications. As Bayireddi explains, "AI technology is reinvigorating the role of career pathing in employee satisfaction and retention, succession planning, workforce planning, and overall productivity and profitability. By doing much of the heavy lifting, AI can efficiently match employees to suitable next-step positions based on their profiles, similar to how it aligns external candidates to recommended roles" (Bayireddi, 2020). Certainly, this is a positive outcome and can help the workforce keep up with the ever-changing needs of the business world. This might also help ensure more equity among all levels of a company. Skill development, career pathing and access to "next step" jobs will not be relegated to the lucky few that have found a strong advocate or advisor within the company.

However, consider the user experience with Netflix for a moment. Anyone familiar with Netflix knows that AI is used to recommend shows based on what the viewer has previously watched. This can be very helpful for ensuring consumers find programs they like, but after consuming hundreds of hours of documentaries, what happens if you are in the mood for a comedy? It can sometimes be difficult to find a different genre because Netflix keeps recommending documentaries. We could see the same challenges emerge with AI-powered learning pathways. A professional or student who has been on a very specific path will get recommendations from AI that keep them on that path. Changing job paths or majors may become more challenging if AI continues to recommend similar paths and does not allow deviation. Innovation and creative thinking typically come out of challenges where we must consider possible solutions and, in some cases, dream up something that has not yet existed. Students, professionals, and educators must not lose the ability to be creative and dream as we increasingly use AI to assist us.

AI can enhance what humans do, and it is imperative that we educate, reskill, and upskill people so that they can work collaboratively with AI, understand its benefits and limitations, and make thoughtful, well-informed decisions about how and why they are using it. An epistemological approach is critical to teach and nurture in all learners so that they are constantly questioning the source of knowledge and keeping a critical eye on the information AI gives them. AI is very good at making decisions based on large amounts of data but cannot explain how it arrived at an answer or identify bad, corrupted, or biased data. As Hong Qu explains, “Computer code has a lot of assumptions built in. Any time you write code to do something, it’s representing your worldview and values…The inequity inherent in codifying these rules in software code has real, high-stakes impact on people’s lives and livelihood, such as peddling confirmation bias and extremism” (Rubenstein, 2021).

A real-world example of Qu’s critique is evident in Microsoft’s 2016 AI experiment on Twitter. Tay was a conversational chatbot designed to learn and get smarter the more it interacted with humans on Twitter. However, in less than 24 hours, Tay’s conversations turned from playful to “racist, inflammatory and political statements” (Hunt, 2016), demonstrating that AI is only as effective as the data it is fed. Bad data in, bad results out; and since AI is unable to think critically about the data it is being fed, it is up to humans to understand the limits of AI so that we do not find ourselves in a future where AI is trusted implicitly to answer all of our questions and solve all of our problems.

As explained by Taneri, “Computers best humans in repetitive and predictive tasks; in jobs that rely on computational power, big data, and decisions made based on distinct rules; and in enumeration and evaluation of data. Humans, on the other hand, best machines in experiencing genuine emotions and building relationships; in formulating questions and articulating ideas; in making strategic decisions; and in making products and results to the benefit of humans” (Taneri, 1). It is, therefore, both naïve and foolish to think that we can continue to train and educate people in the same ways we have. As we consider the ways in which AI may shape or change the future of work, L&D, and education, it is important to focus on explicitly developing skills that cannot be automated or replaced by AI. Those skills can be broken down into four domains of “interpersonal skills (e.g., communications, teamwork, and leadership), intrapersonal skills (e.g., adaptability, initiative, discipline, ethics, persistence, and the ability to reflect on your process of learning to help you learn more effectively), cognitive skills and abilities (e.g., problem solving, critical thinking, and creativity), and technological skills (stand-alone as well as for enrichment of the first three skills and abilities)” (Taneri, 2020). It is in the best interest of both education and L&D to focus on these domains, as doing so will mean a workforce prepared to address the needs of today and the needs of the future.

To conclude, we have defined artificial intelligence and explored some of the current and future applications of this technology in education and L&D. We have also explored the implications and uses of this technology, both positive and negative. As we usher in what many are calling the Fourth Industrial Revolution, with the rapid emergence and integration of AI, robotics, and other technologies, it is important to remember that “a critical part of education is building student agency – helping students own their learning, make decisions, become lifelong learners, and develop their metacognitive skills” (EdSurge, 2016). Therefore, all people who work in the education and training space must focus on how AI can enhance what we do, how we use this technology ethically, and how we can encourage and teach people the skills that cannot be replaced by machines.

References

7th Empire Media. (2020). Coded Bias. https://www.codedbias.com/about.

Advani, V. (2021, February 3). What is Artificial Intelligence? How Does AI Work, Applications and Future? GreatLearning Blog: Free Resources what Matters to shape your Career! https://www.mygreatlearning.com/blog/what-is-artificial-intelligence/.

Autor, D., Mindell, D., & Reynolds, E. (2021). (rep.). The Work of the Future: Building Better Jobs in an Age of Intelligent Machines. MIT Work of the Future. Retrieved from https://workofthefuture.mit.edu/research-post/the-work-of-the-future-building-better-jobs-in-an-age-of-intelligent-machines/

Barr, A. (2015, July 1). Google Mistakenly Tags Black People as ‘Gorillas,’ Showing Limits of Algorithms. https://www.wsj.com/articles/BL-DGB-42522.

Bruce, J. D. (1913). Human Automata in Classical Tradition and Mediaeval Romance. Modern Philology, 10(4), 511–526. https://doi.org/10.1086/386901

EdSurge. (2016). (rep.). Decoding Adaptive (pp. 1–60). London: Pearsons.

Flaherty, C. (2020, May 11). Online proctoring is surging during COVID-19. https://www.insidehighered.com/news/2020/05/11/online-proctoring-surging-during-covid-19.

Hunt, E. (2016, March 24). Tay, Microsoft's Ai chatbot, gets a crash course in racism from Twitter. The Guardian. https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter?CMP=twt_a-technology_b-gdntech.

Kasperkevic, J. (2015, July 1). Google says sorry for racist auto-tag in photo app. The Guardian. https://www.theguardian.com/technology/2015/jul/01/google-sorry-racist-auto-tag-photo-app.

Kolchenko, V. (2018). Can Modern AI replace teachers? Not so fast! Artificial Intelligence and Adaptive Learning: Personalized Education in the AI age. HAPS Educator, 22(3), 249–252. https://doi.org/10.21692/haps.2018.032

Luckin, R. (n.d.). Ai and Education: the Reality and the Potential. Museum of London. https://knowledgeillusion.blog/2019/04/30/ai-and-education-the-reality-and-the-potential/.

MarketWatch. (2021, May 25). Global Machine Learning in Education Market 2021 Research by Top Manufacturers, Segmentation, Industry Growth, Regional Analysis and Forecast by 2026. MarketWatch. https://www.marketwatch.com/press-release/global-machine-learning-in-education-market-2021-research-by-top-manufacturers-segmentation-industry-growth-regional-analysis-and-forecast-by-2026-2021-05-25.

May, K. T. (2013). How did supercomputer Watson beat Jeopardy champion Ken Jennings? Experts discuss. TED Blog. TEDx. https://blog.ted.com/how-did-supercomputer-watson-beat-jeopardy-champion-ken-jennings-experts-discuss/.

Rubenstein, L. (2021). What Artificial Intelligence Can't See. Wesleyan University Magazine, (1), 25–27.

Sigelman, M., Bittle, S., Hodge, N., O'Kane, L., & Taska, B. (2021). (rep.). After the Storm: The Jobs and Skills that will Drive the Post-Pandemic Recovery. Burning Glass Technologies. Retrieved from https://www.burning-glass.com/research-project/after-storm-recovery-jobs/

Taneri, G. U. (2020). ARTIFICIAL INTELLIGENCE & HIGHER EDUCATION: Towards Customized Teaching and Learning, and Skills for an AI World of Work (pp. 1–10). Berkeley, CA: University of California - Berkeley.

Vincent-Lancrin, S., & van der Vlies, R. (2020). (working paper). Trustworthy artificial intelligence (AI) in education: promises and challenges (pp. 1–17). Paris: OECD.

Yin, D. (2019, December 3). Computer Vision in Education - What Can AI See. Medium. https://medium.com/alef-education/computer-vision-in-education-what-can-ai-see-84d679d12a79.
