6 Big Conversations about AI that L&D Should be Having (Part 1)

With thanks to Christopher King and ChatGPT for writing support


Last week I turned the usual webinar style upside down to host an open conversation about artificial intelligence in learning & development. The folks at Training Magazine Network were gracious enough to let me play with their format: instead of a speaker showing up with content, I brought questions for the audience to grapple with. Big existential questions about how AI will affect our future, and how we will impact the future of AI.

I asked 6 influential thinkers in the space to pose a wicked question via video, then we discussed each challenge. Below are the questions and AI-supported summaries of the conversation. Note that these aren't the only big questions ... just 6 that came to mind (with more on the way!).

For the full session recording, head over to Training Magazine Network. While this article summarizes the conversation, the real thing was far more engaging and collaborative. I invite you to join me for a future one of these!

And now on to the summary ...


Topic 1. Evolution of Educators and L&D Professionals

How will the role of educators and L&D pros evolve when AI can personalize experiences, have empathy, and be a personal mentor?

Speaker: Josh Cavalier, JoshCavalier.ai

Video Summary: Josh Cavalier explores how AI could fundamentally change the roles of educators and L&D professionals. He reflects on the evolution of technology, from early computers to mobile devices, and how each shift has transformed the way learning happens. However, AI introduces a deeper challenge, raising questions about what it means for L&D when machines can deliver personalized experiences, exhibit empathy, and act as mentors. Josh emphasizes that while AI offers tremendous potential, it also requires L&D professionals to redefine their roles—moving away from traditional content delivery and toward facilitation, oversight, and mentorship.


Key Points Raised by the Group:

AI Enhances Productivity but Cannot Replace Human Empathy

Participants discussed how AI tools can improve efficiency in learning but cannot replicate the nuances of human connection. AI can automate administrative tasks, personalize learning paths, and provide continuous feedback, but it lacks the emotional intelligence needed to foster soft skills effectively. One participant noted:

“AI is not human. It cannot teach soft skills effectively. I think our roles may shift from hard skills to soft skills.”

The group agreed that L&D professionals will need to focus more on developing interpersonal skills and empathy, areas where AI falls short, while using AI to handle routine tasks.

L&D Shifts to Strategy and Facilitation

As AI takes over content creation and delivery, participants emphasized that L&D professionals will shift from being instructors to facilitators and strategic partners. AI will serve as a tool for personalized learning, but humans will still need to curate and validate its outputs. One participant shared the idea that “someone (us) has to place the guardrails,” emphasizing the need for L&D professionals to ensure that AI-driven learning aligns with business needs and ethical standards.

Mastering AI Literacy to Guide and Validate Outputs

There was strong consensus that L&D professionals must develop AI literacy to stay relevant. Participants stressed that the effectiveness of AI depends on how well it is prompted, managed, and integrated into the learning process. L&D teams will need to train others to use AI effectively while continuously monitoring and validating the results. As one participant put it:

“We must embrace it so we can teach it to others. Others in our arena may use it and if we do not, we will fall behind.”


Topic 2. Sustainable AI

AI and Climate Change

Speaker: Margie Meacham, LearningToGo

Video Summary: Margie Meacham discusses the paradox of AI’s role in addressing climate change. AI has the potential to optimize energy systems, predict environmental disasters, and drive solutions for sustainability. However, these benefits come at a cost—AI consumes enormous amounts of energy, with data centers alone projected to use 3–8% of global electricity by 2030. Margie urges L&D professionals to reflect on the trade-offs between the environmental impact of AI and its potential benefits. She challenges participants to consider how L&D can promote sustainable AI use while balancing efficiency with environmental responsibility.

Key Points Raised by the Group:

AI’s Environmental Impact Must Be Addressed

Participants were surprised by the extent of AI’s energy consumption, with some admitting they hadn’t been fully aware of its environmental impact. There was discussion about the need for L&D professionals to learn about and address these issues, promoting responsible AI use within their organizations. As one participant shared:

“I admittedly don't know a lot about the environmental impact of AI (how much energy it consumes), so I guess my first task as L&D is to learn more about that.”

L&D Can Promote Energy-Efficient AI Use

The group agreed that L&D professionals could advocate for policies and practices that align with both business goals and environmental sustainability, making AI’s use more intentional. Several participants discussed how AI could improve its own efficiency through better design and management. There was enthusiasm about the potential of renewable energy and emerging technologies like quantum computing to mitigate AI’s environmental footprint. One participant suggested:

“The energy usage of AI has to be taken into account. Maybe the increased usage of AI can drive the development of better sources of energy.”


Topic 3. Redefining Competency

Do we need to redefine what it means to be competent when AI is doing most of the heavy lifting?

Speaker: Christopher King, CRK Learning

Video Summary: Chris King explores how AI is reshaping the concept of competency, shifting it from mastering tasks to developing higher-order skills. As AI increasingly handles routine work, competency will no longer focus on technical skills alone but on strategic oversight, critical thinking, and the ability to collaborate effectively with AI systems. King emphasizes that L&D professionals need to drive this transition by fostering continuous learning and helping workers adapt to an AI-enhanced environment. He challenges L&D to align competency frameworks with AI’s capabilities, ensuring that employees can leverage these tools without losing essential human judgment and creativity.

Key Points Raised by the Group:

AI Literacy is a New Core Competency

Participants emphasized that AI literacy is becoming a critical skill in today’s workforce. It is not just about knowing how to operate AI tools but about understanding their limitations and ensuring their outputs align with business objectives. Participants noted that AI tools, if not properly managed, could produce inaccurate or misleading results, necessitating a blend of human oversight and AI collaboration. As one attendee put it:

“Results always need to be verified. AI hallucinations are real and hard to detect. We need to be good as the best journalists in checking and verifying.”

Strategic Thinking and Problem-Solving Take Priority

The group agreed that L&D needs to shift focus from task-based skills to fostering problem-solving abilities and strategic thinking. As one participant explained, AI can handle repetitive work, but human skills will still be needed to interpret results and make decisions. This reflects the growing importance of adaptability and critical thinking over rote knowledge, reinforcing the need for employees to learn how to apply AI tools to complex challenges effectively.

“Competence has already shifted with the use of the internet. I don't commit to memory anything I can find again quickly.”

The group concluded that maintaining competency in an AI-driven landscape requires both employees and organizations to embrace lifelong learning.


Topic 4. Missed Ideas and Innovation

What happens to the ideas we never get to see?

Speaker: Scott Provence, elb

Video Summary: Scott Provence raises a thought-provoking question about the ideas that get overlooked or lost when AI systems prioritize the most predictable outcomes. He argues that AI excels at producing efficient, statistically probable solutions but tends to suppress outliers—those unconventional ideas that often lead to true innovation. Provence challenges L&D professionals to reflect on how organizations can foster creativity and innovation in an AI-driven environment. He emphasizes the importance of recognizing and preserving these "discarded" ideas, as they often play a crucial role in reaching groundbreaking solutions.

Key Points Raised by the Group:

AI Optimizes for the Average, Risking Innovation Loss

Participants agreed that AI’s pattern-based approach risks flattening creative processes by favoring common or predictable outcomes. This tendency could stifle innovation, as AI may overlook unconventional ideas or solutions that fall outside statistical norms. As one participant remarked:

“AI creates the most likely response based on MATH ... but innovation comes from the outliers.”

This sparked a discussion on how L&D can counteract AI’s bias toward the average by creating space for more experimental and creative thinking.

The Role of Humans in Capturing and Revisiting Discarded Ideas

The group discussed the importance of human involvement in ideation to capture and revisit ideas that AI may overlook or dismiss. Several participants noted that discarded ideas can provide valuable insights and spark innovation later on. One participant compared the importance of tracking ideas to revisiting math work:

“It’s a similar loss we see when failing to ‘show your work’ in math class. Innovation is a uniquely-human endeavor.”

Using AI to Push Boundaries with Outlier Thinking

While AI tends to optimize for predictable results, participants explored the potential of using AI intentionally to explore edge cases and unconventional ideas. They discussed how thoughtful prompt engineering could help AI generate less conventional outputs, offering new avenues for creativity. As one participant suggested:

“Why couldn’t we use AI to consider only the outliers when we’re trying to push boundaries?”


Topic 5. AI, Social Equity, and Bias

How does the adoption of AI in the decision making and regulatory processes affect our ability to achieve true social equity, and could it inadvertently reinforce the biases it aims to dismantle?

Speaker: Jess Jackson, MBA, M.Ed, Social Equity Director, State of Minnesota

Video Summary: Jess Jackson explores the intersection of AI, decision-making, and social equity, raising concerns that AI systems—while powerful—can reinforce the same biases they aim to overcome if not managed carefully. She emphasizes that AI is increasingly involved in shaping policies and regulations, making it essential to address issues of data integrity, transparency, and oversight. Jackson urges L&D professionals to take an active role in mitigating bias within AI models by curating data sources intentionally and ensuring that AI is used to promote fairness rather than perpetuate existing inequalities.

Key Points Raised by the Group:

AI Systems Can Reinforce Existing Inequities

Participants noted that because AI relies on historical data, it tends to replicate the biases embedded in that data, which can perpetuate systemic inequalities. The group thought it was important to challenge AI outputs while proactively identifying and mitigating biases within data sets. One participant reflected on how AI might “maintain the status quo” because it processes past information without accounting for the need to foster equity. Another echoed this concern, remarking:

“It’s looking at data from the past. It’s not looking for social equity.”

The Role of Prompt Engineering and Human Oversight in Mitigating Bias

The group discussed the importance of thoughtful prompt engineering and continuous human oversight to reduce bias in AI systems. Participants acknowledged that bias is often inevitable in large language models, making it essential to plan for this from the start. As one participant explained: “We need to assume that AI will be biased and plan on counteracting it.” As L&D professionals, we can help by training teams on how to write unbiased prompts and carefully vet AI-generated content to ensure it aligns with organizational goals for equity.

AI as a Tool, Not a Solution, for Social Equity

Participants highlighted that while AI can assist in promoting equity, it cannot replace human judgment and intervention. They noted that AI’s effectiveness depends on how it is used and the people managing it. Several participants emphasized the need to approach AI with caution, ensuring that it serves as a tool to support equity goals rather than replace human efforts. As one participant stated:

“Anything AI-generated is not gospel. We need to vet everything.”


Topic 6. Ethics of AI Use

Speaker: Josh Penzell, elb

Video Summary: Josh Penzell raises provocative questions about AI ethics, asking whether it is fair to create AI that mimics human behavior without possessing humanity. He highlights how the rapid advancement of AI makes it difficult to distinguish between human and AI-generated actions, raising concerns about authenticity, privacy, and accountability. Penzell challenges L&D professionals to reflect on the moral implications of deploying AI in ways that blur the lines between human agency and automated processes, urging organizations to adopt thoughtful frameworks for ethical AI use.

Key Points Raised by the Group:

AI Can Blur the Line Between Human and Machine

Participants expressed concerns about AI's ability to mimic human behavior so effectively that it becomes difficult to tell the difference. This sparked discussion about the potential misuse of AI-generated content and the challenges it poses for transparency and trust. One participant commented:

“Not everyone can tell what’s AI now.”

This highlights that responsible use of AI tools includes ensuring users understand when they are interacting with AI versus a human.

Ethical Use Requires Human Responsibility and Oversight

The conversation stressed that AI itself is not inherently ethical or unethical—its impact depends on how it is used. Participants discussed the importance of human oversight and the need for rigorous validation of AI-generated content. One attendee compared AI to a tool, stating:

“Is a hammer ethical? What if I use it to break a window to steal from a store?”

This illustrates the group’s consensus that ethical outcomes depend on human intentions and the responsible deployment of AI tools.

AI’s Potential for Harm Must Be Considered

Participants also raised concerns about AI’s ability to cause harm, whether through deepfakes, misinformation, or intellectual property violations. The group stressed that L&D professionals must lead discussions on AI’s potential risks and limitations, advocating for safeguards to protect users and organizations by embedding ethical considerations into the design and implementation of AI systems. As one attendee reflected:

“We are fast approaching a time when we can’t validate what is AI or not, and that means that our own safety, ownership, copyright, etc. are in jeopardy.”



Finally, a heartfelt thank you to Training Magazine Network, the six brilliant contributors, and all the participants who leaned into this experiment with curiosity and thoughtful engagement.

This conversation is just the beginning. I look forward to continuing these dialogues and gathering even more wicked questions that challenge how we think about AI in learning and development. As we move forward, I encourage L&D professionals to look beyond AI’s current role in speeding up content production and explore how it can drive deeper, strategic transformations in our field. The future of AI in L&D isn’t just about doing things faster—it’s about doing the right things, smarter. Let’s keep the conversation going.

