Everyone Wants to Teach "AI Skills" – But What Are "AI Skills," Really?

Introduction

Artificial intelligence (AI) is infiltrating every industry, and business leaders are racing to equip their workforces with so-called "AI skills." The refrain "Everyone wants to teach 'AI skills' but nobody knows what 'AI skills' are as of today, let alone for the future" has become a common pain point in executive discussions. As organizations invest in AI and automation, they face a dilemma: what exactly should employees learn to stay relevant in an AI-driven future? As Chief AI Officer (CAIO) of SYRV.AI, I confront this question daily. In this article, we explore the meaning of "AI skills" across industries, examine current research and real-world efforts to define and teach these skills, and address the conflicting viewpoints on how to develop an AI-ready workforce. We will propose a working definition of AI skills, highlight problematic definitions and debates, and provide insights and scenarios to guide executive decision-making. The goal is to give business owners and executives a clear, evidence-based perspective on building AI competencies for today and tomorrow.

Literature Review & Theoretical Framework

Defining "AI Skills": Despite the buzz, "AI skills" lacks a consistent definition. In technical terms, AI skills often refer to the knowledge and competencies required to develop, implement, and manage AI systems and applications (Top 10 AI Skills You Need to Land Your Dream Job in 2024 | FDM Group UK). This includes hard skills like programming, machine learning (ML) algorithms, data analytics, mathematics, and familiarity with AI frameworks, as well as considerations like AI ethics. For example, a talent development blog defines AI skills as encompassing “programming, machine learning, data analytics, math, ethical considerations, and more,” noting these are essential for professionals building or working directly with AI.

However, many experts argue that AI skills go beyond building algorithms. The concept of AI literacy has emerged to describe a broader set of skills for the general population. AI literacy is defined as “the knowledge and skills that enable humans to critically understand, use, and evaluate AI systems and tools to safely and ethically participate in an increasingly digital world.” (Revealing an AI Literacy Framework for Learners and Educators – Digital Promise). This definition, from a 2024 educational framework, emphasizes understanding AI’s capabilities and limitations, ethical use, and critical thinking over coding. In other words, an office manager using an AI tool needs a different skillset (e.g. knowing how to interpret AI outputs, ask the right questions, and avoid bias) than an engineer creating an AI model.

To synthesize these perspectives, we can think of AI skills as a spectrum of competencies ranging from highly technical to broadly cognitive:

  • Technical AI Skills: The ability to design and develop AI solutions. This includes software engineering, data science, machine learning modeling, and AI architecture. These skills are mostly needed by specialists (e.g. data scientists, ML engineers).
  • Data & Analytical Skills: Competence in handling data for AI – gathering, cleaning, analyzing, and interpreting data to inform AI systems (4 Types of AI Skills for Business: Full Breakdown). Data literacy is foundational for AI; even non-coders benefit from understanding how data drives AI outcomes.
  • AI Usage Skills (AI Literacy): The ability to effectively use AI tools and interpret their results. This spans prompt engineering (crafting effective inputs for generative AI), evaluating AI outputs critically, and integrating AI recommendations into decision-making. Professionals with AI literacy know how to “ask the right questions to get the best answers and spot ... inaccuracies” from AI (PwC brings private generative AI tool to internal employees | CIO Dive).
  • Ethical and Governance Skills: Understanding AI ethics, bias, and compliance. As AI is a powerful but double-edged sword, skills in responsible AI use are crucial. This means knowing how to avoid biased outcomes, protect data privacy, and ensure AI is used in alignment with laws and values (4 Types of AI Skills for Business: Full Breakdown). For instance, Cambridge Spark’s industry framework highlights ethical and compliance skills as a core category, noting that with greater AI power, “ethical considerations are crucial” to use AI responsibly.
  • Business and Soft Skills: The skills to implement AI solutions in a business context – change management, strategic thinking, cross-functional collaboration, and domain knowledge to identify AI opportunities. Soft skills like creativity, adaptability, critical thinking, and emotional intelligence are increasingly cited as part of “AI skills” because they complement AI automation (3 things we learned about AI and skilling from experts | World Economic Forum) (AI is shifting the workplace skillset. But human skills still count | World Economic Forum). Paradoxically, as routine tasks are automated, uniquely human skills (leadership, empathy, cultural awareness) become more important, not less. The World Economic Forum’s Future of Jobs analysis finds that seven of the top ten skills on the rise are soft skills in the age of AI.

Conflicting Definitions: Because AI touches many domains, different stakeholders emphasize different facets of this spectrum. Academic literature often frames AI skills in terms of competency frameworks. For example, UNESCO’s new AI Competency Framework for educators defines five dimensions of teacher competencies in AI: a human-centered mindset, AI ethics, AI foundations, AI pedagogy, and using AI for professional development (AI competency framework for teachers | UNESCO). This framework illustrates that context matters – the AI skills needed by a teacher (e.g. how to use AI in teaching responsibly) differ from those needed by a software developer or an HR manager. UNESCO observed that “few countries have defined these competencies or developed programs to train teachers in AI, leaving many educators without proper guidance”. This gap in definition is not unique to education; across industries, organizations lack a clear roadmap of which skills to prioritize.

In business, some define AI skills narrowly as technical prowess. Industry surveys often count things like proficiency in Python, TensorFlow, or data modeling as “AI skills.” Yet, others argue every professional now needs basic AI literacy – not to code models, but to understand and leverage AI tools. As an executive, I've encountered this tension firsthand: our data science team might list “neural network optimization” as a critical AI skill, whereas our sales and operations leaders are more concerned that employees learn to use AI-driven analytics tools and make AI-informed decisions. Both views are valid in their contexts. The literature suggests moving toward multilayered definitions of AI skills. One useful approach is to distinguish between “AI Specialists” (who need deep technical skills) and “AI Augmented Workers” (who need complementary skills to work alongside AI). A systematic review of national AI skill policies across seven countries found a clear divide: some governments (e.g. the US, Singapore) push a broad, nationwide AI upskilling of all citizens, while others (e.g. China, Canada) focus on training a narrow cohort of AI experts. Interestingly, countries with broader, more inclusive AI education strategies rank higher on AI readiness indices than those taking an expert-only approach (Evaluating international AI skills policy: A systematic review of AI skills policy in seven countries | Global Policy Journal). The implication is that defining AI skills broadly and at multiple levels (basic, intermediate, advanced) can better prepare a nation or organization for AI.

Working Definition: For practical purposes, let's synthesize a working definition:

AI skills are the collection of technical, analytical, and cognitive competencies that enable individuals to effectively understand, develop, and collaborate with AI systems in their domain. These skills range from hard technical abilities (like coding machine learning algorithms or managing data pipelines) to AI literacy skills (such as using AI tools, interpreting AI outputs, and guarding against AI-related risks) and human-centric skills (like creativity, ethical judgment, and adaptability) that ensure AI is applied responsibly and innovatively.

This definition combines insights from both scholarly frameworks and business publications (Top 10 AI Skills You Need to Land Your Dream Job in 2024 | FDM Group UK) (Revealing an AI Literacy Framework for Learners and Educators – Digital Promise). It acknowledges that what counts as “AI skills” will differ by role and industry – but in all cases it involves a mix of knowledge about AI technologies, ability to apply AI in context, and meta-skills to continually learn and adapt as AI evolves.

Methodological Insights

Research into AI skills development is still in its early stages, and researchers have employed a variety of methodologies to study this moving target. Below are some common approaches and key insights from the literature:

  • Systematic Literature Reviews and Policy Analysis: Given the nascent state of the field, several scholars have started by reviewing existing literature and strategies. For instance, Rigley et al. (2023) performed a systematic review of AI skills policies across countries, as mentioned earlier, to identify common themes and gaps (Evaluating international AI skills policy: A systematic review of AI skills policy in seven countries | Global Policy Journal). Such reviews sift through government documents, industry reports, and academic studies to find out how different entities define and address the AI skills gap. The findings often reveal inconsistencies (e.g. varying definitions of AI proficiency) and help form theoretical frameworks for AI competencies. Another review by Pinski & Benlian (2024) looks at “AI literacy learning methods, components, and effects” to consolidate what educational research says about teaching AI literacy (though this is primarily in academic and user contexts). These reviews highlight that AI skills development is multidimensional – touching education systems, corporate training, and self-directed learning – and thus requires an interdisciplinary research lens.
  • Surveys and Quantitative Analyses: Much of the data on AI skills comes from surveys of employers and employees. Large consulting firms and organizations (McKinsey, World Economic Forum, LinkedIn, etc.) regularly survey how AI is impacting jobs and what skills are in demand. For example, a McKinsey survey found 72% of organizations are using AI in at least one function (AI In The Workplace: Innovation And Workforce Concerns – RamaOnHealthcare), illustrating the breadth of adoption. LinkedIn’s Economic Graph team analyzes member profiles to see skill trends; their recent report noted people are now “more than twice as likely to add AI skills to their profile than in 2018” (AI is shifting the workplace skillset. But human skills still count | World Economic Forum), and even fields like marketing or HR saw a 7-fold increase in AI skills listed on profiles in six years. Surveys of workers provide insight into readiness and fears: a 2023 Boston Consulting Group study covering 12,800 employees in 18 countries found 86% believed they would need AI training, but only 14% of frontline workers had received any upskilling so far (Employers Train Employees to Close the AI Skills Gap). Such figures quantify the AI skills gap – high anticipated need, low current training coverage. These quantitative methods are critical for identifying where interventions are needed most (e.g. frontline staff, or industries that are lagging). They also help measure progress over time (are more people acquiring AI skills year over year?).
  • Qualitative Interviews and Case Studies: To complement broad surveys, researchers and journalists are conducting deep-dive interviews within organizations that are leading in AI upskilling. A notable example is the Harvard Business Review (HBR) study “Reskilling in the Age of AI” which won the 2023 HBR Prize. The authors interviewed leaders at 40 companies worldwide that have invested in large-scale reskilling programs (“Reskilling in the Age of AI” Wins 2023 HBR Prize - News - Harvard Business School). Through these interviews, they identified “five paradigm shifts” in how leading organizations approach workforce training in the AI era. (We’ll explore those paradigm shifts shortly.) This qualitative approach surfaces best practices and common obstacles by hearing directly from executives, HR heads, and workers going through retraining. Similarly, case studies published in industry journals often profile a single company’s journey – for example, how a bank retrained its analysts to use AI tools, or how a tech startup built an AI-centric learning culture from scratch. As a CAIO, I find these narrative case studies extremely valuable: they reveal the human and organizational factors (change management, culture, leadership buy-in) that quantitative surveys can’t fully capture.
  • Experimental and Educational Research: There is emerging research on how to teach AI-related skills effectively. Some studies in education have used controlled experiments to test AI curricula – for instance, trials where one group of students is taught with an AI tutor or an AI concept curriculum and compared to a control group. In higher education and corporate training, we are beginning to see experimentation with different pedagogical approaches: bootcamps, online courses, hackathons, mentorship programs, etc. While large-scale randomized controlled trials in corporate AI upskilling are not yet common, companies do pilot programs that mimic an experimental approach (e.g. training one division in a new AI tool and comparing performance to an untrained division). Insights from learning science are being applied: what mix of theory vs. hands-on projects yields better retention of skills? How do we assess AI skill competency? In one innovative example from academia, a professor required students to use ChatGPT to draft an essay, but students were graded on their ability to improve upon the AI’s output, not the AI output itself (3 things we learned about AI and skilling from experts | World Economic Forum). This kind of exercise experimentally teaches critical evaluation – a key AI literacy skill – and could be adapted in corporate training (imagine a workshop where employees must critique and correct AI-generated business reports). Methodologically, such interventions can be evaluated for efficacy (did participants show improved understanding or productivity after the training?). Over time, we expect to see more rigorous studies evaluating AI training programs, especially as the urgency of the skills gap grows.
  • Skills Framework Development: Another research approach is the development of competency frameworks (often through Delphi studies or expert panels). We already discussed UNESCO’s framework for teachers. Likewise, Digital Promise, a U.S. nonprofit, convened experts to create an AI Literacy Framework for learners and educators in 2024 (Revealing an AI Literacy Framework for Learners and Educators – Digital Promise). This involved synthesizing existing research and likely gathering input from AI experts, educators, and psychologists to define categories of AI knowledge and skills. In industry, companies like Microsoft and Google have published their own AI skill guides (e.g. Microsoft’s AI Business School outlines essential AI competencies for business leaders). While these frameworks are not research studies per se, they often draw on internal research and are refined through iterations, which is a form of action research. They serve as theoretical models that can be tested and adjusted as organizations apply them.

Common Findings: Across methodologies, a few consistent themes emerge. First, the half-life of AI skills is short – the technology is evolving so rapidly that training content can become outdated within months. A recent report noted that GenAI (generative AI) tools are a “moving target and learning content often has a limited shelf life” (Employers Train Employees to Close the AI Skills Gap). This implies research must be continuous and agile; longitudinal studies need frequent checkpoints. Second, there is widespread agreement that soft skills and meta-learning skills (learning how to learn) are critical in the long run. Hard skills may change (today’s popular programming language or framework might not be tomorrow’s), but the ability to adapt, think critically, and make ethical decisions remains fundamental. Third, measuring AI skill acquisition is challenging. Some works attempt to create assessment scales – for example, researchers have developed “AI literacy scales” to quantify a person’s understanding of AI concepts (AI literacy for users – A comprehensive review and future research directions of learning methods, components, and effe…). But assessing how well an employee can collaborate with AI on the job may require new performance metrics or simulations. Methodologies will need to blend traditional testing with observation of on-the-job behavior to truly gauge AI competency. Finally, the research highlights an underlying truth: organizations themselves are experimental labs right now. Many companies are trying different training models; by documenting and studying these (through surveys, case studies, etc.), we collectively learn what works best. In my own role, I often feel like both a practitioner and a researcher – we try something new in our AI upskilling program and closely observe outcomes, contributing in our own small way to the growing body of knowledge on AI skills development.
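To make the assessment idea concrete, here is a minimal sketch of how a Likert-style literacy survey might be scored by category. The items, category groupings, and function names are entirely illustrative – they are not drawn from any published AI literacy scale:

```python
# Hypothetical AI literacy self-assessment: each item is rated 1-5 (Likert scale).
# Items and category groupings below are illustrative, not from a validated instrument.
ITEMS = {
    "understands_ai_limits": "technical_understanding",
    "can_write_effective_prompts": "usage",
    "verifies_ai_outputs": "critical_evaluation",
    "recognizes_bias_risks": "ethics",
}

def literacy_profile(ratings: dict) -> dict:
    """Average the 1-5 ratings within each category to get a per-category score."""
    totals = {}
    for item, rating in ratings.items():
        if not 1 <= rating <= 5:
            raise ValueError(f"Rating for {item} must be between 1 and 5")
        totals.setdefault(ITEMS[item], []).append(rating)
    return {category: sum(vals) / len(vals) for category, vals in totals.items()}

profile = literacy_profile({
    "understands_ai_limits": 4,
    "can_write_effective_prompts": 2,
    "verifies_ai_outputs": 5,
    "recognizes_bias_risks": 3,
})
print(profile)  # one score per category, surfacing gaps (here, weak "usage")
```

A per-category profile like this is more actionable for training design than a single composite score, since it points to which competency area (usage, ethics, evaluation) needs intervention.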

Contrasting Examples and Case Studies

To ground this discussion, let’s examine how different industries and organizations are grappling with defining and cultivating AI skills. From global enterprises to startups, there is no one-size-fits-all approach — but there are lessons to learn from each.

1. Energy/Oil & Gas – Royal Dutch Shell: In traditionally non-tech industries like oil and gas, AI still has transformative potential (for optimizing drilling, predictive maintenance, etc.), and companies recognize their workforce needs new skills. A notable case is Royal Dutch Shell, which in 2021 expanded an internal program to teach AI skills to employees. Early on, 2,000 of Shell’s 82,000 employees had already signed up or were tapped to participate (People Matters: Outlook 2021 - January 2021 by People Matters - Issuu). This initiative is striking for a few reasons. First, it was optional enough to attract thousands of volunteers, indicating significant employee interest in AI even in roles not historically associated with tech. Second, Shell did not limit training to data scientists; it aimed to upskill people across various job families – from engineers to HR professionals – reflecting a broad approach. The program’s content reportedly ranged from basic AI literacy modules to more advanced courses for technical staff. Shell’s example illustrates how an industrial company identified AI as a strategic priority and moved early to build internal capability, even before having absolute clarity on which AI skills would be most useful. They essentially bet that a baseline understanding of AI across the workforce would pay off as more AI projects rolled out. It’s a case of building the skill plane while flying it. Challenges likely included tailoring the curriculum to different education levels and proving ROI to management. Yet, by treating AI upskilling as a company-wide movement, Shell signaled that every employee has a role in the AI future, not just the IT department.

2. Professional Services – PwC: In consulting and professional services, the workforce is knowledge-driven, and AI is both a tool for internal efficiency and a product to offer clients. PwC (PricewaterhouseCoopers), one of the “Big Four” firms, made headlines with a $1 billion investment in AI, including a plan to upskill all 65,000+ of its employees in AI over three years (PwC brings private generative AI tool to internal employees | CIO Dive). PwC’s approach is multifaceted. They developed an internal generative AI tool called “ChatPwC” (built on OpenAI models) to assist employees in tasks like research and drafting content. Crucially, along with deploying this tool, PwC implemented training so that employees actually learn how to use it effectively and safely. Their learning team is creating role-specific AI training modules – recognizing that a software engineer, an auditor, and a tax consultant will use AI differently. For example, technologists get deeper training on integrating AI into products, while client-facing staff learn to use AI for data analysis, reports, and automating routine documentation. PwC’s leadership explicitly stated that most employees won’t need to know the technical nitty-gritty like configuring AI model architectures, but everyone needs to know how to ask good questions of AI and spot errors or hallucinations. They identified prompt engineering and AI result validation as universal skills to cultivate across the firm. To encourage adoption, PwC created a volunteer corps of “AI Super Users” – tech-savvy employees in each department who coach others and share successful use cases (a strategy reported in a Fortune article on their upskilling efforts). The firm also has an internal certification for AI skills, adding an element of gamification and credentialing. PwC’s case demonstrates a very proactive, leadership-driven strategy: they treat AI competency as a core element of professional competency moving forward. 
It also highlights an important point for executives: providing AI tools without training is a recipe for under-utilization or misuse. PwC is ensuring training goes hand-in-hand with tool rollout. The outcome they seek is an AI-confident workforce that can both serve clients with new AI solutions and improve internal operations. Other consulting firms (Deloitte, EY, etc.) have launched similar initiatives, but PwC’s scale ($1B investment) sets a benchmark.

3. Technology – Amazon Web Services (AWS): It’s no surprise that tech companies are both creators and early adopters of AI skills training. Amazon, particularly through its AWS cloud division, has a view not only into its own workforce needs but also those of its millions of customers. In late 2023, Amazon announced a program called “AI Ready” – a commitment to provide free AI skills training to 2 million people globally by 2025 (New Amazon AI initiative includes scholarships, free AI courses). This program isn’t limited to Amazon’s employees; it targets the broader public and AWS customers, reflecting Amazon’s recognition that a lack of AI skills in the market could slow down AI adoption (and hence, cloud usage). An AWS-led study found 73% of employers prioritize hiring for AI skills, but three in four of those employers can’t find the talent they need. Moreover, employers estimated that employees who upskill in AI can earn up to 47% higher salaries – a huge wage premium that underscores the demand. With AI Ready, Amazon is rolling out free online courses on generative AI, scholarships for students to learn AI, and collaborations with nonprofits (like Code.org to teach AI in K-12). For business executives, Amazon’s initiative sends a clear signal: the talent pipeline for AI is insufficient, and industry leaders are investing in creating more AI-literate workers. It also indicates what they consider key AI skills for the future: the curriculum emphasis is on generative AI, which means understanding how to use models like ChatGPT, basic ML concepts, and cloud-based AI services. From Amazon’s perspective, an “AI-skilled” person might be someone who can build AI solutions on AWS or at least integrate them into workflows. This is a slightly more product-centric definition, but given AWS’s influence, it could shape how other companies define the skill set. 
Additionally, Amazon’s use of a public training program can be seen as a case study in ecosystem approach – they are not just upskilling their own workforce, but also clients and communities, aligning with the idea that reskilling for AI is “best addressed as part of an ecosystem,” not isolated within one firm (Reskilling in the Age of AI).

4. Startups and SMEs – Mineral (HR Services): Startups have smaller teams and often less formal training infrastructure, but they can be more agile in experimenting with AI skill development. Consider Mineral, a human resources tech company referenced in a recent HR magazine article. Mineral embraced generative AI and wanted its client services team to leverage it. With limited resources, they developed a six-step prompt engineering framework to train their team to interact with AI (ChatGPT) effectively (Employers Train Employees to Close the AI Skills Gap). The steps included guidelines like “Establish the persona for the AI, provide context, define the task, specify the audience, state the desired output, and format”. By codifying prompt-writing into a simple checklist, Mineral gave non-engineer employees a practical, immediately usable skill. They supplemented this with twice-weekly Slack discussions where employees shared tips and issues they encountered using AI at work. This peer-learning approach is common in startups – it creates a grassroots knowledge base and keeps training continuous without heavy budgets. Mineral’s case is also instructive because it highlights micro-skills within AI literacy (in this case, prompt engineering) that can greatly improve productivity. The company reported significant time savings in content production after workers learned to co-create with AI (Employers Train Employees to Close the AI Skills Gap). Another startup example is one shared anecdotally in a SHRM report: an employee learned to use ChatGPT by asking it to explain complex topics (like wormholes) and by challenging it with tasks like meal planning. This self-directed exploration was later harnessed by the company into a “prompt library” – a repository of successful prompts for common tasks. Startups often encourage this kind of experimentation, making employees active participants in defining what AI skills matter. 
The lesson for larger organizations is to foster a similar culture of experimentation and knowledge sharing, even if more structure is added.
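Mineral's six-step checklist lends itself naturally to a reusable template. The sketch below assembles a prompt from the six components; the class, field names, and example content are my own illustration of the idea, not Mineral's actual tooling:

```python
from dataclasses import dataclass

@dataclass
class PromptSpec:
    """The six prompt components from Mineral's checklist (field names are illustrative)."""
    persona: str   # 1. Establish the persona for the AI
    context: str   # 2. Provide context
    task: str      # 3. Define the task
    audience: str  # 4. Specify the audience
    output: str    # 5. State the desired output
    fmt: str       # 6. Format

    def render(self) -> str:
        """Assemble the six components into a single prompt string."""
        return (
            f"You are {self.persona}. "
            f"Context: {self.context}. "
            f"Task: {self.task}. "
            f"Audience: {self.audience}. "
            f"Desired output: {self.output}. "
            f"Format: {self.fmt}."
        )

# Hypothetical HR-services example in the spirit of Mineral's use case
prompt = PromptSpec(
    persona="an HR compliance advisor",
    context="a 50-person retail company in California",
    task="summarize the new overtime rules",
    audience="store managers with no legal background",
    output="the three most important changes and required actions",
    fmt="a bulleted list under 200 words",
).render()
print(prompt)
```

Codifying the checklist as a fill-in-the-blanks structure is exactly what makes it teachable to non-engineers: employees only decide the six inputs, and the consistent scaffolding does the rest.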

5. Healthcare – Upskilling Clinicians: In healthcare, the adoption of AI (for diagnostics, predictive analytics, administrative automation) is accelerating, but clinicians and staff often lack training in these tools. Consider a hypothetical large hospital system (a Mayo Clinic-scale organization) that introduces an AI-driven decision support system for radiologists. Instead of assuming the highly trained doctors would just figure it out, the hospital would create a program to teach AI literacy for clinicians – covering how the AI system works, its limitations, and how to interpret its recommendations alongside one’s own expertise. Early research indicates that simply implementing AI isn’t enough; clinicians need to trust and understand the system. Studies in medical education suggest that when doctors are given short AI training workshops, their acceptance of AI diagnostic suggestions improves, and they also become more alert to when the AI might be wrong. This balance is crucial in fields where stakes are high. Some healthcare organizations are partnering with universities to offer mini-courses on AI in medicine, focusing on topics like data ethics (for patient data), how algorithms are validated, and even basic coding for interested physicians. While healthcare lags sectors like finance or tech in AI skill readiness, it exemplifies the need to customize AI competencies to professional roles. A nurse might not need to know how to tweak a neural network, but should know how to “safely and effectively” use an AI-enabled device – yet 62% of desk workers (many in healthcare) say they don’t have the skills to do so safely (Employers Train Employees to Close the AI Skills Gap). The healthcare case underscores that domain knowledge + AI awareness is the winning combo; training must bridge medical knowledge with AI tool use.

These examples – spanning heavy industry, professional services, big tech, startups, and healthcare – show a panorama of approaches. A Fortune 500 manufacturer might take a very structured, top-down approach (like Shell or PwC), whereas a lean startup tries quick, iterative training hacks (like Mineral’s prompt playbook). An important contrast is between optimizers and innovators: Some programs aim to optimize current jobs (make people more efficient at what they already do with AI assisting), while others aim to innovate and create new roles (train people for entirely new AI-centric jobs, like an “AI ethics officer” or “automation workflow designer”). Both are needed, but companies lean one way or the other depending on strategy. As an executive, benchmarking across these cases helps identify what might suit your organization’s size, culture, and sector.

Analysis and Discussion

Bringing together the literature and real-world cases, we can now analyze the key issues and debates surrounding AI skills development. It’s clear that “AI skills” is a fluid concept – one that is continually refined by technological change, business needs, and even political pressures. Below, we dissect some of the major controversies and insights that emerged.

The Skills Scope Dilemma – Technical vs. Human Skills: One of the most prominent debates is whether AI skills should primarily mean technical expertise (coding, data science, model building) or complementary human-centric skills (domain expertise, critical thinking, creativity that AI cannot replicate). There is a tension here, often reflecting different time horizons. In the short term, there is indeed a talent crunch for technical AI experts – companies struggle to fill roles for machine learning engineers, data engineers, AI researchers. This drives a lot of the urgency behind STEM education, coding bootcamps, and programs like Amazon’s which hint at creating more developers for AI solutions. The promise of a 47% salary boost for AI-proficient employees (New Amazon AI initiative includes scholarships, free AI courses) also incentivizes individuals to pursue technical AI credentials. However, not everyone can or should become a programmer, and organizations don’t need a majority of employees writing code. In the longer term, as AI tools become more user-friendly (e.g. no-code AI platforms), the premium on pure technical skill may decrease, while the premium on judgment and hybrid skills will rise. This is why the World Economic Forum emphasizes that soft skills dominate the emerging skills list for 2023–2027 (3 things we learned about AI and skilling from experts | World Economic Forum) – attributes like analytical thinking, resilience, flexibility, and empathy are hard to automate and will differentiate human workers. Our working definition of AI skills included both sides, but companies might prioritize differently based on context. In boardroom discussions, I often field questions like: “Should we train everyone in Python, or is that a waste?” My perspective is that basic coding understanding can be beneficial (it teaches logic and what AI is capable of), but beyond a certain point, it has diminishing returns for many roles. 
Instead, ensuring that employees understand how AI affects their function, and can creatively apply AI, yields more value. An optimistic view (shared by many tech evangelists) is that AI will handle the drudgery, freeing humans to focus on high-level problem solving – but that only works if humans are trained in those higher-order skills and not just rote tasks. A skeptical take is that focusing on soft skills is an excuse to avoid the harder task of teaching technical competency. The reality is a balance: T-shaped professionals are ideal – with depth in one area (maybe technical or maybe domain-specific) but also a broad understanding of AI’s possibilities and limitations.

Changing Paradigms in Training – From HR Initiative to Strategic Imperative: The HBR study on reskilling provides a useful framework to discuss how organizations’ mindset must shift. They identified five paradigm shifts – essentially, new assumptions replacing old ones – among companies successfully upskilling for AI (Reskilling in the Age of AI). Paraphrasing their findings in a simplified form:

  • Old paradigm (legacy thinking): Reskilling is the HR department’s job. → New paradigm (frontier thinking): Reskilling is every leader’s and manager’s responsibility. (AI upskilling gets embedded in line functions, not siloed in HR.)
  • Old: Employees must be forced or convinced to reskill. → New: Employees want to reskill if the offer is good (Reskilling in the Age of AI). (When shown the personal benefit and given support, staff are eager to learn AI skills.)
  • Old: It’s each company’s own problem to solve. → New: Reskilling is best done as part of an ecosystem. (Partnerships with educators, industry consortia, and even competitors can share the burden of AI training.)
  • Old: Reskilling is a one-time training initiative. → New: Reskilling is a continuous change management process (Reskilling in the Age of AI). (It requires cultural change, new workflows, and ongoing learning, not just a workshop.)
  • Old: Retraining is a nice-to-have CSR effort for redundancies. → New: Reskilling is a core strategic initiative tied to business outcomes. (It’s about staying competitive, not just philanthropy for displaced workers.)

This paradigm shift speaks directly to executives: to truly build AI capabilities, training can’t be an afterthought or merely delegated to HR to figure out. Leadership from the top is needed to champion a learning culture. Many companies still treat AI training as a reactive measure (“we’ll train people when AI starts taking their jobs”). Forward-thinking companies treat it as proactive: train people so they can drive the AI transformation. One concrete example is how some companies now include AI skill development in managers’ performance KPIs – e.g. a goal that “80% of my team completes the AI literacy program this quarter,” making it a management responsibility to encourage learning. The idea of ecosystem is also gaining traction: collaborations like IBM’s SkillsBuild initiative or cross-company alliances to train workers in certain regions indicate that solving the AI skills gap may require collective action (similar to trade associations in the past addressing skilled labor shortages by setting up joint training centers).

From my experience at SYRV, when we first rolled out AI training modules, uptake was slow until we reframed it as part of our strategic roadmap and had division heads personally endorse and participate in the training. That leadership signaling made a huge difference – an insight aligning with the HBR findings. We also found that volunteers within teams became ambassadors, echoing PwC’s “AI Super User” approach. The takeaway: building AI skills is as much about organizational development as it is about curriculum. It must be woven into change management efforts, otherwise employees may see it as a flavor-of-the-month initiative that they can wait out.

The Fear and Skepticism Factor: Not everyone is gung-ho about adopting AI skills. A significant portion of workers are anxious that becoming “AI-skilled” could be a double-edged sword, making them more replaceable. A Microsoft survey revealed 53% of workers fear that if they use AI for their job, it might make them seem redundant to their employer (AI In The Workplace: Innovation And Workforce Concerns – RamaOnHealthcare). This presents a psychological barrier: employees might resist or half-heartedly engage with AI tools if they interpret “use AI” as “automate yourself.” Meanwhile, executives are largely optimistic about AI’s ability to boost productivity and profits. This optimism gap can create workplace tension. If managers push AI adoption without addressing job security concerns, they may face quiet pushback or ethical dilemmas (e.g. employees feeding confidential data to AI tools in insecure ways because they haven’t been properly trained – a real issue some firms have encountered). Skeptical perspectives warn that AI skill campaigns could be viewed cynically by employees: imagine a worker thinking, “Why would I train the robot to do my work? I’m just training my replacement.” As leaders, we must confront this concern head-on. The literature suggests transparency and involvement are key. A World Economic Forum report counsels that companies should openly discuss how AI will impact roles and “include employees in the AI journey” so they feel part of the transformation rather than run over by it (Employers Train Employees to Close the AI Skills Gap). Training programs should emphasize that acquiring AI skills will help workers advance and take on more interesting tasks, not simply help the company cut jobs.
In fact, many companies, including SYRV, have adopted a policy of no layoffs due to AI automation for a certain period; instead, any efficiency gains free up people for new projects, and those who reskill are first in line for those opportunities. This kind of policy, combined with showcasing success stories (e.g. an employee who learned AI tools and then got a promotion or role enhancement out of it), can turn skepticism into cautious optimism.

Another skeptical view questions the hype around specific AI skills. For example, much is made of “prompt engineering” today – there are even job postings for “Prompt Engineer.” Some tech pundits argue this will be a short-lived skill, as AI models will get better at understanding intent without elaborate prompts (or interfaces will abstract it away) (There's no future for “Prompt Engineers” because we're all going to ...) (The Death of Prompt Engineering: The Dawn of User-Driven AI ...). If that’s true, investing heavily in prompt engineering training might be like investing in typing classes – useful, but hardly a long-term strategic skill on its own. Others disagree, believing that forming good queries will remain essential as a form of critical thinking in the AI age, just as knowing how to Google effectively is still useful even as search algorithms have improved. The broader point is: which AI skills are transient fads and which are durable? This is an area ripe for further research and debate. It likely depends on the trajectory of technology. If AI evolves towards more automation of AI development itself (e.g. AI systems that can self-improve or automate programming), then human roles will shift even more towards defining problems, governing AI, and applying insights – and away from manual model-tuning. But that future is speculative; for now, there is still plenty of need for humans in the loop at all levels.

Cross-Industry Perspectives: Different industries also have contrasting perspectives on what AI skills mean for them. In finance, for instance, AI skills might emphasize data analytics, algorithmic trading knowledge, and compliance (due to strict regulations), whereas in manufacturing, the emphasis might be on robotics, IoT data interpretation, and maintenance of AI-driven machines. In creative industries (marketing, media), AI skills include using AI for content generation but also strong creative judgment to edit and guide AI outputs. An optimistic view in creative fields is that “AI is a creative collaborator that can spark new ideas, so learn to dance with the algorithm.” A more cautious view is “AI might flood the market with mediocre auto-generated content, so human creatives must upskill to produce truly original, high-quality work and to curate AI output”. Both imply learning to use AI tools, but also honing the human craft to stand out.

Interestingly, even within a single industry, the approach can differ by company culture. Tech-forward companies might encourage all employees to get hands-on with AI (like having hackathons where even non-tech staff build simple AI prototypes), whereas more traditional companies might restrict AI development to a specialized team and just train others on awareness. The risk with the latter approach is a silo effect – non-tech departments remain ignorant or fearful of AI. Best practice seems to be an inclusive approach, as suggested by national policies of upskilling all citizens (Evaluating international AI skills policy: A systematic review of AI skills policy in seven countries | Global Policy Journal). This doesn’t mean everyone becomes a data scientist, but everyone should get some foundational AI literacy education. Singapore, for example, launched an initiative to educate every secondary student on AI basics, and to provide AI courses to mid-career adults in various professions. Corporations can mirror this by having tiered training: an “AI 101” mandatory for all employees (covering concepts, opportunities, and risks), and more advanced electives for those who want to dive deeper or whose jobs require it.

Ethical and Societal Implications: No discussion at the executive level would be complete without considering the ethical dimension. One conflict is between speed and diligence: the business drive to implement AI quickly (and thus train people quickly) versus the need to ensure proper understanding of ethical use. If employees learn how to use AI tools without an appreciation for privacy, fairness, and security, the organization could face serious risks – from biased AI decisions harming customers to leaks of sensitive data. The SHRM article noted that 70% of business leaders didn’t believe their teams had the skills to use GenAI safely (Employers Train Employees to Close the AI Skills Gap), reflecting a gap in ethical AI skills. Encouragingly, many training programs now include modules on responsible AI. For example, Microsoft’s free AI skills courses cover topics like AI transparency and avoiding “garbage in, garbage out” problems. At SYRV, we introduced scenario-based ethics training: employees get a scenario (e.g. an AI tool suggests rejecting a loan application from a certain demographic) and must decide how to respond, then we discuss best practices (like checking for bias in the training data or algorithm). These discussions build the muscle of ethical reasoning in tandem with technical know-how. The optimistic perspective is that if we train our workforce well on AI ethics now, they will become the frontline defense against unintended negative consequences of AI. The pessimistic view is that ethics training might be treated perfunctorily, or that pressures to deliver results with AI will override cautious approaches. It’s up to leadership to ensure that “AI skills” includes the skill of knowing when not to use AI and how to question AI outcomes – and that employees are empowered to raise concerns if AI outputs seem wrong or unjust.
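To make the loan-scenario bias check concrete for training purposes, here is a minimal sketch of a demographic-parity check. The data, group labels, and the 20% tolerance are all hypothetical illustrations, not SYRV's actual tooling or a regulatory standard; real fairness auditing involves many more metrics and legal considerations.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute per-group approval rates from (group, approved) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan decisions: (demographic group, approved?)
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]

gap = demographic_parity_gap(decisions)
if gap > 0.2:  # illustrative tolerance, not a regulatory threshold
    print(f"Review model: approval-rate gap of {gap:.0%} across groups")
```

Walking non-technical employees through even a toy check like this demystifies what "auditing for bias" means in practice, which is exactly the kind of ethical-plus-technical fluency the training aims for.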

Adaptability – The Ultimate AI Skill: Perhaps the only truly future-proof “AI skill” is the ability to keep learning. Executives should recognize that the set of important AI-related skills will shift, possibly drastically, over the next decade. For example, if today’s focus is on ML model development and prompt engineering, tomorrow’s might be on human-AI teaming skills, or understanding AI law and policy as regulations tighten, or working with AI assistants as collaborators. The WEF panel in late 2024 hammered this point: education and training systems need to respond faster to the pace of AI (3 things we learned about AI and skilling from experts | World Economic Forum). Academia is slow, but companies can’t afford to be. We need to instill a culture of lifelong learning. In practical terms, that might mean giving employees access to ongoing learning platforms (like Coursera, Udacity, etc. where content is updated continuously) and encouraging a growth mindset. It might also mean rethinking hiring: hiring for “learnability” and adaptability, not just specific skills, since the specific skills might expire. Some organizations are already testing for this in recruitment – assessing how quickly someone can pick up a new tech tool, which is arguably more relevant than testing for a static skill. In a hypothetical scenario, imagine it’s 2030 and a completely new AI paradigm emerges (say quantum AI or advanced autonomous agents). The companies that will thrive are those whose employees can swiftly acquire the necessary new skills. This underscores why executives should invest in building meta-skills like learning agility and interdisciplinary thinking as part of their AI skills programs.

In summary, our analysis finds that AI skills development is as much a moving target as AI itself. Definitions are being refined through practice; optimistic and skeptical voices both offer valid cautions; and success requires navigating technical training, human factors, cultural change, and ethical safeguards. I often summarize it to my peers this way: we must teach people how to fish in AI’s ocean, not just give them a fish. That means equipping them with fundamental understanding, tools for ongoing learning, and the confidence to ride the waves of change. It also means acknowledging what we don’t know about the future and staying flexible. With that in mind, let’s consider some implications and recommendations at the executive level.

Executive-Level Depth and Recommendations

For business leaders, the challenge of defining and developing AI skills in their organizations is an opportunity to rethink talent strategy in the digital age. This section presents nuanced considerations and actionable recommendations, stepping beyond the basics into strategic territory.

Questioning Assumptions: Leaders should start by challenging some common assumptions. One assumption is that you need a massive hiring spree of AI experts to become AI-driven. In reality, studies and success stories suggest that upskilling existing employees can often be more impactful than only hiring new talent. Your people have deep institutional knowledge; adding AI literacy to their skillset can unlock AI use cases that an external hire might not spot. Another assumption is “young employees are automatically AI savvy.” While digital natives may be quick to adopt new apps, don’t assume your Gen-Z staff inherently understand enterprise AI or the intricacies of data privacy. A recent report noted that even tech-savvy workers feel unsure about using AI at work without guidance (Employers Train Employees to Close the AI Skills Gap). Conversely, don’t assume veteran employees cannot learn these skills – often they just need to be shown the relevance. At SYRV, one of our best AI adoption stories came from a 30-year operations veteran who initially resisted AI tools, but after training, she used a generative AI to optimize inventory descriptions, saving hours and mentoring younger colleagues on supply chain nuances the AI didn’t grasp. This kind of cross-pollination is gold.

Tiered Skills Development: Not all roles need the same depth of AI skills, but everyone needs something. Executives can implement a tiered training structure:

  • Level 1: AI Awareness for All: A mandatory program for every employee (including top leadership) that covers what AI is, core concepts (machine learning, generative AI, etc.), company policy on AI use, and examples of how AI can augment work in different departments. This demystifies AI and creates a common vocabulary. It also should address fears (highlighting that the company sees AI as a tool, not a threat to jobs).
  • Level 2: AI Practitioner Skills: Targeted at roles that will directly use AI tools regularly – for instance, marketing staff learning to use AI for campaign analysis and content drafting, or finance analysts learning AI-driven forecasting. This training is more hands-on, perhaps involving workshops or simulations relevant to their job. The focus is on workflow integration: how do I incorporate AI into what I do? It includes effective use (e.g. how to write a good prompt or how to interpret a model’s output) and pitfalls to avoid.
  • Level 3: Advanced/Technical Skills: Aimed at the technical teams and interested individuals who want to deepen their expertise. This might mean formal courses in data science, machine learning engineering, MLOps (machine learning operations), or AI security. Often, a subset of employees might pursue certifications or even graduate degrees (some companies sponsor an online master’s in AI for high-potential staff). This level ensures you have in-house capability to build and maintain AI solutions, reducing reliance on external vendors. It’s crucial for your data/IT teams, but also valuable to have a few domain experts (e.g. a supply chain manager who learns advanced analytics) go through it to act as “translators” between pure tech and business units.

This tiered approach aligns with what experts advocate – a fluency across the organization with deeper expertise where needed (Employers Train Employees to Close the AI Skills Gap). It prevents the scenario of front-line employees saying “AI? That’s not my concern,” which could slow adoption. Instead, everyone sees their place in the AI competency continuum.
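One way an L&D team might operationalize the three tiers is a simple role-to-tier mapping that defaults everyone to Level 1. The role names and tier rules below are hypothetical examples for illustration, not a prescribed taxonomy:

```python
# Illustrative sketch: assigning employees to training tiers.
# Role names and tier membership are hypothetical examples.

TIER_RULES = {
    3: {"data scientist", "ml engineer", "data engineer"},               # advanced/technical
    2: {"marketing analyst", "finance analyst", "supply chain manager"}, # practitioner
}

def training_tier(role: str) -> int:
    """Everyone gets at least Level 1 (AI Awareness); listed roles go deeper."""
    role = role.lower()
    for tier in (3, 2):
        if role in TIER_RULES[tier]:
            return tier
    return 1  # default: AI Awareness for All

for role in ("ML Engineer", "Finance Analyst", "Receptionist"):
    print(f"{role} -> Level {training_tier(role)}")
```

The design point is the default: no role maps to "no training," which enforces the principle that everyone needs something.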

Investment and ROI Considerations: Training at this scale requires investment, so executives will ask: what’s the return? While it’s challenging to precisely quantify ROI on skill-building, some markers can justify the spend. Look at productivity improvements – e.g., did report generation time drop after introducing an AI tool that people were trained on? One company found content production time fell by hundreds of hours after GenAI training (Employers Train Employees to Close the AI Skills Gap). Look at innovation metrics – has the number of AI-driven project ideas from employees increased? (Perhaps count internal proposals or hackathon prototypes.) Also consider talent retention and attraction: companies known for upskilling may retain staff who feel valued and see a growth path, and attract new hires eager to learn. An executive can frame training costs as an investment in not having to hire as many external experts at sky-high salaries. There’s evidence that employees value skill development opportunities highly – making your AI training program part of the employer value proposition can pay back in retention. Still, it’s wise to set some KPIs for your AI skills program: e.g. target X% adoption of AI tools in workflows, Y number of use cases implemented by teams post-training, improvement in employee confidence survey scores on AI topics, etc. Tracking these focuses the effort and lets you course-correct.
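The KPI tracking described above can be as lightweight as a shared scorecard. As a sketch, the metric names and targets below are hypothetical examples, not recommended benchmarks:

```python
# Illustrative KPI scorecard for an AI skills program; metric names
# and targets are hypothetical examples, not recommended benchmarks.

from dataclasses import dataclass

@dataclass
class Kpi:
    name: str
    target: float
    actual: float

    @property
    def met(self) -> bool:
        return self.actual >= self.target

kpis = [
    Kpi("AI tool adoption rate", target=0.60, actual=0.72),
    Kpi("Use cases shipped post-training", target=10, actual=7),
    Kpi("Employee AI-confidence score (1-5)", target=3.5, actual=3.8),
]

for k in kpis:
    status = "on track" if k.met else "needs course-correction"
    print(f"{k.name}: {k.actual} vs target {k.target} -> {status}")
```

Even a crude scorecard like this turns "did the training work?" from a matter of opinion into a quarterly review item, which is the point of setting KPIs in the first place.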

Continuous Learning Infrastructure: One recommendation is to create an internal AI Academy or Center of Excellence (CoE). This doesn’t have to be a physical school, but a dedicated team or platform that curates learning resources, tracks emerging AI trends, and keeps the content updated. Considering how fast AI evolves, a static training from a year ago might be outdated (as noted, content shelf-life is short (Employers Train Employees to Close the AI Skills Gap)). An AI Academy can issue regular micro-updates – e.g. a 1-hour webinar on “New GPT-4 Features this month and how we might use them,” or an internal newsletter highlighting a useful new tool (with caveats on approved usage). Some firms also establish communities of practice: cross-functional groups that meet to share AI learnings. For example, invite all “AI champions” from each department to a monthly roundtable. This creates internal networks of expertise.

Leverage External Partnerships: No company has all the answers internally. Partnering can accelerate learning. Collaborate with tech companies – Microsoft and LinkedIn, for example, offered free AI courses to millions globally (Employers Train Employees to Close the AI Skills Gap - SHRM) – and have your employees take those as part of development plans. Partner with universities for customized executive courses or certification programs for your staff (as some have done for data science). Engage with industry groups or chambers of commerce to share knowledge on workforce transformation. I also suggest tapping into open communities – many AI professionals share knowledge on forums and open-source platforms. Encouraging interested employees to contribute to open-source AI projects (with company support) can be powerful experiential learning and also a branding boost (your company gets known in tech circles).

Scenario Planning – Future Skills: Executives should engage in scenario planning for what AI advancements could mean for skills in 3, 5, 10 years. Ask your tech experts: If general AI or significantly more advanced AI arrived, what skills would our people need that we haven’t considered? One hypothetical scenario: AI becomes capable of generating entire software systems from natural language (far beyond today’s code assistants). In that world, the role of a developer changes to more of a systems architect and quality controller. Are we training people for that possibility – to be good at specifying requirements and reviewing AI-generated work? Another scenario: regulation of AI becomes stringent (much like finance is regulated). Suddenly skills in compliance and documentation of AI decisions become crucial. Would you have those in place? By brainstorming scenarios, you might identify a “no regrets” skill – something useful no matter what. Adaptability is one we identified. Data literacy is likely another; regardless of what AI can do then, understanding data will remain foundational. Emotional intelligence and leadership – if AI handles more analyses, humans will focus more on motivating teams, negotiating, and other interpersonal skills. Ensuring your leadership development evolves in parallel with technical training is key (AI-savvy employees still need AI-savvy leaders to guide them, meaning leaders must know enough to ask the right questions and set direction).

Fostering an AI-Ready Culture: Culture eats strategy for breakfast, as the saying goes. All the training in the world may falter if the company culture doesn’t encourage using those new skills. Executives should cultivate an environment where experimenting with AI is rewarded, not penalized. One practical tip: incorporate AI objectives into projects – e.g., for every project, ask “have we considered an AI-based approach here?” This signals teams to apply their skills. Celebrate wins where AI contributions made a difference (share those stories company-wide). Also, encourage prudent risk-taking: employees should not fear blame if an AI experiment doesn’t yield fruit, as long as they learn from it. I often share a mantra: Think big, start small, scale fast. Use that first small success of AI skill application to build momentum.

Further Research and Staying Informed: For the executives themselves, staying educated is vital. Allocate time for you and your top team to regularly update your own understanding of AI trends – whether by attending industry conferences, enrolling in an executive AI course, or simply having weekly briefings from your internal experts. Encourage your HR and L&D (learning & development) teams to track outcomes of your AI training – this can feed into internal research. Perhaps partner with academics to study your workforce transformation as it happens (some companies allow researchers to survey employees during a big upskilling push, yielding publishable insights and giving the company external analysis on what’s working). There is also a need for more benchmarking data: it would be useful to know, for example, what mix of skills similar companies are targeting, or what proficiency in certain AI tools correlates with better performance. Such research could come from industry associations or be commissioned. As leaders, advocating for more research on AI in the workforce (even policy research for education reforms) is a way to shape the talent landscape long-term.

In embracing AI skills development, leaders might draw inspiration from past transitions – think about the rise of personal computing in the 1980s-90s. Companies that trained their people early to use computers (and not fear them) leapt ahead. AI could be analogous but on a larger scale. Yet, unlike learning a specific office software, AI is a moving target, more like learning an evolving language than a fixed tool. That calls for humility and agility at the top. We should be willing to adapt our training programs every year as needed. Executive buy-in and adaptability set the tone for the whole organization.

To conclude this analysis: AI skills are not a checklist to be completed, but a capability to be continually nurtured. Leading companies treat it as a journey, not a destination. I’ve learned that it's okay not to have a perfect definition of AI skills from day one. What’s important is to actively engage in defining it as you go – in collaboration with your workforce. Listen to what skills employees feel they need, observe how technology is changing your industry, and remain flexible in your strategy. In doing so, you create an organization that learns and evolves – the ultimate competitive advantage in the age of AI.

Conclusion

The drive to teach "AI skills" is now a strategic imperative for businesses across the spectrum. Through this exploration, we found that while “AI skills” lack a singular definition, they can be understood as a multifaceted blend of technical, analytical, and human-centric capabilities that enable an organization to thrive alongside artificial intelligence. We examined how literature from academic, corporate, and policy domains converges on the idea that everyone from the C-suite to the front line needs to elevate their understanding of AI, even if at different levels of depth. At the same time, we uncovered problematic inconsistencies: different industries and experts define AI skills in conflicting ways – some focusing narrowly on coding and data science, others on critical thinking and ethical usage. This inconsistency has led to confusion in the marketplace of training and certifications, and it reinforces the opening observation that “nobody knows exactly what AI skills are…for the future.”

However, by synthesizing current research and real-world case studies, we arrived at a clearer picture. AI skills encompass more than just knowing how to use a tool; they include understanding underlying concepts, applying judgment, collaborating with AI, and continuously learning. The key conflicts in the field – such as specialist vs. generalist training, optimism vs. fear, speed of adoption vs. diligence – highlight that developing AI competency is not just a technical endeavor, but also a human one.

For business executives and owners, the implications are profound. It is not enough to hire a few AI experts or license an AI platform; one must cultivate an AI-ready workforce and culture. This means investing in education and training, yes, but also aligning incentives, leadership, and strategy to support the ongoing evolution of skills. Organizations that treat AI skills development as a holistic change program (backed by leadership commitment and measured outcomes) will likely outperform those that approach it haphazardly or narrowly. The examples of Shell, PwC, Amazon, and others show that those taking bold, comprehensive action are already reaping benefits like improved efficiency, innovation, and talent retention.

Actionable Recommendations: Based on our review and analysis, here are some concrete steps for executives:

  • Define AI Skills for Your Context: Convene a cross-functional team to identify what AI competencies are most relevant to your company’s mission in the next 3-5 years. Use the broad categories (technical, data, usage, ethical, soft skills) as a menu (Top 10 AI Skills You Need to Land Your Dream Job in 2024 | FDM Group UK) (Revealing an AI Literacy Framework for Learners and Educators – Digital Promise). This exercise yields your working definition of "AI skills" tailored to your needs, which can guide training priorities.
  • Implement Tiered Training Programs: Develop a layered upskilling program (basic AI literacy for all, intermediate for practitioners, advanced for specialists) (Employers Train Employees to Close the AI Skills Gap). Ensure ethical and safe AI use is taught at all levels as a non-negotiable component.
  • Lead from the Top: Make AI skills a leadership agenda. Have executives publicly participate in training and talk about their own learning journey. Incorporate AI objectives into business plans and managers’ goals, signaling that it’s part of core strategy (Reskilling in the Age of AI).
  • Foster a Learning Culture: Encourage experimentation and knowledge sharing. Set up internal forums, communities, or “AI champions” networks. Recognize and reward teams that integrate AI into their workflows in value-generating ways. Normalize continuous learning as part of work (e.g. allow a few hours a week for skill development).
  • Monitor and Adapt: Track progress with both quantitative metrics (uptake, productivity gains (AI is shifting the workplace skillset. But human skills still count | World Economic Forum), etc.) and qualitative feedback (surveys on confidence, focus groups on challenges). Be ready to iterate on the training content and approach every 6-12 months as AI technology and employee needs evolve.
  • Engage in Ecosystem Initiatives: Join or form partnerships with other companies, educational institutions, or industry groups to share resources on AI training (Reskilling in the Age of AI). Advocate for broader educational reforms that align with industry needs, such as updating school curricula to include AI literacy. A rising tide lifts all boats – having a larger talent pool benefits your organization too.
  • Prepare for Future Skills: Through strategic foresight exercises, identify emerging skill areas (e.g. AI auditing, AI-human teamwork facilitation, etc.) and start building at least foundational knowledge in those for key teams. This can be done by pilot projects or sending employees to specialized courses.

In closing, the statement that “everyone wants to teach AI skills but nobody knows what they are today or for the future” captures the uncertainty and urgency of our times. Through a comprehensive review, we’ve charted a path to reduce that uncertainty. While we may not predict every skill needed in 2030, we do know that adaptability, lifelong learning, and a balanced skill portfolio will be crucial. The organizations that embrace this philosophy will not fear AI as a disruptor, but harness it as a co-driver of innovation. As CAIO of SYRV.AI, my final insight is this: the true skill we are teaching is not about AI per se, but about how to thrive in partnership with AI. If we can instill that mindset in our workforce – curious, resilient, ethical, and empowered in the face of intelligent machines – then whatever the future holds, our organizations will be ready to learn and lead.
