AI Expert! vs. AI Expert: How School Leaders Can Spot the Difference
The realization of how much money can be made in education on the back of advanced AI has brought many different voices into the conversation over the last twenty months. Some are sharks circling for a quick profit. Some are dolphins, excited by the glittery possibilities and eager to try new things. And then there are the blue whales. These are the ones with the depth, the reach, the experience, and the weight of knowledge to guide schools safely through this shift. They were there long before ChatGPT became a household name.
Spotting the difference in an ocean of choices, and understanding where to invest resources, can be challenging, especially since many self-proclaimed experts present themselves with confidence and authority. But when it comes to the matter of AI governance, it is advisable to take a closer look before relying on their guidance. School leaders face increasing pressure to integrate AI while ensuring compliance, safeguarding, and ethical implementation. The challenge is not just adopting AI, but doing so in a way that protects students, staff, and institutions.
The difference between an AI Expert! and an AI Expert in AI governance is not just punctuation. It is a matter of credibility, accuracy, and the safety of students and staff. Very few people have real expertise in AI governance in education, education policy, compliance, and risk management. That makes it all the more important to ensure that the guidance schools receive is built on experience and a deep understanding of regulation, ethics, and technical realities.
School leaders must distinguish between enthusiasm and expertise when seeking guidance on AI governance, compliance, and risk. Someone speaking confidently about AI in education is not necessarily an expert. Beware of sweeping claims that lack references to legal frameworks, policy discussions, or technical realities. Be cautious when advice is vague or overly simplistic, especially when it comes to compliance and student data protection. If someone cannot provide clear answers backed by real-world experience, their guidance is not just unhelpful. It is risky.
An AI Expert! is often an early adopter who enjoys experimenting with AI in the classroom. That is valuable, but excitement around AI in education does not make someone an AI Expert in safety, compliance, regulatory obligations, or ethical AI implementation. Governance and compliance are not about experimenting. They require a foundation in legal frameworks, risk assessment, school policy, and AI ethics.
A major red flag is reckless enthusiasm. If someone introduces an AI tool and encourages teachers to set up private accounts or just experiment, that is a problem. The first step should always be to go to the head of school, group, or district for approval, and then work with leadership, IT, and compliance officers to set up a safe pathway for experimentation. Every AI tool used in a school must be reviewed to ensure that it does not create safety risks, privacy risks, or legal violations. Schools worldwide must ensure compliance with applicable regulations. AI in education requires structure and oversight. Schools cannot afford to rely on casual advice.
A lack of clarity is another warning sign. If an AI Expert! does not have a technical understanding and cannot clearly explain how AI tools process and store student and staff data, what legal risks schools face under the EU AI Act or other regulations, and what specific detailed steps are required to remain compliant, then their advice is not just unhelpful. It is dangerous. AI governance is concerned with verifiable facts and accountability, not opinions. If someone claims to be an expert but is not involved in global regulatory discussions, does not contribute to policy forums, and has no background in AI governance frameworks, their credibility should be questioned.
Another warning sign is the tendency to publish a book rapidly without external review or subject matter expertise. In the past, publishing a book or article was a mark of authority, a sign that the author had undergone rigorous scrutiny and contributed meaningful knowledge to the field. But today, anyone can self-publish a book or flood online spaces with articles, and publication alone is no longer proof of expertise.
A growing number of self-published works on AI governance contain inaccurate, misleading, or entirely incorrect information. At a minimum, any book on AI governance, compliance, law, or ethics should be reviewed by subject matter experts before being considered credible. Publishing an authoritative book on these topics, one that is well-researched, peer-reviewed, and grounded in expertise, takes at least 12 to 24 months, sometimes longer. Frequent AI articles, even in well-known publications, that lack citations or grounding in regulation, law, or best practices should also raise concerns. Schools should be skeptical of anyone who appears to prioritize visibility over accuracy.
AI governance is not a quick fix. AI integration takes careful planning. AI literacy requires time and structured learning. Compliance is an ongoing responsibility. Keeping a school safe demands diligence, not shortcuts. This work requires precision, and it takes years of training and practice to understand.
Schools need to start building their AI governance frameworks right away, but rushing to take guidance from just anyone can lead to costly mistakes.
AI governance should be treated with the same seriousness as safeguarding efforts. Just as schools implement clear protocols to protect student safety, well-being, and data privacy, AI governance requires structured policies, expert oversight, and an evidence-based approach to prevent harm. The risks of poorly managed AI implementation, whether in biased algorithms, data security breaches, or legal non-compliance, are real and must be approached with the same diligence as other safeguarding responsibilities. Schools must get it right. There are legal, ethical, and technical implications that take time to learn. Enthusiasm is not enough. It takes experience to steer a steady course.
If you are a school leader asking, "How do I actually ensure that my students, teachers, and staff are safe using AI?" and the answer you receive is vague, speculative, or overly simplistic, then you are not speaking with an expert. The guidance you follow must be grounded in real knowledge.
The waters of AI in education are full of movement. Some voices make a lot of noise. But when it comes to protecting schools and making informed decisions, it is the blue whales who matter. They are the ones with the depth to get it right.
AI Expert! Red Flags
An AI Expert: What to Look For
Author: Clara Hawking
Clara is the Co-founder and Executive Director of Kompass Education, specializing in AI governance, compliance, and ethical AI implementation in schools.
Learn more and access resources at: https://www.kompass.education/
Founder & CEO at InfAI | Lawyer specialising in AI development and deployment | Juris Doctor | LL.M (ICT Law) | M.A (Philosophy)
1 day ago: A great read, Clara, and thank you for sharing. Two comments I'd like to add here: 1. The real challenge isn't just explaining AI governance; it's getting people to care. That's why your piece resonated. The conversations that matter aren't abstract debates; they're about getting leaders to ask the right questions in the first place. Questions such as the one you pose here: 'How do I actually ensure that my students, teachers, and staff are safe using AI?' But when you're up against the allure of the magical, fabled AI quick fix, that can be hard. 2. On a personal level, stepping from academia and law into the start-up world has meant rethinking how I communicate insights. In the former, it's 'reference or die' and an unsubstantiated claim is as useful as a chocolate teapot. In the latter, I quickly realised, often mid-sentence, that 99.9% of people outside of the AI governance field do not want to hear about the ongoing debates surrounding the EU's AI Act over their morning coffee. I can now pinpoint the exact moment when someone's eyes glaze over. It happens somewhere between "regulatory" and "compliance". (But maybe that's just me!)
I Help EdTech & AI Startups Grow with High-Impact Content | White Papers, Case Studies & Thought Leadership | DM 'AI EdTech' to Connect
1 week ago: Clara Lin Hawking, I really enjoyed reading this article because it touched upon many aspects of AI governance, including how some firms are jumping the gun and asking teachers directly to pilot their own AI research projects without approval from the school district or board. I also found your description of the different types of people entering this field without the know-how and depth of knowledge that comes along with it very interesting; the whales, sharks, and dolphins analogy was poignant. Thank you for such an insightful read. I am in the early stages of learning about generative AI and have taken some courses on LinkedIn to broaden my understanding.
K-12 Learning Innovator | Eclectic Inquisitive | Educational Technology Amalgam | High Octane Neuroplastic Divergent, GLADiator
1 week ago: Thank you for this article. The consequences of inadequate data safety are far-reaching and cause a ripple effect. The true damage is often not felt for years.
Chief Executive Officer at Blackthorn Vision
1 week ago: Insightful points, Clara Lin! What defines "real" expertise?
International consultant and expert advisor in education, AI, sustainability, citizenship, disruptive technologies. HC for Côte d'Ivoire to Ireland. Sainchomhairleoir & Consal Oinigh don Chósta Eabhair go hÉirinn.
1 week ago: Again, the #aidebate must be continuous and ever evolving, as is #AIoT. I would openly debate certain aspects of spotting the 'spoon-full-of-sugar', so-called AI experts, and yet AI governance in AI leadership can never be compromised in any way. When I set about publishing my Blogs 1-9a-10, the importance always lay in ensuring their foundation in current, global research and evolving policy. Citations and renowned frameworks, AI procedures, trusted sources, and exemplars play a central role in furthering the #aidebate, or what I refer to as the #Elephant-In-The-Room. I totally agree with Irene Stone and her points. Children's lives can never be compromised by negligence, lack of validity, or ignorance of the facts. Leadership must also be guided by proven governance initiatives and procedures in education. I know I know nothing; AI experts must surely earn that title over time with a proven and tested record. Fifty years of AI, yes, and yet to many people AI has only just arrived and is seen as a tool to play with. Tools can be, and in the wrong hands are, dangerous. INFORMED #choices and #options, always. Beware the #SpoonFullOfSugar AI tools and applications, as #diabetes and #obesity may well become side-effects.