AI Insights: Focus – EU AI Act

It’s Tuesday afternoon, and someone casually asks, “Is the new essay feedback tool we want to trial EU AI Act compliant?” If your first thought is, “I have no idea,” you are definitely not alone.

At my school, we have been working on AI for around two years, training staff on how to use it for lesson planning and other tasks. But here’s the challenge: edtech is evolving so quickly. AI features are being added to tools all the time, and chances are, staff are already using a mix of apps and AI browser extensions that leadership teams might not even know about. It is a lot to stay on top of.

It can feel overwhelming, but the reality is that as a school, we need to get ahead of this.

The EU AI Act is here to guide us in using AI responsibly and ethically. It officially came into force on 1 August 2024, and the first compliance deadlines apply from 2 February 2025. While it might sound like another big task, it is also an opportunity to get organised and create a strong foundation for the future.

Let’s look at some practical steps to help your school prepare.


What Is the EU AI Act?

The EU AI Act introduces regulations for AI systems, categorising them by risk level: minimal, limited, high, and unacceptable. Schools will need to evaluate their current tools, meet specific compliance deadlines, and ensure that staff and students are trained to use AI responsibly and ethically.


Key Compliance Dates for Schools

2 February 2025: First Steps

  • Review your tools and identify any that are prohibited, such as systems that manipulate behaviour or create discriminatory outcomes.
  • Begin staff training on responsible and ethical AI use to ensure compliance with the Act's foundational requirements.

By this date, consider appointing an AI Literacy Coordinator. This person will develop and oversee AI literacy programmes to help staff and students understand the opportunities, risks, and ethical considerations of AI tools.


2 August 2025: Preparing for High-Risk Tools

Start preparing for AI systems that fall into the high-risk category. These include systems that influence grades, admissions, or other outcomes that can significantly shape a student’s educational or professional future.


Before August 2026: Addressing High-Risk AI Tools

If your school plans to implement high-risk AI systems, take additional steps to ensure compliance:

  • Provide in-depth training so staff understand how these systems work, their limitations, and the safeguards in place.
  • Conduct risk assessments to evaluate how these tools might affect students’ rights, well-being, and outcomes.
  • Establish transparency by clearly communicating with staff, students, and parents about the tools being used, their purposes, and any limitations.


2 August 2026: Achieve Full Compliance

By this date, schools must fully comply with all requirements for high-risk tools. This includes completing risk assessments, meeting data protection standards, and ensuring transparency and reporting processes are in place.

To meet these requirements, consider appointing the following roles:

  • An AI Compliance Officer, who will oversee risk assessments, transparency, and adherence to the Act's requirements (I would recommend having this person in role from August 2025).
  • A Data Protection Officer (DPO), who will work closely with the AI Compliance Officer to ensure personal data is managed securely and legally.


Understanding AI Risk Levels

The EU AI Act categorises AI tools based on their level of risk. Here is a quick overview:

Minimal risk: These tools are generally harmless, such as spam filters or simple educational games. You can use them with minimal oversight, but periodic checks are a good idea.

Limited risk: Tools like lesson planning assistants, chatbots for general questions, or adaptive learning platforms fall into this category. They require user awareness and regular checks for bias or inaccuracies.

High risk: This includes any AI system that significantly affects a person’s educational or professional trajectory. For example, grading systems, admissions tools, and even AI detectors could be high risk if they influence academic outcomes.

  • Example: A student-facing chatbot that provides general support is considered limited risk. However, if the same chatbot is used for evaluative purposes, such as assigning grades or influencing academic decisions, it becomes high risk.
  • Why this matters: In many European countries, universities often require internal grade transcripts from Year 10 onwards. If AI is used to influence these transcripts, it directly impacts students’ futures and must meet strict requirements.

For high-risk tools, schools must conduct detailed assessments, monitor performance regularly, and be fully transparent about their use and impact. I'm going to write another blog about my own use of student-facing AI and whether it has a high-risk impact (more to come on this).

Unacceptable risk: These tools are banned outright. Examples include systems that manipulate behaviour without user consent, discriminate based on sensitive characteristics, or lack appropriate safeguards for biometric data.
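The chatbot example above shows that a tool's risk tier depends on how it is used, not just what it is. As a rough illustration of that triage logic for a school audit (the tier names follow the Act, but the yes/no questions and the `classify` function are my own simplification, not wording from the regulation), a first pass could look like this:

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

def classify(tool_name: str, *, manipulative: bool, evaluative: bool,
             interacts_with_students: bool) -> RiskTier:
    """Very rough first-pass triage; a real assessment needs legal review."""
    if manipulative:             # behaviour manipulation -> banned outright
        return RiskTier.UNACCEPTABLE
    if evaluative:               # grading, admissions, progression decisions
        return RiskTier.HIGH
    if interacts_with_students:  # chatbots, adaptive learning platforms
        return RiskTier.LIMITED
    return RiskTier.MINIMAL      # spam filters, simple educational games

# The same chatbot lands in different tiers depending on its use:
print(classify("support chatbot", manipulative=False, evaluative=False,
               interacts_with_students=True).value)   # limited
print(classify("support chatbot", manipulative=False, evaluative=True,
               interacts_with_students=True).value)   # high
```

The point of sketching it this way is that the deciding question is about deployment context, which is exactly why an inventory needs to record how each tool is used, not just its name.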


Adopting a Sandbox Approach

The EU AI Act encourages the use of regulatory sandboxes, which are controlled environments for testing and validating AI systems. Schools can adopt this method to experiment with new tools while maintaining oversight and compliance.

In my classroom, I have been running an informal sandbox for the past two years. This has allowed me to trial new tools and refine practices. With the EU AI Act now in effect, it is time to formalise this approach.

A sandbox allows schools to:

  • Test AI tools in real-world settings, such as within specific classes or student groups.
  • Collect feedback from staff and students to identify issues and refine the tools.
  • Partner with other schools or technology providers to share expertise and resources.

This approach ensures tools are safe, effective, and ready for wider adoption.


Practical Steps Schools Can Take Now

Here are some quick, actionable steps to get started:

  1. Create an inventory of AI tools in use. You might be surprised by what staff are already using, including unapproved tools like browser extensions. Does your leadership team have a full map of these?
  2. Identify high-risk or prohibited tools. Review your inventory to flag tools that need closer scrutiny or immediate removal.
  3. Assign key roles. Appoint an AI Literacy Coordinator to lead training and an AI Compliance Officer to oversee risk management and compliance.
  4. Begin staff training. Build a foundational understanding of AI, covering how it works, its benefits, and its ethical considerations.
  5. Establish a review process. Regularly monitor AI tools for new risks or changes, ensuring they remain compliant and effective.
  6. Trial tools in a sandbox environment. Test new tools in controlled settings to gather feedback and address any issues before full implementation.
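Steps 1 and 5 above pair naturally: an inventory only helps if it is reviewed on a schedule. Here is a minimal sketch of what such a register might track (the fields, the 90-day review interval, and the example tool names are my own suggestions, not requirements of the Act):

```python
from dataclasses import dataclass
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)  # assumed termly cadence; adjust to taste

@dataclass
class AIToolRecord:
    name: str
    risk_tier: str      # "minimal" | "limited" | "high" | "unacceptable"
    approved: bool
    last_reviewed: date

    def review_due(self, today: date) -> bool:
        return today - self.last_reviewed >= REVIEW_INTERVAL

inventory = [
    AIToolRecord("Essay feedback tool", "limited", approved=True,
                 last_reviewed=date(2025, 1, 10)),
    AIToolRecord("Browser AI extension", "high", approved=False,
                 last_reviewed=date(2024, 9, 1)),
]

# Flag anything unapproved or overdue for review
today = date(2025, 2, 2)
flagged = [t.name for t in inventory
           if not t.approved or t.review_due(today)]
print(flagged)  # ['Browser AI extension']
```

A spreadsheet does the same job, of course; what matters is that every tool has an owner, a risk tier, and a next review date.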


Supporting Each Other

The EU AI Act might seem like a lot to take on, but it is also a unique opportunity for schools to lead the way in using AI responsibly. It offers a chance to enhance learning through AI while safeguarding students' rights and well-being.

It is inspiring to see Clara Lin Hawking and Darren Coxon from Kompass Education stepping up in this space. I feel they will make a significant difference.

As someone deeply interested in AI in education, I understand how much there is to learn and adapt to. Global statistics show that most teachers have not even received basic AI training, let alone support in AI literacy or compliance. This highlights how much work still needs to be done.

This is not a journey we can take on our own. Collaboration, shared insights, and mutual support are essential to ensure that AI benefits everyone in education.

What steps is your school taking to prepare for the EU AI Act?



Paul Hylenski

The AI Leader | Founder, Vet Mentor AI | 3x TEDx Speaker | Best-Selling Author | Director, ST Engineering (MRAS) | Founder, Quantum Leap Academy

1mo

Matthew Wemyss So awesome to see you crushing it with the AI and education. Really excited to see someone else showing people that AI can be used for good. Keep up the good work.

Great read! Not much to add other than, I really appreciate your attention to details that are obvious to the ‘educators on the ground’ but not always to policymakers. Schools will learn safe use of AI, get through all the mandated training, and move into a future with better AI in education. Thank you for writing this and for the tag.

John Dolman

The AI English Teacher - Teacher of Media Studies @ Ponteland High School. Former Head of Languages and Cultures Faculty @ PRINCE OF WALES ISLAND INTERNATIONAL SCHOOL | MEd, AST.

1mo

Brilliant piece - thank you for sharing your expertise Matthew Wemyss.

Darren Coxon

AI Governance, Training and Tools for Safe Innovation in Schools and Colleges.

1mo

This is great! And thanks for the mention. Good to see others taking such a thoughtful approach to such a vital topic.
