AI and the Future of HE - 10th February 2025
Hi
Hope you’re all well wherever you may be. It’s cold here in Hanoi - yes, the thermostat says 16° but trust me, weather works differently here, so that’s properly cold. We’re talking thermals, heaters, woolly socks, convincing the dog to act as a hot water bottle - the lot. Still - it’ll be a fond memory come June/July when the RealFeel breaks 40°.
Anyway, enough grizzling - here are a few notes from the world of AI to kick off your Mondays (edit: Tuesday - sorry for the delay; in my jetlagged state the scheduled post didn’t go out properly for whatever reason):
Deep Research Revolution: AI's New Knowledge Frontier | Tech Analysis 2025
Imagine having a research assistant that never sleeps! That's exactly what OpenAI just dropped with their newest Deep Research tool. While the naming might be a bit on the nose (yes, Google already has a similar tool with the same name), the capabilities are genuinely impressive… it will do the work for you independently - give it a prompt and ChatGPT will find, analyse, and synthesise hundreds of online sources to create a comprehensive report at the level of a research analyst. Available to Pro users now and Plus/Team users soon, it marks OpenAI’s first foray into the internet-capable research agent space Google’s Deep Research opened up back in December. These tools are pretty amazing - able to write literature reviews, review papers, and identify gaps in knowledge - doing the equivalent of hours of human work in 10 minutes or less.
Some interesting use cases are emerging already - from excellent primers on a topic to using the o3 model to provide regular updates to human-authored reviews - so there is real potential here. That said, and in the vein of the whole open-catching-up-with-closed-systems moment we’re going through, Hugging Face researchers have already effectively cloned the feature, releasing “Open Deep Research”, which is racking up comparable results.
AI Governance Milestone: EU's Bold Regulatory Step | Policy Watch
The EU has just taken a major step in governing artificial intelligence by implementing Article 5 of the EU AI Act, which outlines prohibited AI practices. The guidelines, released in February 2025, tackle everything from subliminal manipulation to emotion recognition in workplaces, with hefty penalties for non-compliance - we're talking up to €35 million or 7% of global annual turnover! The most intriguing part? The regulations specifically target AI systems that could exploit human vulnerabilities or manipulate behaviour through subliminal techniques, but they're careful to maintain exceptions for legitimate uses in areas like law enforcement and medical applications.
For education, this means a dramatic shift in learning technology! The ban on emotion-inference AI in educational settings will have an impact on the entire edTech landscape, pushing us toward more transparent, performance-based solutions. Instead of guessing student emotions, we'll likely see AI systems focusing on concrete learning outcomes and genuine human interaction. Think adaptive learning paths based on real performance data, not facial expressions! This isn't just a regulatory hurdle - it's an opportunity to build better, more ethical learning tools that enhance rather than replace human connection in education.
Campus-Wide AI: California's Half-Million Student Experiment | Education Innovation
OpenAI and the California State University system have jumped into bed together, and the scale is unprecedented: ~500k students and >60k staff and faculty across 23 campuses, all with access to ChatGPT Edu. Interesting to hear the OpenAI blurb cite “AI-powered universities” worldwide (e.g., Arizona State, Harvard, Oxford, and the Wharton School) as “taking steps to make AI as fundamental to their campus as using the internet”. Sounds great - provided it’s structured in some meaningful, constructive way.
Good to see that there are strategies in place to drive this integration: a centralised AI hub for training and PD; training to enhance educator and professional staff skill sets in this area; and WIL-sounding workforce training and apprenticeships in partnership with industry leaders. It will be interesting to watch this whole piece play out - are we headed for a world where institutions commit to a single AI the same way they do Google vs Microsoft, Canvas vs Moodle, Teams vs Zoom, etc.? And if so, what will that cost us?
The Human Element: Anthropic's Authentic Voice Paradox | Industry Insight
The AI industry is booming, but here's a hilariously ironic twist - Anthropic, creator of Claude (one of the world's most popular AI writing assistants), is explicitly asking job applicants NOT to use AI in their applications. Across nearly 150 job listings, from software engineering to comms roles, they want to "understand your personal interest without mediation through an AI system." Talk about practising what you preach!
The timing couldn't be more fascinating. Fresh off the schadenfreude of Sam Altman and OpenAI getting rattled by DeepSeek's breakthrough, Anthropic's CEO Dario Amodei declares the AI race "existentially important" - yet simultaneously wants employees who can think independently of these tools! It's a striking paradox that perfectly captures our current tech moment - as AI becomes more sophisticated at mimicking human communication, the ability to think and write independently might just become our most valuable skill.
Reality Redefined: ByteDance's OmniHuman Breakthrough | Tech Ethics Spotlight
ByteDance, TikTok's parent company, just levelled up the image-to-video space with the release of OmniHuman, a tool that can create lifelike videos of people (or animals!) doing just about anything from a single photo. We're talking Einstein giving lectures, cats doing backflips, and virtual influencers who never need a coffee break! Trained on over 18k hours of human video data, this AI powerhouse can generate footage of people talking, gesturing (the hands look great!), even singing and playing instruments - all from just one static image.
But here's where it gets deeply concerning: NYU's Samantha G. Wolfe warns these AI-generated videos are becoming increasingly indistinguishable from reality, especially on phone screens where most content is consumed. The implications for misinformation are staggering - imagine perfectly crafted deepfakes of business leaders announcing fake mergers or politicians making false endorsements. While ByteDance claims they didn't train on TikTok data and promises transparency measures, the question remains: are we ready for a world where seeing is no longer believing?
---
What a ride! From research agents to reality-bending videos, OpenAI's army of students to Anthropic's human-only job apps - we're watching the AI landscape transform by the day. But here's what sticks: while the tools get wilder (hello, OmniHuman!), the core challenge remains surprisingly human. How do we harness this tech while keeping education's soul intact?
The future's looking wild, my friends. See you next Monday!