Newsletter #6
Welcome to Edition Six of the Edvance AI Weekly Digest.
This week’s edition is packed with the latest AI updates to keep you informed.
As always, the full newsletter is available on Substack (subscribe here to get the complete version + article links in your inbox every Friday), but for LinkedIn convenience, here’s the condensed version.
Enjoy the read!
TL;DR for this week:
(Remember to click the underlined, bold sections of the rest of the newsletter to get the sources & all the links you require.)
- Microsoft and G42 launch AI centres in Abu Dhabi, focusing on responsible AI.
- "Humanity's Last Exam" invites public input for challenging AI.
- OpenAI's safety committee shake-up raises oversight concerns.
- Slack introduces AI productivity tools, and Snap reveals AR glasses.
- World Labs, led by Fei-Fei Li, develops AI models with spatial intelligence.
All the latest in AI this week…
- Microsoft and UAE-based AI company G42 are opening two centres in Abu Dhabi focused on "responsible" AI initiatives. This follows their $1.5 billion partnership announced in April, showcasing the UAE's ambition to become a global AI leader. The first centre will develop best practices in responsible AI, while the second will focus on tasks like creating large language models for "underrepresented languages".
- Despite the emphasis on AI development, these centres are not yet operational. The companies aim to ensure "generative AI models and applications are developed, deployed and used safely" in the future. This move comes amid regional competition and addresses US concerns about technology transfers to China. G42 has divested its Chinese investments and confirmed it doesn't conduct business with US-restricted entities, but the timeline for rolling out specific AI features remains unclear. As ever, these are some of the most exciting times to be alive!
- The Center for AI Safety (CAIS) and Scale AI have launched "Humanity's Last Exam," a global initiative to create challenging questions for AI systems. This response comes as the latest models, like OpenAI's o1, easily pass existing benchmarks, making it difficult to differentiate their capabilities. The project aims to develop a test that could stump even highly intelligent language models for years, shifting focus from simple task completion to true reasoning assessment (almost like a modern-day AI Turing Test, but with YOUR input).
- Interestingly enough, the initiative is calling for public input to gather the most difficult problems across various fields, including math, rocket engineering, and analytic philosophy. Questions should be original, objective, and challenging for non-experts (click here to see all the details and to contribute).
- By developing this comprehensive, long-term benchmark, the project aims to maintain a clear distinction between human and AI cognitive abilities, ensuring more accurate assessment of AI progress in complex reasoning tasks. But how long will these “last exams” actually last?
- OpenAI CEO Sam Altman has stepped down from the company's safety oversight committee, which will now operate "independently" with existing board members. This move follows a tumultuous year for OpenAI's leadership and increasing scrutiny over its security practices. The committee, led by Chair Zico Kolter, includes tech industry veterans and is tasked with reviewing OpenAI's safety processes. However, the committee's independence has been questioned, with former employees criticising Altman's previous involvement and citing concerns about marginalised safety considerations.
- Amid increasing pressure, OpenAI has significantly stepped up its federal lobbying efforts, spending $800,000 in the first half of 2024 compared to $260,000 for all of last year. The committee will continue to receive reports on technical assessments for current and future models, as well as ongoing monitoring. It has already been involved in a safety review of OpenAI's newly launched Strawberry release. Questions, of course, remain about who will determine what constitutes "valid criticisms" as the push for AI safety regulation gains momentum.
Keep an eye out for these up-and-coming innovations…
- Slack is rolling out a suite of AI features to boost productivity, including Agentforce Agents for seamless CRM data access and sales training. The update also introduces third-party AI agents from companies like Cohere and Adobe, with more partnerships planned. Existing features get a makeover too: Workflow Builder now has a chatbot interface for easier automation, while Slack AI can extract key information from meetings and organise it into canvases.
- Snap, the company behind Snapchat, has unveiled its fifth-generation Spectacles AR glasses, showcasing a leap in augmented reality tech. With hand gesture controls and partnerships with tech giants like OpenAI and Lego, Snap is positioning itself as a developer-friendly AR platform. Available to developers for $99 monthly here, these Spectacles signal Snap's ambitious push into the AR frontier, potentially reshaping how we interact with digital content in the physical world. Could these be a challenger to Meta's AI-powered Ray-Bans?
- World Labs, a new startup co-founded by the “godmother” of AI, Fei-Fei Li, has secured over $230 million in funding to develop AI models with spatial intelligence capabilities. World Labs aims to build large world models (LWMs) that generate interactive 3D worlds from image data, potentially revolutionising fields such as autonomous vehicles. This signals a growing interest in Spatial AI, which combines AI with geospatial data to enhance understanding of the physical world. (THIS, personally, is what excites me about the future of AI.)
AI Action Corner
This week, let’s focus on AI’s role in personal growth and self-reflection - boosting productivity at work isn’t the only area where AI can be put to use.
Here's a "Self-Improvement with AI" challenge to explore:
- Journaling Assistant: Use an AI writing tool to prompt daily reflections. Ask it to generate thought-provoking questions about your goals and experiences.
- Skill Gap Analysis: Input your current skill set and dream job description into Claude AI. Ask it to identify skills you might need to develop (a short code sketch for this one follows below).
A really important point: ensure that you are keeping personal data secure - do NOT include deeply personal pieces of information when creating prompts or interacting with AI.
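For those comfortable with a little code, here's a minimal sketch of the Skill Gap Analysis idea using the Anthropic Python SDK. Treat it as an illustration only: the model ID and the example skills/role are placeholder assumptions on my part, and the Claude web interface works just as well for this exercise.

```python
# pip install anthropic  (assumes ANTHROPIC_API_KEY is set in your environment)
import anthropic

client = anthropic.Anthropic()

# Keep these generic: no names, employers, or other personal details.
current_skills = "Python basics, Excel, stakeholder reporting"
dream_role = "data analyst at a mid-sized retailer"

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # placeholder model ID; swap in whatever is current
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": (
            f"My current skills: {current_skills}. "
            f"Target role: {dream_role}. "
            "List the five most important skill gaps and one concrete way to close each."
        ),
    }],
)

print(response.content[0].text)
```

The same pattern works for the Journaling Assistant: swap the message for something like "Give me three reflective questions about this week's goals" and run it each morning.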
For the rest of the content, including AI sources of inspiration (+ BONUS information), subscribe here.
PS. If you’re interested in working with Edvance AI (an AI upskilling consultancy supporting businesses with effective AI implementation) or want to find out more, drop me a DM (our website is currently under construction).
PPS. If you’re a STEAM (science, technology, engineering, arts and maths) student from any part of the world and want to join our ever-growing community at Based In Science, also DM me. We would love to have you.