9 O'CLOCK NEWS - April

Welcome to our latest monthly roundup, where we deliver the freshest technology and culture insights from our team straight to your inbox.

It's all about AI and spatial computing this month as we explore the latest updates from Apple, Meta and NVIDIA, plus we fight the corner for why technology can be a vital educational tool.

Let's dive in.


Image credit: Meta

01. META’S SCENESCRIPT USES MACHINE LEARNING TO BETTER UNDERSTAND REAL-WORLD ENVIRONMENTS

Meta's Reality Labs has developed SceneScript, an AI tool that can understand and interpret a physical space based on camera feed data, allowing developers to create even more immersive spatial experiences for audiences.

For now, SceneScript exists only as a research project, but theoretically it could take data from a headset camera feed and use a machine learning model to determine what’s in a user’s space - predicting architectural tokens such as Wall, Door and Table. This level of contextual understanding is not currently possible when working with mixed reality headset camera feeds - a topic we’ve been exploring recently as part of our spatial experiments series - but it would certainly be a game-changer for users and developers.

For the user, environments can be more easily scanned and instantly understood by the SceneScript model, making the mixed reality onboarding process much smoother. And for developers, this tool opens up numerous opportunities to make spatial experiences more immersive than ever. By leveraging object recognition and state information, we would be able to tell that the dark mesh shape in the user’s room is in fact a chair, and adjust its properties to transform it into a virtual gold throne. Or we could tell that the space to the left of the chair is in fact an open doorway, giving us an entry point to send virtual characters into the room.

We hope to see this research project evolve into a usable tool soon - it could be a milestone moment, allowing brands to create tailored spatial experiences with a much higher level of environmental adaptability.
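To make the idea concrete, here’s a minimal sketch of how a developer might consume SceneScript-style structured output to drive the chair-to-throne and doorway examples above. SceneScript has no public API, so the token format, field names and helper function below are entirely hypothetical illustrations, not Meta’s actual interface.

```python
# Hypothetical sketch: SceneScript is research-only, so this token
# format and the helper below are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class SceneToken:
    kind: str        # predicted label, e.g. "wall", "door", "chair"
    position: tuple  # (x, y, z) centre of the detected element, in metres
    extent: tuple    # (width, height, depth) bounding box


def plan_overlays(tokens):
    """Map recognised real-world elements to virtual overlays or entry points."""
    overlays = []
    for token in tokens:
        if token.kind == "chair":
            # Re-skin the recognised chair as a virtual gold throne.
            overlays.append(("gold_throne", token.position))
        elif token.kind == "door":
            # Use the open doorway as a spawn point for virtual characters.
            overlays.append(("character_entry", token.position))
    return overlays


# Example: the model predicts a chair, a doorway and a wall in the room.
scene = [
    SceneToken("chair", (1.0, 0.0, 2.0), (0.5, 1.0, 0.5)),
    SceneToken("door", (0.0, 0.0, 3.5), (0.9, 2.0, 0.1)),
    SceneToken("wall", (0.0, 0.0, 4.0), (4.0, 2.5, 0.1)),
]
print(plan_overlays(scene))
```

The value of structured scene tokens, as opposed to a raw mesh, is exactly this: experience logic can branch on semantic labels rather than geometry.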



02. APPLE’S NEWLY-ANNOUNCED REALM ADDS TO THE CONTINUING PROLIFERATION OF AI

Apple just announced their new AI model, ReALM - which promises to make Siri smarter by helping it understand conversational and background context.

One of the biggest issues with AI chatbots up until now has been their inability to understand context, often not knowing what reference words like “it” and “that” are referring to - but ReALM will change that, allowing Siri to infer those details. There are suggestions it could even outperform Large Language Models like GPT-4 (the artificial brain behind ChatGPT), being able to decipher information from screenshots and other on-page images - which could be pretty useful day-to-day.

The announcement comes ahead of the launch of iOS 18 at WWDC in June, where we predict Siri 2.0 will be a huge topic as Apple pushes to make AI a core part of their business offering. This move is sure to impact their ecosystem - we’re excited to see how they plan to integrate AI into their key product offerings, from mobile through to spatial. And above all else, this is yet another indicator of the proliferation of AI. It’s not just another tech trend to base an advertising campaign around for clout - it’s a utility that’s being integrated into everything around us and adopted by audiences as the norm. Brands need to adjust their view on AI if they haven’t already.



Image credit: Duolingo

03. IS A RELIANCE ON AI FOR EDUCATION BAD FOR OUR BRAINS?

A new study has found a link between reliance on ChatGPT for schoolwork and memory loss and falling grades among students. But can tech-led assistants still provide educational support?

An early but fascinating exploration of the swift impact that Large Language Models have had in education has found a link between AI assistants and memory loss. Unsurprisingly, the researchers found that students under a heavy academic workload and time pressure were much more likely to use ChatGPT during their studies. They observed that those who formed a reliance on ChatGPT reported higher procrastination, memory loss and a drop in GPA.

If students can easily gain access to answers without having to engage brain power, these results are to be expected. The same has been said before about calculators and even computers. But the reality is, technology can actually be incredibly beneficial for education - when used correctly. A recent article from UNESCO highlights that generative AI can be used to adapt educational content to suit individual learning interests, pace and abilities, catering for all-important diversity both in and out of the classroom. AI assistants built to support learning with the right mechanics and frameworks in place can also be hugely impactful - take Duolingo for example, whose focus on repetition and gamification hugely aids language learning.

The power of mixed reality tells a similar story - interactive AR has the potential to be 150% more effective in teaching students new information and skills versus passive content. Our own research initiatives have revealed similar insights, with our work for Meta and key institutions including The V&A and NASA helping to improve history, science and nature education for audiences using interactive AR social filters.

When harnessed correctly, technology can prove to be an invaluable tool for education. But the key is for brands and institutions to create an experience where the tech is used to elevate learning through interactivity, game mechanics and other frameworks, rather than being relied upon as an instant data supply.



04. NVIDIA IS USING AI TO MAKE VIRTUAL CHARACTERS MORE HUMAN

As demoed at the Game Developers Conference in San Francisco, the tech giant’s generative AI ‘digital human’ tools are being used to voice, animate and write dialogue for avatars including non-player characters (NPCs) within games.

NVIDIA’s ACE (Avatar Cloud Engine) allows NPCs to respond to players in unique ways, generating real-time responses that fit the live gameplay - an exciting advancement in creating instantaneous interactions which feel conversational and authentic.

There could be a number of ways for brands to utilise this tech to boost their own digital avatars and make them more human across a number of applications, from game characters to celebrity chatbots and even virtual assistants. We’ve been doing some thinking lately around the new era of fan-idol dynamics and how AI is impacting the way audiences can connect with their favourite virtual characters - we feel NVIDIA’s tools could very well open up new ways for fans to feel even closer to their idols. Watch this space.



05. WILL SONY’S MIXED REALITY CONTROLLER MAKE APPLE RETHINK ITS HANDS-FREE APPROACH?

Sony’s novel device, designed for use with its upcoming mixed reality headset, caters to precise 3D interaction and manipulation and could give users more control and enjoyment when it comes to spatial experiences.

Apple’s bold decision to opt for eye and hand tracking with the Vision Pro is a paradigm shift for facilitating lightweight, intuitive interaction, but they may very well need to support immersive hardware input sometime in the future. Sony’s take on how to serve up ‘pro’ style controllers could either point the way forward for the Vision Pro or show a clear division in company strategies.

Despite its intuitive appeal, the hands-free Vision Pro experience lacks haptic feedback, which isn’t great for establishing physical relationships with digital objects. Plus, using an empty hand to ‘pinch and drag’ doesn’t provide the sort of precision grip you could get from tracked controllers - an important point for professional users creating 3D content or carrying out engineering tasks. But for those using the Vision Pro for personal productivity and requiring only light interaction with applications, hand and eye tracking is a revelation.

According to a patent application from Apple, a Pencil-esque device could be on the cards. We’re intrigued to see which way Apple goes - will they stick to their guns and pave a new path for UI, or concede that a carefully designed controller would provide greater precision and tactility?

For us, a hugely important aspect of successful spatial experiences is their ability to make things multisensory with multimodal inputs. We feel that there are clear use cases and opportunities for both methods of control, but brands should consider what type of experience they want to create first before deciding on the hardware. This will help them facilitate the right user interaction for their audience.



AND ONE MORE THING...

Over the past couple of months, we've been experimenting.

Since launching Spatial by UNIT9 and crafting a set of Experience Principles to keep us grounded in a spatial-first mindset, we’ve been getting down to the real fun, carrying out a little R&D to bring each principle to life. The results of our first three experiments are now out in the world - from exploring how spatial can help us connect people in new ways, to testing out how we can turn a physical room into a responsive play space.

See what we've been up to over on our dedicated spatial microsite.


THANKS FOR READING! SEE YOU NEXT TIME :)

