Beyond Screens: The Future of Computing and Content with Mixed Reality and Spatial Computing by Quinn JK Banks, M.B.A.
Quinn Banks, M.B.A.
Cyber Strategy Consultant | Emerging Tech | XR Advocate | Thought Leader | Entrepreneur | Innovation Advisor @ XRSI | MBA
Now that the hype of the Metaverse has died down significantly and many people have turned their attention to Generative AI and its kin, the focus on creating a compelling XR experience has taken on a new sense of urgency for a few organizations. Apple has introduced a Mixed Reality offering that is on the verge of changing how we engage with every device that has a screen as its main interface.
In this paper, I will talk about mixed reality (MR), Apple’s vision of Spatial Computing, and how we might see the beginnings of something that could significantly impact how we engage with every device with a screen today.
Before taking the plunge, let us first establish a basic understanding of what the term XR means and the state of MR today. This paper will focus on the potential impact of Mixed Reality on various industries and consumers.
Apple does not endorse this perspective. The author does not have access to inside information about products being developed by Apple or any other organization. These ideas come solely from the author's creative mind, industry observations, extensive mobile app/game development experience, and perhaps too much sci-fi.
The best way to start is to examine what XR means and some of the technologies influencing it today.
Extended Reality (XR) is the umbrella term for experience-based technologies: Augmented, Virtual, and Mixed Reality. XR spans real and virtual environments generated with computer technology and wearable devices to provide fully and semi-immersive experiences. Each offers a degree of augmentation: in some versions, users can bring digital objects into reality; in others, they can see physical things within the digital environment. All are interactive to varying degrees.
Virtual reality (VR) is an immersive and interactive computer-simulated environment experienced in the first or third person and provides users with a strong sense of presence. In other words, the user strongly feels that they are not only in the computer-generated virtual world, but they can also interact with objects and people in the virtual world.
The critical point is how we interact with this virtual environment. Many interactions are experienced via a headset, which is essentially a high-definition screen attached to your face, or via multi-projected environments, which cover a much more significant portion of your field of vision, immersing you in a virtual, 360-degree world. To further engage the user in the virtual world, hardware and software developers are incorporating Spatial Audio. At its core, Spatial Audio is a technology that uses dynamic head tracking, detected via the VR headset, to deliver 360-degree, real-world-like sound to the experience (a minimal code sketch of the idea follows below).
This is the easiest way to think of VR: a separate and artificial world designed to change your reality and immerse you in it.
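To make the Spatial Audio and head-tracking idea described above more concrete, here is a minimal sketch using Apple's AVFoundation 3D-mixing APIs. It assumes a mono source file and head-pose angles supplied by whatever tracking system the headset exposes elsewhere in the app; the class and function names are my own illustration, not any vendor's API surface.

```swift
import AVFoundation

// Minimal sketch, assuming a mono audio file and head orientation supplied
// by the headset's tracking system elsewhere in the app.
final class SpatialAudioDemo {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    private let environment = AVAudioEnvironmentNode()

    func start(sourceURL: URL) throws {
        let file = try AVAudioFile(forReading: sourceURL)

        engine.attach(player)
        engine.attach(environment)

        // Route the mono source through the 3D environment node, then to the output.
        engine.connect(player, to: environment, format: file.processingFormat)
        engine.connect(environment, to: engine.mainMixerNode, format: nil)

        // HRTF rendering gives the "sound comes from over there" effect on headphones.
        player.renderingAlgorithm = .HRTFHQ

        // Pin the virtual sound source two metres in front of the listener's start pose.
        player.position = AVAudio3DPoint(x: 0, y: 0, z: -2)

        player.scheduleFile(file, at: nil)
        try engine.start()
        player.play()
    }

    // Called every frame with the user's head pose. This is what "dynamic head
    // tracking" boils down to: the listener rotates, the sources stay put in
    // world space, so the mix rotates around the user.
    func updateListener(yaw: Float, pitch: Float, roll: Float) {
        environment.listenerAngularOrientation =
            AVAudio3DAngularOrientation(yaw: yaw, pitch: pitch, roll: roll)
    }
}
```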
Augmented Reality (AR) is an immersive and interactive environment in which virtual content is spatially registered to the real world and experienced in the first person. This virtual/real-world combined experience is achieved via a smartphone, a set of AR glasses, a desktop/laptop, or even a headset. With AR, you can always see what is right in front of you, but with an added virtual layer.
Overall, you can think of augmented reality as a platform that delivers contextually relevant information automatically integrated into our perception of the physical world.
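As a concrete, if simplified, illustration of "spatially registered" content, the sketch below uses Apple's RealityKit to pin a virtual object to a real horizontal surface. It assumes an iOS app that already has an ARView on screen with camera permission granted; the function name and the cube itself are purely illustrative.

```swift
import UIKit
import RealityKit

// Minimal sketch: anchor a virtual cube to the first real horizontal surface ARKit
// detects, so the digital object stays registered to the physical world as the user moves.
func placeVirtualLabel(in arView: ARView) {
    // Anchor to a detected horizontal plane at least 20 cm x 20 cm (a table, the floor, etc.).
    let anchor = AnchorEntity(
        .plane(.horizontal, classification: .any, minimumBounds: [0.2, 0.2])
    )

    // A 10 cm cube standing in for any virtual content.
    let cube = ModelEntity(
        mesh: .generateBox(size: 0.1),
        materials: [SimpleMaterial(color: .cyan, isMetallic: false)]
    )
    cube.position = [0, 0.05, 0]   // rest it on top of the plane rather than intersecting it

    anchor.addChild(cube)
    arView.scene.addAnchor(anchor)
}
```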
Mixed Reality (MR) is the merging of real and virtual worlds to produce new environments and visualizations where physical and digital objects co-exist and interact in real time. Mixed Reality takes place not solely in the physical world or the virtual world, but in a blend of the two, encompassing both augmented reality and augmented virtuality.
While MR can currently be experienced on smartphones and headsets, the future of MR is even more exciting. Consider, for example, how MR could be used to enhance tourism. Imagine visiting a famous monument, like the Colosseum in Rome, and seeing a virtual overlay that allows you to view the structure as it would have looked in its heyday. With MR, you could see the structure in its original state, the paint color, banners, and landscaping without making any physical changes.
The best way to imagine the future of Mixed Reality (MR) is by looking at examples like those seen in movies. Take Marvel's Iron Man[1], for example. In the movie, the protagonist, Tony Stark, uses MR technology to create virtual objects and environments he can interact with as if they were real. Using MR, he's able to quickly prototype new inventions without the need for screens or monitors.
What if we took the components of the AVP and turned them inside out? Now, instead of putting the device on your head for you to experience a virtual environment, what if we put the device on the table and beamed content within view of the user without using a headset? - Quinn JK Banks
Spatial Computing
Now, take the definition of XR from above, the concepts it refers to, and the potential impact of Mixed Reality's use cases, and you begin to see why Apple entered this arena when it did. Many headset manufacturers have introduced MR and AR headsets, but at their core, these devices focus simply on creating an extended experience. Apple took the first step of this soon-to-be very interesting journey in June 2023, when it introduced the Apple Vision Pro to the world. However, Apple did not refer to its new product as a Mixed Reality device; it calls it a Spatial Computer. Some analysts speculated that Apple chose this name because the device is a standalone computer system and essentially the beginning of a new product line. Others speculated that it is one piece of a massive ecosystem Apple is creating.
For many people, this was their first time hearing the term Spatial Computing, yet the term has been around since 1985, when it was used to describe computations on large-scale geospatial information. In 2003, Simon Greenwold, a researcher at MIT, wrote a paper on Spatial Computing[2] that defined it as “human interaction with a machine in which the machine retains and manipulates referents to real objects and spaces.” In simpler terms, we are talking about a device that can create digital objects, place them in the real world where we can interact with them, and have those objects be affected by the environment in turn.
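To ground Greenwold's definition in something you can already try today, here is a hedged sketch using ARKit/RealityKit scene understanding on a LiDAR-equipped iPhone or iPad: the real room is meshed and turned into collision geometry, so a virtual ball is genuinely "impacted by the environment" when it falls and lands on the actual floor or table. The function name and objects are illustrative only, not Apple's recommended pattern.

```swift
import UIKit
import ARKit
import RealityKit

// Minimal sketch, assuming a LiDAR-equipped device and an ARView already on screen.
// The physical room becomes collision geometry, so a dropped virtual ball comes to
// rest on real surfaces instead of falling through them.
func enableSceneUnderstanding(for arView: ARView) {
    let config = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        config.sceneReconstruction = .mesh   // build a mesh of the physical room
    }
    arView.environment.sceneUnderstanding.options.insert(.occlusion)
    arView.environment.sceneUnderstanding.options.insert(.physics)
    arView.session.run(config)

    // A virtual ball with gravity; it will fall and land on real-world surfaces.
    let ball = ModelEntity(
        mesh: .generateSphere(radius: 0.05),
        materials: [SimpleMaterial(color: .orange, isMetallic: false)]
    )
    ball.generateCollisionShapes(recursive: true)
    ball.physicsBody = PhysicsBodyComponent(massProperties: .default,
                                            material: .default,
                                            mode: .dynamic)

    let anchor = AnchorEntity(world: [0, 1.0, -0.5])   // one metre up, half a metre ahead
    anchor.addChild(ball)
    arView.scene.addAnchor(anchor)
}
```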
The device we are talking about is the Apple Vision Pro, which combines VR and MR to create different virtual environments. Remember, we are still discussing a device you strap to your face. What if we took the components of the Apple Vision Pro and turned them inside out? Now, instead of putting the device on your head to experience a virtual environment, what if we put the device on the table and beamed content within view of the user without using a headset? Hold that image in your head and consider this: the current smartphone is reaching the end of its lifecycle, as each release brings fewer and fewer groundbreaking features. Smartphones are getting thinner and faster, and the screens are getting brighter and sharper, but the form factor is the same. Short of becoming transparent, which is probably not too far in the future, they are limited by what they can display on a screen of roughly 6.7 inches.
Instead of looking down at the small screen, why not project the content right in front of you? - Quinn JK Banks
Smartphones have come a long way from their humble beginnings, evolving into a computer, phone, music player, GPS device, and more. Today, however, the smartphone is above all a content-consumption powerhouse. What if content were projected right in front of the user instead of the user looking down at a small screen? With the latest wireless connectivity, such a device could display anything you want on any surface, or in the air right in front of you. This could revolutionize how we consume content and eliminate the limitations of a small screen. It could serve as a personal productivity assistant at work, a home entertainment system, a virtual classroom, or even an interactive game.
This new smartphone would project your content right in front of you using low-powered lasers, with gestures and an AI assistant as the primary means of interaction. By combining that projection with gesture and eye-tracking controls, voice input, and haptic feedback that stimulates nerve impulses directly (a step toward a brain-computer interface, or BCI), we begin to see the foundation of a completely new way to interact with content. This will, of course, lead to new types of content to fill this new space, which is an excellent topic for another paper.
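Since no such projector-first device or SDK exists today, the sketch below is purely hypothetical; every type in it is invented for illustration and should not be read as anyone's roadmap. It only shows one plausible shape for the interaction layer: raw signals (gaze, gesture, voice) fused into intents that get routed to whatever surface the content is currently beamed onto.

```swift
import Foundation

// Purely hypothetical sketch: none of these types exist in any shipping SDK.
// It illustrates how a projector-first device might fuse inputs into intents.

enum ProjectionSurface {          // where the content is being beamed
    case tabletop, wall, midAir
}

enum UserSignal {                 // raw inputs the device can sense
    case gaze(x: Double, y: Double)
    case pinchGesture
    case voiceCommand(String)
}

enum Intent {                     // what the fused signals mean
    case select(x: Double, y: Double)
    case command(String)
    case none
}

struct ProjectionSession {
    var surface: ProjectionSurface
    private var lastGaze: (x: Double, y: Double)?

    // Very naive fusion: a pinch "clicks" wherever the user last looked.
    mutating func interpret(_ signal: UserSignal) -> Intent {
        switch signal {
        case let .gaze(x, y):
            lastGaze = (x, y)
            return .none
        case .pinchGesture:
            guard let gaze = lastGaze else { return .none }
            return .select(x: gaze.x, y: gaze.y)
        case let .voiceCommand(text):
            return .command(text)
        }
    }
}

// Example: the user glances at projected content on the table, then pinches to select it.
var session = ProjectionSession(surface: .tabletop)
_ = session.interpret(.gaze(x: 0.4, y: 0.6))
print(session.interpret(.pinchGesture))   // select(x: 0.4, y: 0.6)
```

A real system would obviously need far richer fusion (timing windows, confidence scores, per-surface calibration), but even this toy model makes the point that the "screen" becomes just one replaceable output target.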
This tech could extend far beyond personal devices. So, instead of static signs and lights, we'd have interactive surfaces that are dynamic and responsive to the user. And the surfaces could even be aware of environmental factors, like visibility, or the user's needs, like color blindness. That could be a game-changer for the visually impaired, too. Imagine surfaces that can be "felt" using something like haptic feedback!
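As a toy illustration of that kind of adaptation, the sketch below swaps a surface's signalling palette based on a viewer's declared color-vision profile. The enum cases and color values are invented for the example and are not drawn from any accessibility standard.

```swift
// Hypothetical sketch of a surface adapting its rendering to the viewer's needs.
enum ColorVisionProfile {
    case typical
    case deuteranopia   // reduced green sensitivity
    case protanopia     // reduced red sensitivity
}

struct SurfacePalette {
    let safe: (r: Double, g: Double, b: Double)
    let warning: (r: Double, g: Double, b: Double)
}

func palette(for profile: ColorVisionProfile) -> SurfacePalette {
    switch profile {
    case .typical:
        // Conventional green/red signalling.
        return SurfacePalette(safe: (0.0, 0.8, 0.2), warning: (0.9, 0.1, 0.1))
    case .deuteranopia, .protanopia:
        // Shift to blue/orange, which stays distinguishable for red-green deficiencies.
        return SurfacePalette(safe: (0.0, 0.45, 0.9), warning: (1.0, 0.6, 0.0))
    }
}
```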
In the future, augmented spaces may be commonplace. Mobile devices, powerful room-based systems, and sophisticated sensors could work together to provide immersive and responsive experiences. This might allow us to live in an environment that adapts to our presence and preferences. For example, in the movie Her[3], the main character's apartment is equipped with voice-activated sensors, body language recognition, and strategically placed projectors. This allows the character to interact with the system using natural language and intuitive movements. The game he is playing is not simply projected onto a wall but is, instead, a holographic environment created by multiple projectors, making the room itself part of the game world.
Apple's expertise in hardware, software, user interface design, and branding makes it a potentially strong player in the MR space. They could leverage these strengths to create a comprehensive ecosystem for MR, making it easier for users and developers to access and create MR experiences. Their focus on design and user experience could help them create a more intuitive and user-friendly MR experience, which could drive the adoption of this technology. However, I don’t think they can do this alone, so I am hedging my bets on small, unknown companies pushing the boundaries far beyond what we, the users, can see today.
Conclusion:
For XR, and more specifically Mixed Reality and Spatial Computing, to be widely accepted by the average person, we must enhance the user experience beyond clunky headsets and glasses. Completely immersing a person in a fully digital experience is just the tip of a much broader set of experiences that extend beyond the form factor of a mobile device, tablet, flat or curved monitor, or the 65-inch flat-screen TV most people have in their living rooms. To do this, we must think about how to fill the space within a person's reach without requiring them to wear a device on their head. Creating a Ready Player One[4] experience, where everyone wears a headset to enter a fully virtual world, is possible. However, a semi-virtual world that requires no headset or glasses has a far greater chance of mass adoption.
The potential of this technology, like the concept we've been discussing, is vast and could impact many aspects of our lives, from how we interact with the world around us to how we work and play. While this is only a brief overview of the possibilities, it's clear that the implications are far-reaching and exciting. The concept may seem like science fiction, but with the rapid pace of technological advancement, it may not be as far-fetched as it seems.
About the Author:
Quinn is a seasoned professional with over 15 years of experience developing and implementing digital solutions that enhance user experience and satisfaction. He is currently an Innovation Advisor at XRSI, a non-profit organization that promotes responsible and ethical use of emerging technologies such as metaverse, augmented reality, mixed reality, and virtual reality.
Quinn holds an MBA from Thunderbird Garvin School of International Management and has extensive knowledge and skills in mobile/gaming app development, emerging technologies, UX/UI design, analytics, Gen AI governance, and product development. He is a frequent guest speaker at mobile summits, where he shares his insights and best practices for improving user adoption and retention by leveraging data and innovation. He is passionate about discovering and creating digital technologies that provide the best experience for people using emerging technologies.
[1] Iron Man, directed by Jon Favreau, Paramount Pictures, May 2, 2008, running time 126 min. https://en.wikipedia.org/wiki/Iron_Man_(2008_film)
[2] Simon Greenwold, "Spatial Computing," MIT, 2003. https://acg.media.mit.edu/people/simong/thesis/SpatialComputing.pdf
[3] Her, written and directed by Spike Jonze, Warner Bros. Pictures, October 12, 2013, running time 126 min. https://en.wikipedia.org/wiki/Her_(film)
[4] Ready Player One, directed by Steven Spielberg, Warner Bros. Pictures, March 2018, running time 140 min. https://en.wikipedia.org/wiki/Ready_Player_One_(film)
Author's update: I have had quite a few questions about a follow-up paper. One is in the works and should be ready by the end of January. Some of the concepts regarding content introduced in this paper are part of a PoC I am working on with a small team in stealth mode.