I have always been a fan of mixed reality, and over the years I have tried to push the boundaries of this technology: in 2019 my team and I launched a mixed reality fitness game before MR was cool, I’ve published many experiments about blending the physical with the virtual, and I advocated for months for giving developers camera access. I think MR is cool, but still, after all these years, I have to admit that the technology hasn’t had the success I hoped for. Very few XR applications are truly “mixed reality” rather than using passthrough only as their background, and there is no “killer application” of MR: no mixed reality title has reached the success of Beat Saber or Gorilla Tag. I asked myself why and, to foster a debate in the community, came up with some reasons why this is happening. I list them below, in no particular order.
MR must make sense
The use of mixed reality must be justified by a need to blend the real and the virtual: the virtual elements in your room must be there for a reason, and must blend in with your room’s elements. If an MR game is just a tabletop game with a passthrough background, it might as well be a VR game, because the real world is there only as the background, and there is no real use of MR.
If an MR game is about fighting medieval warriors in your room, I may question why there are medieval warriors in your modern house: the real world, in this case, even acts as something that “ruins the magic” of the XR game, and the game would be better set in a medieval world in virtual reality.
A football match that happens as a 3D diorama on my table is a cool technical feature, but, first of all, it is not connected to my real space in any way. And I would much rather enter the football field and be there with the players in VR than just see it as a small 3D representation.
Most of the time, MR is used to give awareness of the surroundings, but it’s not used as true mixed reality, so it’s less appealing. And finding use cases where using the world around you is truly useful, and where it wouldn’t be better to just teleport the player somewhere else, is very complicated.
MR headsets are unsuitable for most tasks
The mixed reality offered by Quest 3 and Apple Vision Pro is quite good for enjoying a mixed reality experience, but it is not good at all for enjoying your life. The resolution of passthrough AR is lower than what your eyes can perceive directly, so it is not suitable for certain activities.
Let me give you an example. Some time ago, a quite cute cooking MR application developed during a hackathon went viral in the VR communities. But would you ever trust yourself to cut potatoes with a very sharp knife if your vision is a bit blurred and has a small lag? I’m not so sure. Another typical MR idea is enhancing what you see on your TV with special effects happening around it. But would you ever watch your TV through the passthrough vision of your headset? You risk ruining your experience and seeing the movie a bit blurred. Not to mention the fact that headsets are uncomfortable to wear for a long time, so only a small number of people would keep a headset on for 2 hours to watch a film this way.
The fact that headsets are still in an early stage also makes them unsuitable for some environments: again, cooking in MR is cool, until the frying pan squirts hot oil on your expensive Vision Pro, and then you decide you’ll spend your life only microwaving precooked frozen dishes. Another popular video of the past was Daniel Beauchamp’s mixed reality gamification of house chores. But during the summer, would you ever wear a bulky Quest 3 while mopping the floor? I’m already sweating liters when I do this chore with nothing on my head; I’m not sure I would do it with an MR headset on.
Many use cases will only be enabled when the materials and form factors of headsets become suitable for them. This is again a problem that will solve itself with time, but it certainly limits the number of applications that can be successfully developed today.
MR apps can not run in parallel
Last year, I did some experimentation with mixed reality, and the video of mine that became the most popular was this one, where I transformed the scenery in front of my window into a low-poly forest.
Choosing the landscape you see from your window is a very cool mixed reality use case, and I’m sure there will be popular applications doing that in the future. But with current mixed reality operating systems, you can not run such an experience in parallel with others: if you want to see these landscape modifications, you have to launch a dedicated app for them. And no one is interested in launching an application whose only purpose is modifying what you see out of the window and nothing more… it can be fun for 10 minutes, then that’s it.
Many MR use cases require the experience to run in parallel with others: the landscape modifications would be cool if they could run as a background service, with every application that uses passthrough showing you the modified landscape. That way, it could be a killer application. This is just an example, but other use cases are like that: for instance, recognizing faces, or having a weather widget attached to your window.
I guess the headset manufacturers understood this, and that’s why we are seeing some steps in this direction: Apple Vision Pro allows multiple MR applications to run together, plus it has just introduced MR widgets. Meta teased Augments (which are basically AR widgets) some years ago, and even if the project has been delayed, Meta is still working on it. I’m personally quite bullish about AR widgets/augments… I believe that one day (in the usual 5-to-10-year period), they will be a huge business. If we all use mixed reality, the first thing we’ll see when we turn on our glasses is the widgets we put in our home, so there will be a lot of eyes on these applications. We’re still in the early stages on this side, but things are improving.
MR headsets have friction
Some very useful MR applications would require you to already have the headset on your face. Let’s take the weather widget example again: imagine you have developed a wonderful mixed reality weather widget that gives you the forecast with a lot of immersive, fancy effects that you love. When you want to check the weather for tomorrow, would you really go find your headset in your home, put it on your head, wait for it to boot, and then launch the weather app? No. It is much easier to take the phone out of your pocket and launch the weather app with a swipe and a tap.
Currently, to succeed, an MR app needs to provide enough value to convince the user to go through all the pain of putting a headset on. This hurts MR applications a lot. Think also about virtual pets: they will truly make sense only when you wear your headset all the time and the pet is always there with you, across all the MR productivity apps you may launch.
This problem will go away when people keep XR glasses on their heads all the time, which is starting to happen with smart glasses.
Some MR use cases are for outdoor use
MR headsets are made for indoor use, but some AR/MR use cases need to happen outdoors. We have seen some people pulling stunts wearing the Vision Pro or the Quest 3 in the streets, but the reality is that going outside with these devices makes people look at us like weirdos. For use cases that just require understanding of the world around you, the best technology to use today is smart glasses, and MR headsets will converge with them over time.
Responsiveness is a pain for developers
If you are a developer working in MR, you may imagine a great MR experience that works very well in your room, but how can you be sure that it will work the same in other people’s houses? Their rooms may be smaller or larger. They may not have a desk, a window, or a sofa. This means that your mixed reality experience may fail because it is fun only in some particular room configurations. Not to mention the fact that not all users want to allow apps to access their room layout, making MR developers’ jobs even more complicated.
Years ago, David Heaney wrote an interesting article on Upload where he compared the “responsive” design of websites, which should work on all devices (phones, tablets, PCs, etc.), with the “responsive” design needed by mixed reality apps to adapt and work properly in every room they are run in. The problem is: responsiveness for websites happens on devices that are all rectangular, and there are now many libraries and plugins that accommodate it out of the box. Responsiveness in a physical space is much more unpredictable: there are many more variables to take into account (the shape of the room, its dimensions, the objects contained in it, etc.), and currently there are few tools that help developers with it (SyncReality was offering an SDK for this, but the company has since pivoted).
I guess that long-term, the platform holders (e.g., Meta) or the game engines (e.g., Unity) will offer tools to help with this, but today it is an open problem that limits how developers can think about their MR apps. There are workarounds, though: the developers of Starship Home took a very smart approach by letting users themselves place the elements of the game in the most suitable positions in their rooms; this way, the system does not compute the “responsiveness” itself: the user does the heavy lifting for it.
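To make the idea concrete, the user-driven approach can be complemented by a simple automatic fallback. This is a minimal sketch, not any real SDK (all names here are hypothetical): the app takes the labeled surfaces that a scene-understanding API would return, tries its preferred anchor types in order, and hands placement over to the user only when nothing suitable is found.

```python
from dataclasses import dataclass

@dataclass
class Surface:
    label: str      # e.g. "table", "desk", "floor", "wall"
    area_m2: float  # usable area of the surface

def choose_anchor(surfaces, preferred=("table", "desk", "floor"), min_area=0.5):
    """Pick the first preferred surface large enough to host the content.

    Returns None when no surface fits, so the app can fall back to
    asking the user to place the content manually (the Starship Home way).
    """
    by_label = {}
    for s in surfaces:
        if s.area_m2 >= min_area:
            by_label.setdefault(s.label, s)  # keep the first fitting surface per label
    for label in preferred:
        if label in by_label:
            return by_label[label]
    return None  # no suitable surface: let the user do the placement

# A room whose table is too small: the content falls back to the floor.
room = [Surface("wall", 8.0), Surface("floor", 12.0), Surface("table", 0.3)]
anchor = choose_anchor(room)
print(anchor.label)  # → floor
```

The design choice is the ordered fallback list plus the explicit `None` case: the heuristic covers the common rooms automatically, while unusual rooms degrade gracefully into manual placement instead of breaking the experience.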
Multiplayer is a pain
Meeting with people in mixed reality doesn’t truly make sense all the time. I’ve already investigated this in another article of mine, and I suggest you have a look at it. To summarize the issue: if we meet in virtual reality, we can remotely be in the same virtual space and live an adventure together. This is why games like Fortnite and Roblox became meeting places for kids after school. But if we are in mixed reality, I see you as an avatar in my room, and you see me in your room, which makes no sense at all, because we do not have a shared context for our meeting. There are ways through which you can make MR meetings work well, like, for instance, when you are all seated at a table to play Demeo, or you see your rooms merging in Party Versus, but they are limited use cases.
XR is now flooded with kids playing Horizon Worlds, Gorilla Tag, or Animal Company together with their friends after school, and I can not imagine any of these three games happening in mixed reality. XR is mostly about multiplayer today, and mixed reality is not the best environment for multiplayer. This hurts its potential.
The development tools are still immature
Tools to develop mixed reality applications still need to improve. For instance, the Meta Quest SDK still only lets you use a static scan of the room. Camera access was very limited on all platforms until a few months ago, and is still locked on some of them (e.g., Pico). Many developers have complained about Meta Quest’s tracking drifting over time, meaning that even if at the beginning you had a perfect match between the real and the virtual world, over time the virtual elements tend to move away from their initial location. There should also be more tools that connect mixed reality with artificial intelligence out of the box, without the developer having to wire this up by hand. There is no way to “remove” physical elements from the room, which would be useful, for instance, to try new pieces of furniture without seeing the current ones. And there is no way to fully replace your visuals with something else: on Quest, I can only apply visual filters to the central part of the passthrough, not to the periphery of my vision.
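The tracking-drift issue in particular has a common mitigation pattern: instead of placing virtual content once in world coordinates, you express it relative to a spatial anchor and recompute its pose from the anchor every frame, so when the runtime re-localizes the anchor, the content snaps back to the real world. A minimal sketch of the idea (positions only, no rotations, and all names hypothetical rather than from any real SDK):

```python
def anchor_to_world(anchor_pose, local_offset):
    """Compose the anchor's currently tracked position with the content's
    fixed offset from that anchor. Poses are (x, y, z) translations only,
    to keep the sketch short."""
    ax, ay, az = anchor_pose
    ox, oy, oz = local_offset
    return (ax + ox, ay + oy, az + oz)

# Frame 1: the anchor sits where it was placed.
pos = anchor_to_world((1.0, 0.0, 2.0), (0.0, 0.5, 0.0))
print(pos)  # (1.0, 0.5, 2.0)

# Frame N: tracking drifted, but the runtime re-localized the anchor;
# because the content is defined relative to it, the content follows
# the anchor and stays glued to the real-world object.
pos = anchor_to_world((1.05, 0.0, 2.02), (0.0, 0.5, 0.0))
print(pos)  # (1.05, 0.5, 2.02)
```

The point is simply that content defined in anchor space inherits every correction the runtime makes to the anchor, while content baked into world space silently accumulates the drift.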
These problems are going away over time: things are already getting better, with Meta Quest having the building blocks to develop MR applications in an easier way, and the Niantic Spatial SDK already allowing for real-time meshing of the room around you.
Small-ish market
The number of mixed reality devices is far smaller than that of virtual reality devices. Quest 3 and Quest 3S have had decent sales, and the Vision Pro has had its little share of fans, too, but they are not at all comparable with the huge success of Quest 2. This is already a problem for the performance of VR games, with many game studios forced to lower the graphical quality of their titles to fit the more popular Quest 2, and it is an even bigger problem for mixed reality: Quest 2 features some mixed reality, but it can’t offer an experience of good quality.
Making a pure mixed reality application means targeting a small segment of the market, which is certainly not appealing to most game studios. And without proper investments, it’s hard to make the “killer app of MR”. All the new headsets entering the market are now hybrid MR headsets, so this problem will solve itself, but it requires time.
These are some reasons I was able to come up with. If you have others that come to your mind, please add them in the comments below. I would like to spark a debate about it.
Some of the issues I mentioned will be solved over time, and others will stay as they are because they are inherent in the technology. In my opinion, with MR headsets and AI/AR glasses merging into a single form factor, the tools improving, and the market expanding, we’ll see an AR/MR killer app at a certain point. But as with everything related to XR, it will need more time than we would like…
Disclaimer: this blog contains advertisement and affiliate links to sustain itself. If you click on an affiliate link, I’ll be very happy because I’ll earn a small commission on your purchase. You can find my boring full disclosure here.