Welcome to episode 53 of Lost in Immersion, a new week in the game. Fabien, can you talk about your topic, please?

Hello, yeah, thanks. So, 53 — that's more than a year now, I'll have to celebrate that somehow. Today I want to talk about WebXR: I tested some WebXR pages on the Apple Vision Pro and on the Quest 3, and I'll give you an overview of the results.

First, a quick introduction to WebXR. When we talk about AR and VR, it's mostly about apps — something you install on the Vision Pro, on your device, on your smartphone. But with the increase in browser performance, XR features — AR, VR, and all these kinds of XR interactions — have been available in browsers for many years now. And now that we have these fantastic headsets, WebXR is available on some of them too. On the Quest 3 it's already enabled, and on the Apple Vision Pro there is a debug flag that needs to be turned on. So it's not an always-on feature, but it's pretty easy to do — doable even for someone without technical skills.

I was really curious to see how WebXR would behave with the hands and the eye tracking, so I ran a couple of tests. This is a sample from Three.js, which is one of the most used WebGL libraries. As you can see, it works very nicely out of the box: I can pinch and drag and look at the objects, and the hands are outlined. So, just out of the box, it works really nicely in VR (there's a minimal code sketch of this kind of page at the end of this segment).

Then there is another, more advanced library for AR called 8th Wall, which is one of the most famous web AR libraries, and they have a few samples. What's funny is that they have access to the inside camera, and the face tracking works — it's actually quite fun to apply face effects to the Persona video. You can see it here, and it works really nicely. The face tracking works really well, so that's a win.

Then I tried some VR examples. On this one, I couldn't do anything, but I'm not really worried about it. My guess is that they are just missing some input or touch handling in their samples. I don't think it's a headset problem; I think it's an implementation problem on their side. Other than that, the experience looks really great — very fluid, no FPS drops. Hand tracking works as well: in the Persona view you have hands. It's a bit difficult to actually use, because the hands are not really visible in the video, as you can see here, but the hand tracking does work.

On this one, same problem: the VR experience worked really nicely, but I wasn't able to interact with the buttons. Again, I think it's just an implementation problem. Same here — I tried to move and scale the object, but it didn't work. Finally, 8th Wall also has an ear detection feature. I realize I didn't record it properly, but you can see very briefly that it works as well. And then I loaded this sample just to see how performance behaves in the WebXR browser: it runs at 50 FPS, with absolutely no drop in frame rate.

So overall: VR works, and AR works on the inside camera, but AR is not enabled on the outside cameras. Now, if we move to the Quest 3, if I manage to... yes — AR actually works on the Quest 3. Here, I'm playing with the Three.js AR samples.
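For readers who want to see what such a page looks like under the hood, here is a minimal sketch of a Three.js WebXR scene with hand tracking. It is an illustration only — not the exact sample from the episode — and it assumes the standard three.js "addons" module paths:

```javascript
// Minimal Three.js WebXR scene with hand tracking — a sketch, not the exact
// sample shown in the episode. Assumes an import map exposing 'three' and
// the official addons.
import * as THREE from 'three';
import { VRButton } from 'three/addons/webxr/VRButton.js';
import { XRHandModelFactory } from 'three/addons/webxr/XRHandModelFactory.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(70, innerWidth / innerHeight, 0.01, 20);
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
renderer.xr.enabled = true; // turn on WebXR rendering
document.body.appendChild(renderer.domElement);

// The button requests an 'immersive-vr' session; hand tracking is optional,
// so the same page still runs on devices without it.
document.body.appendChild(VRButton.createButton(renderer, {
  optionalFeatures: ['hand-tracking'],
}));

// Outlined hand models, like the ones visible in the demo video.
const handFactory = new XRHandModelFactory();
for (const i of [0, 1]) {
  const hand = renderer.xr.getHand(i);
  hand.add(handFactory.createHandModel(hand, 'mesh'));
  scene.add(hand);
}

// Something to look at and pinch.
scene.add(new THREE.HemisphereLight(0xffffff, 0x444444, 1));
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(0.2, 0.2, 0.2),
  new THREE.MeshStandardMaterial({ color: 0x44aa88 })
);
cube.position.set(0, 1.4, -0.6);
scene.add(cube);

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```

On the Vision Pro, the look-and-pinch gesture arrives through the standard WebXR select events (as a "transient-pointer" input source), which is why unmodified samples like this can work without any Apple-specific code.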
So, the Three.js WebXR AR examples. There is a paint example — you can paint in 3D, which is actually pretty cool. There is a plane detection feature as well (sketched in code below), which works really nicely: it detects the walls and the floor, and you can see right after how it maps my room. Oops, sorry. Here you can see that the walls and the floor of my room are mapped really nicely.

So overall, the experience was better in terms of features and capabilities on the Quest 3, even though the quality and the hardware capabilities are of course higher on the Apple Vision Pro. It's too bad that the outside cameras are not available there and AR is not enabled. That was my test of WebXR, and I'm pretty happy about it: we will now be able to build one experience that you can play on desktop, on mobile, and on headsets, all from the same website. So that's pretty cool. Do you have any comments or questions? I'll start with you, Seb, as usual.

SÉBASTIEN SPAS: I just wonder how it felt to use only your hands and your eyes to interact in the WebXR experience on the Vision Pro.

FABIEN LE GUILLARM: When it works, it works really well. Like in the first example I showed — maybe it's because I'm getting used to the interface of using the eyes and the pinch — it felt really natural and very easy to interact.

SÉBASTIEN SPAS: And it was only a grab at distance, right? You were looking at something — there was no highlight on the object, but you could pinch on your side and it would grab it, and then moving your hand let you position it in space. Is that how the interaction worked?

FABIEN LE GUILLARM: Yes. And in addition, you have a ray that shows the axis, like a projection. But yeah, it felt really, really natural to just look at something and pinch to grab it.

SÉBASTIEN SPAS: And what about grabbing one of the objects directly — was that feasible or not? Really putting your hand on the object and closing it to grab?

FABIEN LE GUILLARM: That's not how the example is built — it just uses the gaze. But I think it should be possible, because the position of the hands is known.

SÉBASTIEN SPAS: Okay. I'm asking because on the Quest 3, I tested even the latest demo they provided, Cryptic Cabinet, an escape game that you can play in a shared space with someone else, and it works great. But all the interactions are done with the hands: you have to grab objects, position them on tables, and move them in space to solve the puzzles. And most of the time, that's where the issues come from, because of the accuracy of grabbing an object — not everyone makes the same gesture. A lot of the time, objects get grabbed and then fall to the floor, or don't get grabbed at all and you have to repeat the gesture several times. So that's still something I'm not sure how to implement in our projects: is it better to use distance plus a gesture to grab something, or to use direct grabbing and wait for Meta or Apple to make it more reliable?

But overall, it's very nice that both headsets make this available. Like you said, we can develop one app — one web project — for all the platforms, and that makes it easier for us to sell this to the companies we work with.
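For the curious, the plane detection shown here is exposed to web pages through the WebXR plane detection module, which the Quest browser implements. Here is a rough sketch of how a page reads the detected planes each frame — an illustration under that assumption, not code from the demo:

```javascript
// Sketch: enter AR with plane detection and log the detected planes each
// frame. Assumes a browser implementing the WebXR plane detection module
// (the Quest browser does). Session requests must come from a user gesture.
document.querySelector('#enter-ar').addEventListener('click', async () => {
  const session = await navigator.xr.requestSession('immersive-ar', {
    requiredFeatures: ['plane-detection'],
  });

  // An immersive session needs a base layer before frames are delivered.
  const canvas = document.createElement('canvas');
  const gl = canvas.getContext('webgl', { xrCompatible: true });
  session.updateRenderState({ baseLayer: new XRWebGLLayer(session, gl) });

  const refSpace = await session.requestReferenceSpace('local-floor');

  session.requestAnimationFrame(function onFrame(time, frame) {
    // detectedPlanes is a set of XRPlane objects: walls, floor, tables...
    for (const plane of frame.detectedPlanes ?? []) {
      const pose = frame.getPose(plane.planeSpace, refSpace);
      if (pose) {
        // plane.polygon holds the boundary points in the plane's own space;
        // plane.orientation is 'horizontal' or 'vertical'.
        console.log(plane.orientation, plane.polygon.length, 'boundary points');
      }
    }
    frame.session.requestAnimationFrame(onFrame);
  });
});
```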
Yeah. One thing about the Vision Pro — actually, I'll play the video again so you can see — is that in the apps I've tested there is often a lag between the gesture and the object that moves. I think it may be intentional, to avoid jitter or something like that. But here there is absolutely no lag in the way this experience was implemented, so it actually felt pretty nice to have zero delay when interacting with an object.

Okay, thank you. First of all, it's very interesting to see that Apple delivered: as you may remember, they announced WebXR support a few months before the release of the Apple Vision Pro, along with the partnership with One Metaverse — which we haven't tried yet, I guess. So they succeeded in implementing WebXR, and it's very nice to see it working out of the box. As you mentioned, the issue now is basically implementation — the development and integration of these new features. As for the Meta Quest 3, nothing very surprising: it works, and we've worked with the previous headsets, especially in VR. I've developed a few apps for them, and it really runs smoothly.

I guess the next step for this experimentation would be to try the Unity WebXR export, because VR and AR support can also be done through Unity. I don't know whether it supports the Apple Vision Pro, but it could be an easy way to bring a small presentation or demonstration to the device, because I've always thought the WebXR route is less painful than Xcode, iOS, and all that bumpy road ahead of you. So if the performance is great and, as you mentioned, the tracking is great as well, it would be nice to see whether the Unity implementation is as good as the native Three.js one. So that's a new mission for you.

I guess this also confirms that WebXR has a bright future. For years we've been watching this technology, and I think it's the best option we have for deployment to our clients, and for demonstrations as well — it's just that until now, the performance and the tracking were not that good. One last thing: if you remember, we talked about the metaverse at some point, and one of its key definitions was that it was browser-based. Of course, we all thought about WebXR for that part. I guess it's really taking a new step now, and maybe we are a step closer to that definition of the metaverse.

Sorry — yeah, I totally agree on the release lifecycle. Instead of going the route of creating an app and publishing it, with all the steps one needs to go through — approval from either Apple or the Oculus Store before you can publish anything — with WebXR there are fewer roadblocks. Of course, there is also a quality issue: someone can release something without really testing it. But anyway, I think it's really good to empower developers to release on many platforms.

Although one thing to consider is that it's harder for the user to find this kind of experience, because they have to go to the browser and type in the address themselves. They can't scan a QR code right now, I guess — at least on the Quest 3, there's no way to directly access a webpage like that. On a smartphone, it's easy to scan a QR code and be taken straight to the page where the experience runs; on a headset, right now, it's a bit complex.
Well, this is where Apple wins, because if I have the same account on the Vision Pro and on my iPhone, I can just copy and paste the link to the Vision Pro, or AirDrop it. But yeah, I agree. Maybe it's a good business idea, I don't know — there is no store for WebXR experiences yet.

Okay. Anything more about WebXR? No? Okay, so let's jump to Seb.

All right. Today I want to talk about the first video shared — at least the first I've seen — comparing spatial memories watched on the Quest 3 with how they look on the Vision Pro. What's interesting is that the person who did the shooting did it in the same place where the video is rendered, so the floor looks the same and the walls look the same. The experience really looks a lot like what we saw in Minority Report, and it looks really nice on both headsets to be able to walk around and get different points of view on the video you shot.

I guess we can also see the current limitation of spatial memories: if you are not at the point of view from which the video was taken, you get these shadows — the same thing that happens when you do a scan and don't have all the points. So what's really the point, if you have to stand at exactly the same angle? But it's logical: you can't have a 360 capture of something you're simply looking at — you don't have a rig around you. So of course you can't get a 360 view of what you were looking at. But yeah, I'm not very impressed, and I wonder what the use case is, because you're looking at a not-that-great image of a memory. I guess it has to be improved, or they have to find something that makes it enjoyable.

Maybe the effect we've just seen, with the halo around it, corrects that a bit — I think we saw fewer shadows with this effect than with the other one.

Like putting it in a frame? I'm not sure these are spatial videos — I mean, spatial videos taken either on an iPhone or an Apple Vision Pro — because it doesn't look like that to me.

That's what they say in the post. Maybe they are using them but displaying them another way than the default on the Vision Pro.

Okay, because on the Vision Pro you don't have the ability to see it from the side. It's really like, you know...

Looking inside a frame?

Yeah. I mean, you can put it full screen, but there's just a small wobble, an angle you can look around. It's purely a stereoscopic video, not an immersive one.

Okay. So, more to dig into, to check how they did that. But I agree with you, Guillaume. It's nice, but unless you're doing, say, an escape game and you want to see what an actor did earlier in the room you're in, from a certain point of view, I don't see a big enough benefit to make me want to watch that. Having crappy polygons at the edges, and real shadows in the background, makes the memory not look nice.

Yeah. So, like you said, I don't see the point. If this is the way they want to go, I guess they'll keep improving it, as they did with the Persona on the Apple Vision Pro. We just have to be patient while they improve each feature they showcased — I guess they'll invest in the ones they believe in.

The other video I wanted to share is this one, where there are just food and ingredients on a kitchen table, using WebAR with your phone, I guess. Yeah.
It analyses with AI what the ingredients are, finds a recipe for you with those ingredients, and displays pictures and step-by-step instructions on what to do (there's a rough code sketch of this kind of pipeline after this exchange). Here, it's mostly the interface that looks quite nice to me. On a smartphone it's not really usable, but with the Vision Pro or the Quest 3 it starts to get interesting for people who don't know how to cook: it really shows how the ingredients should be cut, where they should be placed, which workspace to use, and so on. So I find the use case interesting. I won't be the one using it, but for other things — like assembling a device, with the steps displayed this way to explain which part needs to be removed before which other part — that could be useful to me on occasion. I prefer improvising when it comes to cooking.

Yeah, it's quite interesting how much traction the cooking use case has been getting since the Apple Vision Pro. The goal seems to be to have ingredients on your table, or in your cupboards or wherever you keep your food, so that the headset knows exactly what's in your kitchen or your fridge and what you can do with it. And it's interesting because the added value for AR glasses is not that high: you can open your fridge and know exactly what's in it. You don't need special equipment to know that this is a tomato and those are carrots — unless you're a child. And how to cut carrots is not that complicated either. So it's very interesting that a very simple everyday task could be the goal for AR or spatial computing, when there are far more complicated tasks that are genuinely hard to do. But I guess it's the whole idea of the scenario, not this specific application, that is interesting here. If you want to be famous, you have to build a cooking coach in AR.

Yeah, I have two thoughts on that. The first one is — I don't know if I mentioned it here — I had the opportunity to try the Meta Ray-Ban glasses. There are no screens; it's just a microphone and voice. I was a bit doubtful before I tried them, and I have to say I can now see use cases I didn't see before. Maybe cooking is a good one: "Hey Meta, give me a recipe with this," and the recipe is given by voice — or, when there are screens, shown the way it's showcased here. Or you're walking outside: what is this building? Or you ask for a translation: "Please translate this menu." These kinds of use cases, yeah, I start to see some sense in them.

And with cooking, the nice thing is that it's quite easy to apply generative AI, because that's what generative AI is good at: you put in a lot of different things and it comes up with something that actually kind of makes sense. I once asked ChatGPT for a recipe, and it was actually pretty good — I only did it once, so don't quote me on that. I guess it's much more complex for industrial use cases, where the scenario is very precise and you need to be exactly 100% sure. You cannot take a picture of a very complex engine and ask ChatGPT how to replace a part, because maybe ChatGPT will not know how to do it — whereas cooking is so common that the current models already know most of the recipes. So yeah, I think there is a real difference there.

Something else we can foresee here is how robots will behave when we put ingredients in front of them and ask them to cook: these would be the steps they go through to make your recipe, or a new recipe generated by AI. So that's interesting.

The thing that is more useful — more like the use case you were talking about, Fab — is that PTC released a new version of their tracking system that allows different users to be in the same space and share an experience together, building an object, checking the design, and assembling it together. It's interesting to see the companies from the augmented reality world moving to the Vision Pro and adapting their tools to this new use case. Here they use the Personas, so two — actually three — people in the same space can share the same experience and really review a design together. For me, that's a genuinely useful use case: you can have a designer at a distance from the engineer, and they can meet and discuss the design and the engineering to make sure everything works. I don't know if you have any thoughts on that.

Yeah, it's really a great evolution of what was done with the HoloLens back in the day. It's not a new use case, but the way it's done is clearly at another level: it's starting to be usable without all the limitations we had with the HoloLens. So I guess we're on a good road with this one. And maybe we're just getting used to the Personas, but I don't find them shocking anymore. They've improved, and it's no longer that weird sensation we had at the beginning — though part of it is getting used to them. And here it's only a user with short hair, so maybe that's part of why it works here.

Yeah, I tried an app — I forget the name right now — where I was able to see other people as Personas, and I have to say it works really nicely. I think I'm getting used to it as well; there's still a bit of, you know, uncanny valley there. One thing I found pretty strange with the Apple Vision Pro Personas is that sometimes it shows people from an angle. I'm not sure yet how that happens, so I need to dig into it, but it was pretty disturbing — I was seeing my friend from an angle, with some kind of glitch. Otherwise, the feeling of presence is actually cool, so I really like the Personas. And the floating Persona you're showing here is even better than the one in a frame that we had before. Being able to see the hands of the user interacting with the object is quite nice, too.

And talking about the quality of personas — maybe it's not there yet, but this is a 3D Gaussian splatting capture done with a complete rig of high-end cameras. As you can see, it's not animated, it's not moving; it's just a 3D Gaussian splat, so you can move around and position the camera yourself to check the model from any angle you want. But the results are starting to be really amazing, so I can't wait for this to be available in real time on a headset.

Yeah, because we know that with this kind of input — I don't know if you remember the HUGS paper by Apple — they can rig this 3D Gaussian rendering to turn it into avatars.
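As a rough illustration of the pipeline described above — a photo of the table goes in, ingredients and numbered steps come out — here is a sketch using the official "openai" Node.js client with a vision-capable model. The model name and the prompt are placeholder assumptions, not what the product in the video actually uses:

```javascript
// Sketch of the ingredients-to-recipe idea: send a photo to a vision-capable
// model and ask for structured steps. Illustrative only — the demo in the
// video is a different product. Uses the official 'openai' Node.js client.
import OpenAI from 'openai';

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function recipeFromPhoto(imageUrl) {
  const response = await client.chat.completions.create({
    model: 'gpt-4o', // placeholder: any vision-capable model
    messages: [{
      role: 'user',
      content: [
        {
          type: 'text',
          text: 'List the ingredients you can see, then propose one recipe ' +
                'using only them, as numbered steps with cutting instructions.',
        },
        { type: 'image_url', image_url: { url: imageUrl } },
      ],
    }],
  });
  return response.choices[0].message.content;
}

// The headset-specific part — anchoring each step next to the right
// ingredient — is what WebXR or ARKit would supply on top of this.
console.log(await recipeFromPhoto('https://example.com/kitchen-table.jpg'));
```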
So the next step is very close. But I mean, yeah, the level of detail in the hair — it starts to be really amazing. And that's it for me, if you have any comments on that.

Okay, so last topic. Today we are celebrating an anniversary: 10 years ago, Facebook — as it was called at the time — bought Oculus, and they made a dedicated web page about it. But despite the fact that they are... well, here is the DK2 release. To my mind, as I experienced it, the DK2 was a great success: it came just after the DK1, which is the basis of the whole VR wave and this new era. The DK2 had this great feature of a USB port on the front, which allowed developers to add the Leap Motion and other accessories. It was very customizable, and developers and VR and 3D artists really liked that. So the DK2 was a success, to my knowledge.

After that — I don't know if you remember — we had the first real official Meta/Facebook Oculus headset, the Oculus Rift. And I think the Rift was not a great headset, because they completely lost the room-scale battle to the HTC Vive. I don't know if you remember, but at that time everybody had the HTC Vive because of its ability to do room scale, and I guess they lost that battle.

After that, they did the Rift S. I don't know if you remember the Rift S — I couldn't even picture it before I pulled this image back up. The headset came in 2019 with inside-out tracking, a feature Microsoft had already shipped in 2016 with its mixed reality headsets. So once again, they were late to release the feature that was requested at the time.

They did the Oculus Go, which was something of a success for pros like us. I remember that at trade shows and professional demonstrations the Oculus Go was used a lot, because it offered a kind of kiosk mode for us. It was easy to set up and to let people try VR, because at that time the idea was still to make people discover VR through trade shows.

After that, I guess the next great success was the Quest 1, because it brought wireless, standalone VR. So they took back the lead in VR with the Quest and then the Quest 2. Then they had a not-so-great success with the Quest Pro, but it showed them that mixed reality was something people were interested in. And it's strange to see that through a failure, sort of, they found a new way of doing headsets, and it gave us the Quest 3.

So Meta's road since this acquisition 10 years ago was not a straight line to success — it was quite bumpy, and across these 10 years I count just three real successes. Not that many. But something very, very interesting is the 10-year video they released. They show a clip from 2016 where Mark Zuckerberg says that the most important thing is to create applications so that people and users embrace VR. And I guess they completely lost that message. As we keep saying on this podcast, now that we have very performant, efficient headsets at a very affordable price, the main goal is those killer apps with genuinely interesting daily uses, so that people keep their headsets and stay immersed on a daily basis, the way they'd like to. For the future, we've seen that the smart glasses you tried, Fabien, are one of the directions they'd like to take.
But they just announced that their new headset, which should be a competitor to the Apple Vision Pro, should be a fusion between the smart glasses and the Meta Quest 3, and it seems it would be released in May 2025. We should get more information at Meta Connect in a few months, but yeah, it should be next year, and we'll see what they can do with this merge — this fusion — of the two devices. So, what do you think? Do you think my reading of these past 10 years is correct, or do you have anything to add?

I think you're correct — that's exactly what's going on. And I saw in Paris and in Barcelona that they are still advertising the Quest Pro in the street. I'm sharing my screen right now if you want to show it. All the ads in the street target professionals with the Quest Pro; there are no Quest 3 ads. So, like we said before, their way of communicating is always a bit strange to us. It seems they are trying to push both ways: still pushing the Quest Pro for professionals and the Quest 3 for gaming and more casual usage. And I'm not sure I agree with them — I think the Quest 3 is also great for pros. And like you said, it seems they're moving toward more affordable headsets instead of also looking, on the side, for the better apps and use cases that make people want to wear the headset to play or work in it. And that's it for me. What about you, Fab?

Yeah, it's amusing that, if I'm correct, the HTC Vive also has its birthday in April — the Vive, which was, and still is, Meta's biggest competitor. And I think you said it all. One thing I can add is that the Quest 3 is really, really a great device. I've been handing it to people right after they tried the Vision Pro, and they immediately see the difference in the capabilities of the two. For the price, the weight, the performance, the field of view, I think the Quest 3 is their best achievement so far. And as I said before, I'm really curious to see where this smart glasses trend goes, and whether there are actually, as you mentioned, real daily killer-app uses — beyond cooking tomatoes and sausages.

About the Quest 3: they are releasing, I think today, the v64 update, which from what they announced seems to improve the quality of the pass-through video. So that will be interesting to check out and discuss next week. Also, I'm going to Laval Virtual, the event in France where everything is about XR experiences, so I hope I'll be able to try the Ray-Ban glasses and their AI tools and see how the interaction works — and whether, like you, Fab, I'd be keen to use them. Right now I'm pretty sure that's not how I'd like to interact with a device, but we'll see; maybe I'll change my mind after trying them.

You're muted, I think. Yeah. So I guess we don't have much more to add for today — it was a long podcast. See you next week for another episode, and have a good one. See you guys, have a nice week. Bye.

Credits

Podcast hosted by Guillaume Brincin, Fabien Le Guillarm, and Sébastien Spas.