Welcome to episode 52 of Lost in Immersion, your weekly 45-minute stream about innovation. As VR and AR veterans, we discuss the latest news of the immersive industry. So guys, I hope you didn't eat too much chocolate. Let's start with you, Fabien.

Hello, thanks. Today I want to talk about a piece of Quest 3 news that surfaced during MWC: someone from Meta said during a conference that the Quest 3 has higher retention than the Quest 2 for most users. Unfortunately, they are not giving numbers, so we just have to trust them on that. I thought it was an interesting data point for us to discuss, and what I would like to dig into is: why would that be? What really makes the difference on the Quest 3 that keeps users in the headset? Because we all know one of the issues with the Quest 2 was that people would buy it, maybe use it once or twice, and then it would just gather dust on a shelf.

So I have a few ideas, and I have to confess that I actually use mine more - for gaming, but also for work and projects. I think the first factor is the quality. Of course it's not Vision Pro quality, but still, the field of view, the lenses, and the overall quality of the graphics really make a difference: less latency, higher frame rates, less risk of getting sick, and so on. And then there's the quality of the passthrough. It's really comfortable: I can wear the headset, pick up my phone, open the door, or move a chair. That's a big difference as well. Those would be my two major explanations, because to my understanding there weren't many new apps released specifically for the Quest 3 that would explain the change in retention - no sign of the famous killer app we are all waiting for. So that's my explanation; I'm curious to know what you think. Let's start with you, Seb.

Yes, I think I agree with you. Compared to the Quest 2: the Quest 2 was already nice to have and easy to use, but when you were wearing it you were directly in a VR environment, alone in your space, and the quality was not that great - in terms of power, it was the Game Boy of the VR world. Now with the Quest 3, even though the games are the same, they are a bit higher in quality and performance, and everything is more fluid. Like you said, the screens are really nice, without the Fresnel-lens artifacts. It's much easier to share the headset with someone else and showcase what you're doing, and it takes two seconds to adjust the IPD with the wheel. All of that, plus the fact that the default when you put it on is passthrough - you see the screen and everything in mixed reality - makes it much nicer. And like you said, when you exit a game you are directly back in your environment, so you can still do something without removing the headset. With the Quest 2, you were taking it off all the time. I think that makes it a nicer device to wear, so it gets more retention. I guess that's what's going on here.

Okay, cool. And Guillaume, what do you think?

Yeah, I'm not sure about the graphics. I'm not sure a casual user can feel the difference between the 2 and the 3. Of course I would say it's better, because it is, but the perception of it?
I don't think that's what makes people come back to the device. To me, the major improvement is the mixed reality, especially the color passthrough. It's interesting that this feature first appeared on the Quest Pro, where they were not that invested in it - it was more an improvement of the Guardian at the time. But seeing users take that feature and build something great with it on the augmented reality side, they improved it and put it in the Meta Quest 3. I think this is key for most users: something, as you said, maybe less immersive, but a more hybrid approach between augmented and virtual reality.

Maybe all the buzz around the Apple Vision Pro and spatial computing is also keeping people using these headsets in mixed reality, to see what's coming in the upcoming months. Most of us don't have the budget to buy an Apple Vision Pro right now, so maybe people are keeping their headsets to see what kinds of applications can be done with spatial computing - we saw that Meta is willing to create more content in that direction, if only to compete with the Apple Vision Pro. So that is, I think, one of the main reasons people are keeping the headset. And still, it's very early - the headset was released less than nine months ago - so we should wait a bit longer before declaring that people are keeping it for good. The price tag is higher as well, so maybe people are not willing to sell it right away and lose money, because the depreciation is very fast. So yeah, that's my take on this subject.

Okay. What do you guys think about PC VR? It seems like a lot of users are running SteamVR and similar software with these devices. Do you think that's also driving usage and retention up?

I know gamers want the best graphics quality, and most of them already play on PC, on SteamVR, or on consoles. Unfortunately, the PlayStation VR2 headset is not doing that well in terms of sales, and Sony is not pushing much in that direction anymore - apparently even the number of VR games produced for the Sony platform won't be that high in the future. So right now, if you want to play high-end games, PC VR is the best way to go, and I guess that's why people are going this way. The Quest 3 is really great for that: streaming over Wi-Fi 6 lets you play directly from your PC without a cable. That's really comfortable.

Yeah, we've reached the point where we don't need a cable anymore and still get great performance with PC VR. This is what we've been waiting for for so long - remember the HTC Vive with its wireless add-on, trying to get the power of a PC into a cordless VR headset. Of course this kind of usage is the best one for VR, and now it works - not perfectly, but much, much better. We're at the crossroads of a lot of technologies we've been waiting for: video passthrough, wireless PC rendering. All those pieces are finally here, so of course it brings something to the table, and the experience is better for users. It makes sense that people keep this headset much longer, because the technology is here now. Okay, yeah.
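To put rough orders of magnitude on that wireless streaming point, here is a back-of-the-envelope sketch. Every number in it - resolution, refresh rate, bitrates, throughput - is an illustrative assumption rather than a measured spec:

```csharp
using System;

// Back-of-the-envelope numbers for wireless PC VR streaming.
// All figures are rough assumptions for illustration, not measured specs.
class StreamingBudget
{
    static void Main()
    {
        // Illustrative per-eye render resolution and refresh rate,
        // in the ballpark of a Quest 3 link session.
        const double width = 2064, height = 2208, fps = 90, eyes = 2;
        const double bitsPerPixel = 24; // uncompressed RGB

        double rawMbps = width * height * eyes * fps * bitsPerPixel / 1e6;
        Console.WriteLine($"Uncompressed stereo video: ~{rawMbps:F0} Mbps (~20 Gbps)");

        // Video encoding brings this down by roughly two orders of magnitude;
        // link streaming bitrates are commonly configured in the hundreds of Mbps.
        const double encodedMbps = 400;   // assumed encoded stream
        const double wifi6Mbps = 900;     // optimistic real-world Wi-Fi 6 throughput
        Console.WriteLine(
            $"Encoded stream (~{encodedMbps} Mbps) fits under Wi-Fi 6 (~{wifi6Mbps} Mbps): {encodedMbps < wifi6Mbps}");

        // Round-tripping camera video out to a PC and rendered frames back,
        // as comes up later for mixed reality, roughly doubles the load and
        // adds an encode/decode cycle in each direction.
        Console.WriteLine(
            $"Round trip (out + back): ~{2 * encodedMbps} Mbps plus extra codec latency");
    }
}
```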
And, not to jump between topics, but I saw that some people have managed to make this kind of link work for the Apple Vision Pro. I forgot the name of the software used for that - maybe you remember it. Anyway, I've seen recordings and screenshots of games streamed like this from a PC to the Vision Pro. I'm not sure the difference is really worth the price, but anyway.

Yeah, I saw some Cyberpunk footage streamed to the Apple Vision Pro. The graphics are better, but the field of view is narrower, so I don't know. If you want more immersion, the Quest 3 would be the choice right now; if you want more quality, the Apple Vision Pro.

And one thing about that: I'm really waiting for the next generation to have Wi-Fi 7, so we can also do mixed reality experiences where we stream the video to the computer, do the augmentation there, and send it back to the headset. Right now that's not feasible with Wi-Fi 6. Okay, so yeah, I think that's it. Seb, what's your take today?

I have two. The first one is also about the Quest. I have two Quest 3 at home right now, and this weekend I tried the Cryptic Cabinet demo they shared, and I was able to play it with my wife. It's really nice to be able to... well, the setup of the room was not that easy, so I think there's still some improvement needed to simplify how you scan the room and how you join a game between headsets. But when it's done correctly, you really share the same space and the same environment, and you can play together and hand objects to the other player. For the hand tracking, you need a fairly well-lit environment - we had to turn on a couple of lights in our living room to make it work correctly - but after that it worked perfectly. The experience itself is quite nice: object positions are generated randomly in your space, so it adapts somewhat to the room around you. That could be improved too - a couple of things were not reachable with the controller, so I had to adjust the room a bit and share that with the other headset. But overall, that's a lot of progress in the shared-space mixed reality world. Before, you had to do everything manually and align the room position on both headsets yourself to be sure you were in the same location. It's great that it's improving. Now, the only issue is that you need an internet connection, because the sharing happens online. If they could make that local, it would be much better. So yeah, what do you think about that?

Yeah, it's a very interesting demo. It's like an escape game scenario?

Yes, yes. There are clues on the posters on your wall - you have to read them and work out how to solve the puzzles. There are different steps, different things to do, and you don't know in which order, so sometimes it takes a while to figure out what to do. But basically it's being in an escape room, with interactions that are a bit more magical than what you could do in a real one. So yeah.
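For context on what "sharing the same space" involves technically: on Quest, colocation experiences like this are built on shared spatial anchors, which is also why the internet connection mentioned above is required - sharing goes through Meta's cloud. Here is a minimal Unity sketch, assuming the Meta XR Core SDK's OVRSpatialAnchor API; the exact method names have shifted between SDK releases, so treat the calls shown as assumptions to verify against your installed version:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using UnityEngine;

// Minimal sketch of colocated anchor sharing with the Meta XR Core SDK.
// Method names follow recent SDK versions but have changed across releases.
public class SharedAnchorSetup : MonoBehaviour
{
    public GameObject anchorPrefab; // prefab carrying an OVRSpatialAnchor component

    public async void CreateAndShareAnchor(Vector3 position, Quaternion rotation,
                                           List<OVRSpaceUser> otherPlayers)
    {
        // 1. Place an anchor at a known point in the host's room.
        var anchor = Instantiate(anchorPrefab, position, rotation)
                         .GetComponent<OVRSpatialAnchor>();

        // Wait until the runtime has created and localized the anchor.
        while (!anchor.Created) await Task.Yield();

        // 2. Persist it, then share it with the other headset's user(s).
        //    Sharing goes through Meta's cloud, hence the online requirement
        //    even with both headsets in the same living room.
        var saveResult = await anchor.SaveAnchorAsync();
        if (!saveResult.Success) { Debug.LogError("Anchor save failed"); return; }

        var shareResult = await anchor.ShareAsync(otherPlayers);
        if (shareResult != OVRSpatialAnchor.OperationResult.Success)
        { Debug.LogError("Anchor share failed"); return; }

        // 3. Send the UUID to the other headset over your own networking layer;
        //    it loads the anchor by UUID and parents its scene content to it,
        //    so both players see objects at the same physical spot.
        Debug.Log($"Share this anchor UUID with peers: {anchor.Uuid}");
    }
}
```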
And the graphics? Some objects seem to be rendered very simply - especially the shadow casting, which is clearly very basic. Is this a problem when you're doing the experience, or can you get past it once you're immersed?

Well, since even your environment - the passthrough video - is low quality, having the content at that kind of quality is kind of okay; it blends in correctly. And I know for a fact, because I've developed a couple of mixed reality experiences myself, that displaying the video in real time in the headset, plus all the processing they do to make it work - collision with the walls, tracking of your environment, and so on - eats a lot of the Qualcomm processor's performance, and you're not able to display high-end models this way. That's why I was saying that Wi-Fi 7 on this kind of headset could be really great: the headset could focus on the tracking, the mixed reality, and the passthrough rendering, and the rest of the rendering could be done on the PC to get really great 3D models.

Do you have occlusion as well?

Yes. So you need to be careful where you place your objects, because otherwise you won't see them.

And you tried it with the Quest Pro and the Quest 3?

Only the Quest 3 - I have two Quest 3. I don't think on the Quest Pro it will...

Oh, it should work, since you're on the Quest Pro right now.

Maybe, but I only tried it with the Quest 3. And the interactions with the hands are still a bit tricky. Even though it has improved a lot compared to the first iteration they delivered, there are still misdetections that are quite frustrating. There are a couple of switches you have to touch with your finger, and sometimes you have to try ten times to get it to work. But overall the experience is very nice, and being able to share it and say, "Look, something is happening over there - go grab this thing and put it here," that's a really nice experience. For business use cases, I'm sure there is a lot to do in that area.

I think, with what we discussed last week - the AI-assisted mapping of the space - hopefully the placement of objects will get better once that is implemented in the headsets.

Yeah, I agree. Anything else on that subject?

No, we can go to the next one. So the next one is the first test of Neuralink's brain chip, implanted in a patient who is paralyzed. His feedback is that he is amazed he can play a game of chess just by looking at the screen and thinking about what he wants to do, with the cursor responding to what he is thinking.

Yeah, and he can play Mario Kart as well - with his father. They showcased a video where you can see it. I saw the video; it's a real one. So, is this the next move for VR and mixed reality headsets, to have this kind of functionality and interact with them even faster? I guess that's what's interesting.

Yeah, definitely. I think we talked a couple of months ago about Apple filing a patent to put sensors in AirPods that would capture brainwaves from the outside. So yes, I think that's definitely a path they are exploring.
I'm really curious to know what kind of data they are getting from the chip, because it seems to be very small, and they aren't saying very much. From what they showcased, there are a dozen or so wires linked into the brain. So it's very interesting to wonder what kind of data they are gathering and how they are exploiting it. But if we think about it, it's simple information: X, Y, and an action at some point - like a controller. You don't need much to move pieces on a chessboard or play Mario Kart: forward, backward, left, right, and two or three actions. People are amazed by what it can do, but on paper it's not that complicated, because it's simple interaction for now. I don't know what you think about this.

Yeah, it's a great improvement, I guess, but I'm not sure this kind of work hadn't already been done - maybe not on humans, but on animals. We know chips were implanted before, and the subjects were able to interact with a screen: move a cursor, do some basic interactions. They even showcased it working on chimpanzees. So I'm not sure it's that disruptive; we knew it would be possible, but the proof of concept has now been done on a human, and I guess that's the biggest step forward. On the interaction side, though, I'm not sure that what we can do with brain chips yet is that big a deal.

I think it's a first step, and Elon Musk's goals are indeed much higher than that. He was talking about, of course, helping people with disabilities - giving them back the ability to communicate, letting them interact with a screen much more easily than with the eyes alone - and even, further out, treating illnesses that can be addressed through brain interaction. The end goal is much more ambitious than playing Mario Kart. One thing, though - it's my cautious and maybe pessimistic voice talking now - what happens if the company closes, or decides to stop producing the chip? I hope that, as we are seeing right now with AI, where regulation initiatives come from companies as well as governments, there will be regulation for this kind of device too, so people have some kind of guarantee and aren't just receiving an implant through a process with hardly any regulation behind it. Seb, any take on this? You're muted.

Yes, I was saying: my question would be how they train it, because every brain is different, I guess. Is there a huge calibration phase where the guy spends a day going, "I'm looking here, so put the mouse there," while they record his brainwaves as he thinks it, repeating that for different parts of the screen or different actions, like in Mario Kart? Or is it automated? Because it would be huge if they found a way to make it calibrate automatically and learn directly.

No, right now they are training the model on what the user is doing, because this is the first person to get the implant. As I said, they are probably still discovering what kind of data they can gather and how to exploit it. When they talk about their work, they describe a very long training process to get something working, with a lot of improvement still to be made. So no, it's very manual right now.
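To make that calibration loop concrete: the standard recipe in the BCI literature is to record neural features while the user attempts known movements, then fit a map - often linear - from those features to intended cursor velocity, which is exactly the "X, Y and an action" signal described above. Below is a toy, self-contained sketch on synthetic data; it is a generic linear decoder for illustration, not Neuralink's actual pipeline, which is not public:

```csharp
using System;

// Toy BCI calibration: simulate channels tuned to intended cursor velocity,
// record (rates, intent) pairs, and fit a linear decoder by LMS updates.
class ToyBciCalibration
{
    const int Channels = 8;   // simulated electrode channels
    const int Samples = 2000; // calibration samples

    static void Main()
    {
        var rng = new Random(42);

        // Hidden "true" tuning of each channel to (vx, vy) - unknown to us.
        double[,] tuning = new double[Channels, 2];
        for (int c = 0; c < Channels; c++)
            for (int d = 0; d < 2; d++)
                tuning[c, d] = rng.NextDouble() * 2 - 1;

        // Calibration data: known intended velocities, noisy firing rates.
        double[][] rates = new double[Samples][];
        double[][] intent = new double[Samples][];
        for (int i = 0; i < Samples; i++)
        {
            double vx = rng.NextDouble() * 2 - 1, vy = rng.NextDouble() * 2 - 1;
            intent[i] = new[] { vx, vy };
            rates[i] = new double[Channels];
            for (int c = 0; c < Channels; c++)
                rates[i][c] = tuning[c, 0] * vx + tuning[c, 1] * vy
                              + (rng.NextDouble() - 0.5) * 0.2; // sensor noise
        }

        // Fit decoder weights W (2 x Channels) by least-mean-squares:
        // predicted velocity = W * rates.
        double[,] w = new double[2, Channels];
        for (int epoch = 0; epoch < 500; epoch++)
            for (int i = 0; i < Samples; i++)
                for (int d = 0; d < 2; d++)
                {
                    double pred = 0;
                    for (int c = 0; c < Channels; c++) pred += w[d, c] * rates[i][c];
                    double err = pred - intent[i][d];
                    for (int c = 0; c < Channels; c++)
                        w[d, c] -= 0.01 * err * rates[i][c]; // LMS update
                }

        // After calibration, live rates become cursor moves - effectively a
        // two-axis joystick, which is why chess or Mario Kart is a natural demo.
        double px = 0, py = 0;
        for (int c = 0; c < Channels; c++)
        { px += w[0, c] * rates[0][c]; py += w[1, c] * rates[0][c]; }
        Console.WriteLine(
            $"intended=({intent[0][0]:F2},{intent[0][1]:F2})  decoded=({px:F2},{py:F2})");
    }
}
```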
But I guess their goal, as Fabien mentioned, is to have a generic model that could be plugged into anyone and work instantly. Of course, that would be the ideal case.

And I also wonder how reliable it stays over time, because the brain is constantly rewiring itself. Does the model need to be updated regularly? Depending on your emotions, or on what is going on around you, is it still reliable? Or do you need to be focused, alone, only on the game or the action you want to perform?

And the hardware itself - I guess at some point you have to replace it, because we know implants in the body don't last forever. I can't think of any implant you keep your whole life; even hip implants have to be changed at some point. So I guess you would have to redo the brain surgery, and how many times can you do that on a single person? That's a very interesting number to have as well.

What I really want to know is: can I learn kung fu? Without getting hurt?

Without getting hurt. Okay, anything more to add, Seb?

No.

So, last topic - maybe a lighter one. On my part, I would like to discuss the Unity XR Interaction Toolkit: the third version of it just shipped, and Unity is communicating a lot around it and the interaction features it adds. The debate I would like to have with you guys is, first of all, do you use this kind of toolkit? And second, do you think these new add-ons really make sense? Let me explain. I use the XR Interaction Toolkit quite a lot with my students, especially to build proofs of concept and small projects very easily and very fast. And when I use it, I'm probably using about 25% of it, because the main features are always the same. Sometimes all the interactions they add just create noise for users: you end up hunting for the right interaction, and the one you want is buried because the rest is too complicated. I like it when the simple things are the best ones. So I wonder whether it has become over-engineered - I'm not sure developers use that many of those high-end interaction features. For me, at the point we've reached, what was already there is sufficient to build very complex interactions and training scenarios. So what do you think? Is it now too complicated? Are they chasing styles of interaction that don't make that much sense, when for most of these use cases we could have built on the basic ones?

And just for my grumpy part: they changed a lot of namespaces. If you have a project on a previous version and you want to use the new one, you have to change your code once again - thank you, Unity, for that. And some of the namespaces don't make any sense. That was just my grumpy part. So, what do you think?
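To ground the discussion, this is roughly the level at which most prototyping with the toolkit happens - and the namespace gripe in one picture. A minimal grab-interactable sketch, assuming XR Interaction Toolkit 3.x; class locations moved between 2.x and 3.x, so the namespaces below are version-dependent:

```csharp
using UnityEngine;
// In XR Interaction Toolkit 3.x, interactable classes moved into a dedicated
// namespace; under 2.x, XRGrabInteractable lived directly in
// UnityEngine.XR.Interaction.Toolkit. This is exactly the kind of rename
// that forces a code pass on upgrade - verify against your installed version.
using UnityEngine.XR.Interaction.Toolkit;
using UnityEngine.XR.Interaction.Toolkit.Interactables;

// The basic "25% of the toolkit" workflow: a grabbable object with a
// callback, which covers most quick proofs of concept.
[RequireComponent(typeof(XRGrabInteractable))]
public class GrabLogger : MonoBehaviour
{
    XRGrabInteractable grab;

    void OnEnable()
    {
        grab = GetComponent<XRGrabInteractable>();
        grab.selectEntered.AddListener(OnGrabbed);
    }

    void OnDisable() => grab.selectEntered.RemoveListener(OnGrabbed);

    void OnGrabbed(SelectEnterEventArgs args)
    {
        // args.interactorObject is whichever ray, direct, or poke interactor
        // (controller- or hand-driven) selected this object.
        Debug.Log($"{name} grabbed by {args.interactorObject.transform.name}");
    }
}
```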
Okay. On my side, I use MRTK a lot for different projects. We use it for HoloLens, and the way they handle interaction with panels is quite nice: the way a panel follows you automatically, keeps a certain distance, or stays around your controller. There's a lot of functionality attached to your hand, like menus that follow the palm when you hold your hand in front of you. This kind of behavior is quite nice, and it also works on the Quest. But like you, a lot of the functionality I only use for specific cases, not on every project. Still, providing a catalog of behaviors like that, so you can quickly prototype something in your project and see if it works, helps the community a lot to try things out. And of course there is always that one feature you don't like, which you have to readjust or rebuild yourself, but at least you have a basis to work with: you can iterate fast using one of the samples, and if it doesn't work, switch to another one. But like you said, the big issue is that with every update, every new version, they tend to change the names. It was the same with Emertica and all these kinds of assets: they try to make things better, a new set of developers comes onto the project and tries different naming, different things. So either you spend a year building your own interaction assets, or you use this kind of asset knowing you will have to rework it on almost every project.

I think it's nice, as you were saying, for students, for prototyping, for setting something up quickly. One question, if I'm correct: recently on the Quest, dual input became available, so you can hold a controller in one hand and use the other hand freely. Does the toolkit support hands as well?

I hope so - from the showcase, it should. But actually, I think not: in the different videos they showed, it's all about the controllers. I may be wrong, though; I'd have to check. Okay, that was my part. I guess it's great for prototyping, but at some point you'll have to modify it anyway.

Well, that's it. Thank you. That's a trap we don't want to fall into. But yeah, I didn't check the version - I hope they made it simpler for you. I'll leave it to you. Thank you for watching. See you soon. Bye.

Thank you. Bye. Bye.
