Welcome to episode 62 of Lost in Immersion, your weekly 45-minute stream about innovation. As VR and AR veterans, we will discuss the latest news of the immersive industry. Hello, guys. Hello. Once again, we are all together to share our immersive views. Hello, Fabien. Hello.

Yeah, so today I want to talk about the software updates that came to the Meta Quest 3 over the past few weeks. We already talked about one of them, but it was not yet on my device, so I didn't have the opportunity to test it: the automatic scan of a space, with automatic detection of windows, doors, tables and, well, objects. I had the opportunity to test that today. The other one is quite a big software upgrade on the mixed reality side: the quality of the passthrough video has greatly improved, in my opinion. So I'm curious to know what your experience was. Seb, I think you tested it as well.

As you can see on the video here, first, a disclaimer: it looks better in the recording than it does in real life, but still, the upgrade is very visible, especially in very good lighting conditions. The distortions are less noticeable. There are still distortions, of course, much more than on the Vision Pro, but it's much more comfortable to use, especially around the hands. As you can see here, there is some distortion around the edges, but overall, yeah, I'm pretty impressed by this update. I've also seen the same light-adjustment behavior as on the Vision Pro. Actually, I don't remember if it was present before, but you can see here, if I look at the very bright window, then switch to looking at a darker space and move back to a bright space, you see the luminosity adjustment applied to the video. It's not really visible in the recording, and it's very fast, but it's quite noticeable in the headset. So that's the first one.

The next one is the automatic calibration. We've seen that already many times: you just start the process, you look around, and it scans the space. This part is pretty standard; it looks the same as before. And then when you are done, I'm speeding up the video here. OK, so when you are done, as you can see, it automatically detects the walls, the tables, the screens. It's not perfect: the table and my screen were pretty well detected, the walls were quite good, but it made a few mistakes here on the storage unit, making it bigger than it actually is. I didn't find a way to adjust that. I found a way to adjust the walls, you see, you can just grab the corners and move them, but I didn't find a way to adjust the size of the existing objects. And I also had some trouble, as you can see me struggling here, trying to add a door. Sorry. Yeah, here. So you see, I wasn't able to add a door, a quote-unquote 2D object, a flat object; you can only add volumes. For example, if you want to add another table or a bed, that works very well, but adding a door, I wasn't able to do it. Maybe the feature just isn't available, I'm not sure.

OK, so that's it. It's quite nice to see that on the same hardware, just through software updates, we can get very good quality improvements. So I was pretty happy about that. And, yeah, Seb, what do you think? Maybe you tried it as well.

No, but I saw some video comparisons between the v66 and v65 updates of the Quest OS, and the comparison between the two videos was already impressive.
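As a rough illustration of the room-scan demo above, here is a hedged sketch of the kind of result such a scan hands to a developer. Nothing in it is a real SDK type: the dataclass is purely illustrative, and the label strings only loosely echo Meta's Scene API classifications from memory. It just shows the distinction discussed in the demo between flat elements (walls, doors, screens) and volumes (tables, storage).

```python
# Illustrative-only model of a finished scene scan: labeled anchors that are
# either flat 2D elements (walls, doors, windows, screens) or 3D volumes
# (tables, storage, beds). Labels and fields are assumptions, not an SDK API.
from dataclasses import dataclass
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class SceneAnchor:
    label: str                                          # e.g. "WALL_FACE", "TABLE", "DOOR_FRAME"
    position: Vec3                                      # anchor origin in room space
    plane_extent: Optional[Tuple[float, float]] = None  # (width, height) for flat elements
    volume_size: Optional[Vec3] = None                  # (w, h, d) for volumetric elements

def flat_elements(anchors):
    """Walls, doors, windows, screens: things represented as bounded planes."""
    return [a for a in anchors if a.plane_extent is not None]

def volumes(anchors):
    """Tables, storage units, beds: things represented as 3D boxes."""
    return [a for a in anchors if a.volume_size is not None]

room = [
    SceneAnchor("WALL_FACE", (0.0, 1.5, -2.0), plane_extent=(4.0, 3.0)),
    SceneAnchor("SCREEN", (0.2, 1.2, -1.9), plane_extent=(0.7, 0.4)),
    SceneAnchor("TABLE", (0.0, 0.75, -1.5), volume_size=(1.6, 0.75, 0.8)),
    SceneAnchor("STORAGE", (1.8, 0.9, -1.0), volume_size=(0.8, 1.8, 0.4)),
]
print([a.label for a in flat_elements(room)], [a.label for a in volumes(room)])
```

In practice, this is roughly the level at which an app would decide which surfaces can receive virtual content or which volumes should occlude it.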
And yeah, I don't remember if, like you said, the auto adjustment of the lighting was already present before, but I don't think so. In dark environments it was quite awful, and in the video I saw, the comparison between the previous version and this update seemed to show a big improvement in dark conditions too, on the lighting side. So that's great. Now that you have the two, are you able to compare the quality of the two passthroughs, the Quest 3 versus the Vision Pro?

Yeah, sure, I can do that for next week or sometime, yeah.

But as a first feeling, what do you think?

The distortions are still present on the Quest, much more than on the Vision Pro, so that alone already makes a real quality difference. Now, on the quality of the video itself, I think the Vision Pro is better, but the gap is a bit smaller now, yeah.

And I know some people were complaining that when you were moving with the Vision Pro, everything became blurry. Do you encounter that with the Quest 3 as well when you move around?

That's a good question. I didn't notice it; I will try again. But I can confirm it's very visible on the Vision Pro, the blurriness of the video when you make very fast head movements, yeah.

And if you walk at walking speed, is it fine, and it only bothers you when you move your head around too fast?

Yeah, I didn't do a lot of walking with the Vision Pro, but moving from room to room feels pretty natural. And it's also better after that upgrade on the Quest: even if, as I mentioned, there are still distortions, the improvement is really comfortable.

And yeah, regarding the segmentation, that was really missing, because before it was only detecting things like tables and you still had to adjust them. But with the sample you are showing, it seems to be a big improvement, and I guess they can still improve it with other models to track everything. One thing I'm wondering is how big the scanning area can be. Is it still limited to three by three meters, or do you get blocked so you can't go into your hallway?

So actually, that's quite interesting. I can go through my whole space, but the segmentation and the detection of the walls, as you see, only happen in one room, the room where I started. So that's something I need to dig deeper into. It seems like maybe I will have to do one scan per room, I'm not sure. That's a good point. I will try again with all the rooms at once to see how it holds up. Yeah.

OK. But other than that, like you said, it's awesome that they are improving the software and getting closer to other devices with these upgrades. OK, Guillaume.

Yeah, I guess I will kind of repeat myself about these updates. The big question is: why didn't they do that at the launch of the Meta Quest 3? It seems quite obvious that we want no distortion and the best video quality possible. So what's the difference between then and now? You know, I already made some hypotheses, meaning that they admit they are gathering information from the users. So maybe they are improving their model or their software correction over time thanks to the number of users sending their data somewhere, because I don't know why they would have better results now than they had then, other than that effort. And once again, when you are launching a product, you want it to be the best possible at launch time. So it's very strange that these updates are coming very, very fast, like every month now.
And each time it's much, much better. So maybe they are just trying to move fast and are improving little by little with what they can. But yeah, it's very surprising to see this much improvement in just a few months, going from something that was really not good, with very large distortions, to something that is much more enjoyable. It's very interesting to see.

And about the recognition features, I know that lots of users are trying to find an application that uses them, and I'm not aware of any application taking advantage of this feature yet. You could say it's quite recent, but we know that some studios can get the updates in advance. So despite the fact that it's very, very cool that you can recognize your space, I'm very curious to know what developers will do with this. I mean useful applications, because of course you can have games that recognize the space and add some physics to your objects, but we know that is mostly already done through mesh recognition and mesh generation. So what is the added value of this recognition? I don't know if you have an example of an application for this.

Yeah, I think one of the usages they advertise is automatically switching the features of the headset, the settings of the headset, depending on where you are. If you are sitting down in front of your desk, for example, then it will switch to the stationary boundary automatically. This kind of intelligent settings adjustment, so you don't have to do it yourself. I can see this kind of feature, yeah. And about the speed of updates, and why they didn't do that at launch, that's a very good question. The only answer that I have is that one of the Facebook, sorry, Meta core values is "move fast". So I don't know, maybe they just stick to their core values. I'm not sure.

And I think part of the answer to your question about why they release that over time is the way they use AI and train models over time and keep upgrading them. It's an ongoing process, and we see a lot of things going on in AI, so I think they are benefiting from all the advancements from other companies too, as well as from their own Llama models and detection systems. They are still improving that and applying it to the headset.

We agree that if you want to improve your AI model, you need datasets of corrections. So at some point you can ask yourself: are they gathering information through the Meta Quest 3 to get this improvement? They already kind of admitted that, but we don't have much information. I guess this kind of work and improvement just validates the fact that they are gathering information through our headsets to improve the AI models behind them. I can't see how they could gather that much data just by themselves; to get this kind of improvement in just a few weeks or months, you need thousands of new samples. So yeah, if someone from Meta can answer our question, that would be great. OK, cool. That's it, I think. Right.

So today, on my side, I want to talk about the two new 3D cameras that just came out, one from Canon and one from Blackmagic. Both use a single sensor with two lenses, so you shoot, on one device, a 180-degree stereoscopic video that you can then watch in 3D on the Vision Pro or the Quest 3. They are less expensive than what was available before, but still quite an investment.
But yeah, they are advertising, mostly for the Blackmagic one, that it seems to be the perfect fit for the Vision Pro; that's how they advertise it. And they both look quite futuristic. This one can be mounted on drones. And yeah, they say it's perfect for the Vision Pro, although they have not released any sample footage yet, so it still needs to be seen on the Vision Pro to know what kind of results it can produce. But it's a nice improvement, and it's 8K, so thinking about that, I'm starting to think about taking the feed and enhancing the content with AI, because with this kind of video feed you can get a really good depth map from the two camera viewpoints. And it's nice to see that this part of the market is moving, and not staying with the old setup where the editing afterwards was complex, with adjusting the colors of the two cameras to get a nice result. Here it's really packaged, and you get a nice result directly out of the camera. So I don't know if you saw that, or if you have any comments on these.

Yeah, it's very interesting to see that big manufacturers are finally getting into VR, 360 or 180 video, well, 190 degrees per eye here. Obviously the Blackmagic is for professionals, while the Canon one could be more for a general audience, I guess. We could buy it, I guess, if we were really into this kind of video and stuff like that. But the Blackmagic is like the RED cameras and so on, it's really for big productions. So I'm very curious to see if we could have a rebirth of 180 cinema or movies at some point. But we all know, deep inside, that the biggest market for these is the adult industry, I guess, because I don't know how they are filming right now. I guess they are just putting GoPros together and getting whatever results they can; it's still some DIY at this point, I guess. Especially since all the previous technology dates from ten years ago, when the 360 market was booming and you had the 3D glasses and so on. So now, I guess there is no market for... well, there is no product for professional 190-degree capture, and I guess there is a market in this field. But on the personal side, you know, for people like you and me, what would be the advantage of buying this kind of product? I don't really know what we could do with it, unless they are just trying to push stereoscopic memories onto the Apple Vision Pro. So I'm very curious to see what will be done with these products, especially the professional one. And, yeah, time will tell what they do with this great high-end product. Yeah. Fabien?

Yeah, it's very interesting indeed. For example, two of the biggest apps on the Vision Pro: What If...?, which we discussed a couple of weeks ago, is 3D, and AmazeVR, which is video clips, is also 3D, rendered in, I don't know, Unreal or Unity or whatever they are using. And there are a lot of people saying that stereoscopic movies made for the Vision Pro are coming to Apple TV. I would be really curious to watch one to see whether it's really worth it, and maybe this type of camera, especially the Blackmagic one, will help speed up content production for this kind of movie. Yeah.

Yeah, I guess it opens up a new way of doing storytelling while still getting a nice output, because until now the results were a bit rough. But you need to completely rethink the way you will shoot your movie and the way it will be experienced.
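A hedged aside on the depth-map idea mentioned above: given a rectified left/right frame pair exported from this kind of stereoscopic footage, a classic stereo-matching pass already gives a dense depth estimate that an AI enhancement step could build on. This is a minimal OpenCV sketch under stated assumptions; the file names, focal length, and baseline are placeholders, not specifications of either camera, and the fisheye footage is assumed to have been undistorted and rectified beforehand.

```python
# Sketch: estimate a depth map from one left/right frame pair pulled out of
# 180-degree stereoscopic footage. Assumes the two views are already rectified;
# file names, focal length and baseline below are made-up placeholders.
import cv2
import numpy as np

left = cv2.imread("left_eye_frame.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_eye_frame.png", cv2.IMREAD_GRAYSCALE)
if left is None or right is None:
    raise SystemExit("Provide a rectified left/right frame pair to run this sketch.")

# Semi-global block matching: a standard way to get a dense disparity map.
stereo = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,      # must be a multiple of 16
    blockSize=5,
    P1=8 * 1 * 5 * 5,        # smoothness penalties (grayscale input, block size 5)
    P2=32 * 1 * 5 * 5,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Depth from disparity: depth = focal_length_px * baseline_m / disparity.
focal_length_px = 1400.0   # placeholder intrinsics
baseline_m = 0.064         # ~64 mm inter-lens distance, a guess
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_length_px * baseline_m / disparity[valid]

# Save a normalized preview of the depth map for quick inspection.
preview = cv2.normalize(depth_m, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("depth_preview.png", preview)
```

A learned enhancement model would presumably start from (or refine) this kind of geometric estimate rather than replace it entirely.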
So it makes sense for the user to look around and focus on the part of the movie that you want them to focus on. So, yeah, I guess there is still a lot of progress to make there, and I would be happy to see what kind of movies they make, or what other ways of shooting a movie and presenting it to the user they come up with. For animal documentaries, for example, that would be amazing: being in the environment and focusing on the animals you are looking at. That may be their target. And on having more content, I think the Vision Pro is pushing the industry to put more content on the device, so maybe they are also investing in content creation, and without this kind of device it maybe wasn't feasible before to get the same kind of quality. So that's it, that was the first subject.

Then two new papers just came out. This one, which was trained on Quest 3 data, allows tracking real objects in your environment and getting their 3D location in real time. So it opens up many more mixed reality experiences, where you can have feedback and really interact with the objects that are in your environment. I don't know if you want to comment on this one first.

Yeah, I'm just looking at the video and trying to think about the use cases. I was thinking VR first, but you can also add effects when you are using the real object in mixed reality or augmented reality. So, yeah, I'm very curious to know how they trained their data. Do you have to train the model every time you want to add a new object, or is it based on shapes, learning what kind of object you are grabbing? I guess I'll have to study this more, unless you have the answer to this question.

No, I don't have the answer. I don't know if they trained those models specifically, but in the paper, if I scroll down, I only see those objects.

Okay, so they need to train specifically for each object, which limits the use cases, because if you don't have the exact objects they used, it won't work. But we'll see if they open up the ability to train new objects, you know, if they open the training pipeline so we can see how it would work. Still, very interesting to see what you could do with this. Once again, it's a technical demonstration of what they achieved. But is it open source? Do you have a GitHub link?

There is a GitHub, yeah, I will share it with you. To give a bit of detail, this is a dataset, so they provide the data.

Okay, I will look into it.

And then another paper came out showing some really impressive segmentation, being able to extract an object directly from a NeRF. So it allows them to really extract objects from a NeRF; we talked about that before, but now it seems to be really feasible. They showcase how they segment different kinds of scenes here, how it applies to the objects in them, and how it can be used directly inside the Nerfstudio tool, where they select an object in the NeRF, segment it, and are then able to move it around or extract it from the scan they've made of the environment. So, very impressive tools are starting to become available for NeRFs, and it's not Gaussian splatting, it's really NeRF. So that seems, yeah, it seems impressive. I don't know your thoughts about that, Guillaume.

Yeah, it's great. And since it's just NeRF, it opens the door for Gaussian splatting as well.
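To make the "extract an object directly from a NeRF" idea above more concrete, here is a deliberately toy sketch. It is not the paper's method (whose details aren't given here): it only illustrates that once some segmentation step can tell you which 3D points belong to the selected object, zeroing the density of everything else before the usual volume-rendering accumulation renders that object in isolation. Both functions standing in for the trained field and the mask are hypothetical.

```python
# Conceptual sketch only: "extract" an object from a radiance field by masking
# density outside the object before compositing samples along each ray.
import numpy as np

def field_density_and_color(points: np.ndarray):
    """Stand-in for querying a trained NeRF: returns (density, rgb) per point."""
    density = np.exp(-np.linalg.norm(points - np.array([0.0, 0.0, 2.0]), axis=-1))
    rgb = np.clip(points * 0.25 + 0.5, 0.0, 1.0)
    return density, rgb

def object_mask(points: np.ndarray) -> np.ndarray:
    """Stand-in segmentation: True where a point belongs to the selected object."""
    return np.linalg.norm(points - np.array([0.0, 0.0, 2.0]), axis=-1) < 0.5

def render_ray(origin, direction, keep_object_only=True, n_samples=64, near=0.1, far=4.0):
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    density, rgb = field_density_and_color(points)
    if keep_object_only:
        density = np.where(object_mask(points), density, 0.0)  # carve away everything else
    delta = (far - near) / n_samples
    alpha = 1.0 - np.exp(-density * delta)
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = transmittance * alpha
    return (weights[:, None] * rgb).sum(axis=0)  # composited color for this ray

print(render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0])))
```

The real research problem is of course building a reliable 3D mask from 2D segmentations; the rendering-side trick itself is the easy part.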
Maybe in a few weeks we'll get the Gaussian splatting version of this. But yeah, all the techniques and algorithms that bring knowledge into the 3D world, into 3D representations, are great. And the demonstration in Nerfstudio shows that this platform in particular is getting more and more traction. They are supporting NeRF, Gaussian splatting, and all these editing tools, and it's very impressive to see that the team behind it is working very, very hard and shipping lots of updates for this open source project. So, a shout-out to the Nerfstudio team: great work. Okay. Yeah.

Well, I don't have anything special to add. It's just that, again, it seems like every week or two we have some updates to share on NeRF or Gaussian splatting. It's going very fast. I was looking at our podcast from one year ago, and the difference in quality, in variety, in everything that's possible in just one year is amazing.

Yeah, I agree, it's moving very fast, on all sides: AI tools, NeRF tools, and Gaussian splatting. There is also a lot to mix together now that it's becoming available and giving nice results. So yeah, impressive times. I think that's it for me. Guillaume, if you want to move on to your subject.

Yeah, sure. So we'll talk about the Logitech MX Ink, or MR Ink as some journalists are calling this mixed reality pen, but yeah, Logitech calls it MX Ink. So, what is it about? Logitech already has a whole line of productivity devices behind it, and this one is just $129. It really needs to have the best possible precision to be truly usable in VR, because when you're doing mixed reality in the Quest 3, your precision is not as good as in the real world, since everything goes through the video passthrough. So I'm very curious to see what artists or designers can do with this, and whether it's really useful or not. Is it just a gadget, or is it a tool that can be used on a daily basis? The 2D part is obviously the same as what you would do in the real world with a graphics tablet or whatever, so for my part, the real question is the 3D drawing. Especially since, I don't know if you can see it, they are overlaying a real object here, and then I realized: why do this when you have very efficient 3D scanning tools? So I'm not as ecstatic as I was when I saw this pen for the first time. I'm more curious to see what people can do with it, and I'm not so sure there is a use case behind it. So what are your thoughts about this?

Well, I think... all the artists that I know... to my knowledge, drawing on iPads with the Pencil has become a standard part of artists' workflows. So I wonder if this kind of usage can translate to mixed reality or VR. In the video, we see quite a lot of artists using it as well. I don't know; I'll be quite curious to test it and see how accurate it is, as you mentioned.

Once again, you are talking about the artists you know, and I guess there are some tattoo artists among them. Just project yourself: someone would be able to draw directly on people's bodies with this pen, capture the data, and then bring it back to their 2D tablet to do the final drawing. That would be a great use case.

That's a good idea. They would have the perfect fit and dimensions as well. Just a side note, because I forgot to mention it when I talked about the Quest 3, and I don't know if it was there before.
On some mobile keyboards, you can slide between letters to form words. You can actually do this with the controllers on the Quest now; I don't know if it was there before or if it's a new feature, but it's very cool for writing words, because writing on the Vision Pro is a nightmare, it's very slow. Making swiping movements with the controller on the Quest 3 was very cool, and it would be even better with this pencil, being able to write very fast in VR or MR.

I think it's very interesting to see this kind of product arriving. I tried one a long time ago with the HoloLens 2. I think it was HoloVision or something like that, I can't remember the name exactly. It was the same idea of having something more precise to draw with, but also to position objects in space and to measure things. I see a lot of people saying that the Vision Pro is not precise when you try to do measurements with it, but the Quest 3 seems very precise. The only lack of precision is that you need to use the bottom of the controller to tap your corners, and sometimes you are not precise with that. I guess this kind of device can help there too, even for drawing the walls and objects of your environment more precisely, to get better occlusion in your mixed reality experiences. And also for 3D artists: having the ability to change the mesh, change the normal map, draw the normal map directly on the mesh, this kind of tool seems like a nice addition for graphic artists. If Substance develops something with that, I know they have a great tool on PC, but if they create an app for the Meta headset using this kind of pen, that could open up really cool creation tools.

There is also the Minority Report effect that needs to be considered, meaning that keeping your arm and your hand in the air for an extended period of time is very tiring. We will see how 3D artists cope with this. Very interesting to see.

It's great to have this kind of little device, not that expensive, and to see what people can do with it. I guess we all did some experimentation in the past with tracked pens in CAVEs or VR headsets for certain use cases, so it's very nice to see big-name manufacturers like Logitech doing it now. I would love to see how the haptic feedback is done and how precise it is, because that can also bring a lot to the experience, maybe giving more feedback on whether you are touching something or not on your 3D model. And I guess a nice thing would be to quickly draw something in the air, in 3D, and use an AI tool to generate a model based on that; that could be very powerful. Even if there is an object in your environment that you like and you want to reproduce it, or change its design a bit to get it into your 3D environment, that could be very nice.

Okay. Anything more to add on this topic, or another one? We are good. See you guys next week for another episode of Lost in Immersion. See you. See you. Thanks.

Credits

Podcast hosted by Guillaume Brincin, Fabien Le Guillarm, and Sébastien Spas.