Welcome to episode 54 of Lost In Immersion, your weekly 45-minute stream about innovation. As VR and AR veterans, we discuss the latest news of the immersive industry. Let's go. Fabien?

Hello. Yep. Thanks. Okay. So today I have two news items from Meta. The first is about AI and how AI understands the world. They released a new research paper, and I think we talked about this in an earlier episode of the podcast: through AR glasses, you can ask the AI questions, for example, "Where did I leave my keys?", and the AI, which keeps a kind of memory of what you looked at, will be able to answer. That's the first type, for devices you wear yourself: the AI system has a memory. The second one, which you can see here, is active searching: instead of relying on a memory, the system, in this case embodied in a robot, searches the space for the object. It's not really clear to me, and maybe my English isn't good enough to tell, whether they actually built a model or whether they are just evaluating the feasibility of doing this. But they evaluate the performance of a few vision models, like GPT-4 Vision and others, on these tasks. And the results are actually not that great: it seems the understanding and the memory are not yet at the point where they outperform a human. I find it quite interesting, because hopefully this is the kind of technology that will make it into headsets and AR glasses. So basically the takeaway is that it's not there yet, but they are making improvements. That's the first news item.

One thing I found funny, and that I've seen quite a lot in AI, is how they evaluate performance: they use another AI to judge the results. A good analogy is that it's easier for someone to say that a website works correctly than to actually build the website from scratch. It's the idea that a lower-performing intelligence, quote unquote, can evaluate something more complex than what it can produce itself. And that's what they used for this evaluation as well. In short, GPT-4 could evaluate the performance of GPT-5, and so on.
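As an aside, the judge pattern the hosts describe is simple to sketch. Here is a minimal version, assuming the OpenAI Python client; the model name, the 1-to-5 rubric, and the example answers are purely illustrative:

```python
# LLM-as-judge sketch: a model grades another system's answer against a
# reference answer, which is cheaper than having humans score every episode.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge_answer(question: str, reference: str, candidate: str) -> int:
    """Ask the judge model for a 1 (wrong) to 5 (perfect) match score."""
    prompt = (
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Candidate answer: {candidate}\n"
        "On a scale of 1 to 5, how well does the candidate answer match the "
        "reference? Reply with a single digit."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative: the judge can be a cheaper model
        messages=[{"role": "user", "content": prompt}],
    )
    return int(response.choices[0].message.content.strip())

print(judge_answer(
    "Where did I leave my keys?",
    "On the kitchen counter, next to the coffee machine.",
    "They are on the counter in the kitchen.",
))
```

The appeal is exactly the point made above: verifying an answer is easier than producing it, so a judge model can usefully score a system stronger than itself.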
So, yeah, that's it. I wonder what you think about this and how it could be applied to our main interest here, AR glasses and headsets. Seb, maybe?

Yes, thanks. It's quite nice to see the progress in that area, and it's nice that they are honest enough to say it's not ready yet. But it's the next step in using AI for this kind of task, so I can't wait to use it for a real use case. Right now, like you said, I don't think it's ready. One thing I just remembered: we spoke a few weeks ago about a kind of headset that could help blind people, and I think this kind of feature could actually be really, really helpful for blind people, or for people with disabilities in general. And I guess you could also play hide-and-seek: ask the robot to hide an object in your place and then try to find it, just for exercise. Could be nice.

Yeah. Guillaume?

Yeah. The first thing is that when we spoke about this feature a few weeks ago, I believe it was not Meta that wrote the paper. Maybe I'm mistaken, but I think it was another research lab; maybe they bought the lab, or it was a hidden collaboration at some point. Beyond that, I'm just reflecting on this kind of feature. As always with these announcements, you first think: oh yes, that's a great feature, we can do a lot of stuff. And then you ask yourself: what can I really do with this? Sure, when you've lost your keys it could be great, but that doesn't happen that often. And it would need long-term memory, because when you lose something, it's usually not in sight, and you have to remember quite far back, like several weeks ago, where you put it. Of course, the use case you mentioned for people with disabilities would be a huge improvement in their daily life. But for everyday use, I don't see how it applies. However, when we see this kind of feature, it's obviously what they want in their future smart glasses slash everyday immersive device. So be prepared to have an AI scanning everything you do with the headset, 24/7.

That makes me think about the sheer amount of data involved, some of it potentially very confidential. We already know that when you use a Meta Quest 2 or 3, some environment information is sent to Meta to improve their models, but we don't have much information about exactly what is sent. So I guess privacy won't be that private in the future.

Unless it runs locally and doesn't send anything.

Yeah, that's the official message, but apparently some data is being sent somewhere. They said it's not pictures of your environment, just locations in space. But given how detailed the scans will become, even if you only send the 3D coordinates of the mesh, in the end you'll have a full scan of your home, even without the colors.

With segmentation, too. Although one place I would like this kind of tool is when I'm building something: I'm always losing my hammer or my nails or my screws. So if it could tell me "you put it there"...

You'd have to wear a tool belt.

That's true. And the next piece of news, which is very short because we don't have much information: Meta is also working with schools to bring a new product, whose details we don't really know yet, for education and for teachers. The introduction video shows really fake apps and situations. Sorry, first: in the video they show the Quest 2 and the Quest 3. The Quest 2 being very cheap right now, I can understand how it's appealing for schools, which usually don't have big budgets. So I understand their use of the Quest 2 here. And the only interesting detail we have is this: the new product will make it possible for teachers to manage multiple Quest devices at once, without the need for each device in a classroom or training environment to be updated and prepared individually. So I think it's similar to a device management system, where the teacher can, say, load an app and force it to stay on the students' devices.
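Meta hasn't said how the product will work under the hood, but today this kind of fleet preparation is often scripted by hand over adb. A minimal sketch of that manual workflow, where the APK path and package name are hypothetical placeholders:

```python
# Install a lesson app on every connected Quest and launch it: the manual
# version of what a classroom device-management tool would automate.
import subprocess

APK = "lesson.apk"              # hypothetical lesson app build
PACKAGE = "com.example.lesson"  # hypothetical package name

def connected_serials() -> list[str]:
    """Parse `adb devices` output; device lines look like '<serial>\\tdevice'."""
    out = subprocess.run(["adb", "devices"], capture_output=True, text=True).stdout
    return [line.split("\t")[0] for line in out.splitlines()[1:]
            if line.strip().endswith("device")]

for serial in connected_serials():
    # -r reinstalls over an existing copy, so re-running the script is safe
    subprocess.run(["adb", "-s", serial, "install", "-r", APK], check=True)
    # launch via the monkey tool so we don't need to know the activity name
    subprocess.run(["adb", "-s", serial, "shell", "monkey", "-p", PACKAGE,
                    "-c", "android.intent.category.LAUNCHER", "1"], check=True)
```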
I don't know, it's a bit blurry, but it seems to be coming this fall. So yeah, Seb, Guillaume, I don't know if you saw the news and if you have any opinions or ideas about it.

Yeah, I saw it. I think it's nice that they thought about the management system, so it's already embedded, and you can control exactly what the students are looking at. And for students, I think it's a great way to be more involved in the course they're taking, to remember better whatever they look at, to be more engaged and more dynamic. I think it's the best way to learn something. So if they make it affordable, it could be worth us looking into it, to see if there is development to do for this kind of use case. But with all the experiments we do, we try to do multiplayer and sync all the headsets together nowadays. What makes sense to me is being able to share the same experience with other people, not being alone in the headset. So overall, it's a nice move from them to build that in directly.

One thing: in the video, we see a shared experience, with the teacher on an iPad and the students in headsets. And one thing they say is that they will not develop new content, it seems; it's only a platform to gather content, if I understood correctly. So, as usual, Guillaume, you tend to have a lot to say about the fact that they don't develop content.

They are not good at it. So yeah, obviously they know their weaknesses. Just a few points about this. Of course, education is one of the main fields VR should be used for. And as you mentioned, the budgets are very often not there to deploy these kinds of technologies. However, I'm not sure about the in-person use of VR as we imagined it a few years back: a bunch of headsets in a case, students grabbing them, putting them on in class, putting them back afterwards, other classes sharing them around. As we know, VR headsets are individual devices. So for me, the use case for education right now, as more and more students have headsets at home, is remote: they would use them at home for homework or dedicated exercises. With the pandemic, a lot of schools now provide remote programs, and the issue they're encountering is: great, we can do remote classes, we can have international students; however, all the practical work can't be done anymore the way it used to be in the real world, if I can say that. But the fact that people now have VR headsets, and that everyone could connect to a unified platform, would make VR make much more sense, because it would bring back the presence of each student, and of the professor as well. So I'm sure education is a great field; I'm just not sure the use case they're presenting here is the one that makes the most sense. It sits between the old-school way of seeing education and what could be done in the future. Of course, there's this global platform for the professor to see what the students are doing and to manage their devices, but that becomes more complicated if people are at home. I guess a platform like Microsoft Mesh would make more sense to me, because you have this Teams option where, if you have a VR headset connected to your computer, you just click one button to switch to the VR view, and all the participants have avatars and work together. So that kind of platform makes more sense to me than what Meta is presenting here. But let's see what they do with it. Great video, but everything is fake.
They did the same with the Quest Pro: a very nice video, but the product was not that great in the end. And instead of Horizon Workrooms, it should be a Horizon Classroom.

Yeah, exactly. Okay, well, Meta, if you hear us... I think that's it on this topic. Seb, anything else? No? So over to you.

All right. Last week I went to Laval Virtual and was able to test a lot of things. Everything was about VR, XR, mixed reality; a bit of AI, but a lot less than at Mobile World Congress this year. I will only highlight the things I enjoyed the most, like this simulator made by MotionXP. They built a motion platform with electric actuators that works perfectly. I was able to test it; they demo it with an F-18 simulation. I'm quite affected by motion sickness, so I was worried I would be completely sick for the whole day afterwards, because they push people to do loops and, I don't know how you say it in English, going all the way around with the plane. And they use a really nice effect: the seatbelt pulls you back into the seat, and it tightens or loosens depending on your actions in the simulation. It works perfectly to make your brain believe that you are really doing a loop, going over or under, accelerating or decelerating. It really sells the effect. And I have to say the balance and smoothness of the whole device are quite amazing. Compared to professional simulators, which are way more expensive, this one costs only 10,000 euros if you buy all the parts and mount it yourself, or 15k if they come to install and calibrate it for you. Compared to other devices, I think that's a fair price. And the sensation of being on the platform is awesome. They use a Varjo headset, an older one, the Aero, which was Varjo's consumer model, and the quality of that headset is also kind of amazing. It's a pity they don't sell it anymore, because for this kind of experience it's nice to have that resolution and wide field of view.

Okay, so if I understand correctly, in addition to the movements of the seat, the seatbelt tension changes depending on the movement. Is that correct?

I found another video where you can see how the belts are attached to something that rotates and tightens them. You can see it right here.

Okay, cool, that's clever. What about the noise of the system?

Since you wear a headset, it's not noisy at all. And they give you instructions through a microphone, talking to you through the headset, so you really feel like you're wearing a helmet inside the cockpit.

Okay, so you're enjoying it, but not your neighbors.

No, overall it's really not that noisy; we were standing right next to it. And the fan they put there is for another version where you fly an old plane with an open cockpit, so you can feel the air coming in, and depending on your actions the fans blow air differently. So yeah, the overall sensation, like I said, is quite amazing. I did not want to step out of the simulator.
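The belt-tensioning effect described above maps nicely to code: the sim's telemetry drives a tensioner motor. A rough sketch, assuming pyserial; the port, the scaling, and the one-byte command protocol are entirely hypothetical:

```python
# Map simulated longitudinal g-force to a seat-belt tensioner command.
# Braking or pulling out of a dive throws you forward, so the belt tightens.
import serial  # pyserial

PORT = "/dev/ttyUSB0"  # hypothetical tensioner controller
MAX_TENSION = 255      # hypothetical full-scale command value

def tension_from_accel(longitudinal_g: float) -> int:
    if longitudinal_g >= 0:
        return 0  # acceleration presses you into the seat; belt stays slack
    # scale -3 g (hard deceleration) to full tension, clamped to the range
    return min(MAX_TENSION, int(-longitudinal_g / 3.0 * MAX_TENSION))

with serial.Serial(PORT, 115200, timeout=0.1) as tensioner:
    for g in (-0.5, -1.5, -3.2):  # sample telemetry frames from the sim
        tensioner.write(bytes([tension_from_accel(g)]))
```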
I've done a couple of experiences like that before where maybe the calibration was not perfect or the frame rate in the headset was not good enough, and after about 10 minutes I was completely sick for the whole day. Here, no issue at all, even though I was doing loops, like I said. And it's very dynamic and very responsive: when you move the throttle and accelerate or decelerate, you feel the sensation right away.

So is it just for plane simulation, or can you do cars? There's a whole trend now with, well, not VR, but immersive driving, especially rally; there are competitions right now, and people are building their own rigs with actuators and very nice setups. It could be one of their business cases as well.

That's what they're selling it for: arcades and things like that. And they have a rally version of it too. It's the same platform, and it's compatible with a lot of games right away; I could find the list and give it to you. You just plug your PC into the system and that's it, all the effects are already in place. And here, right here, they have a Thrustmaster. They removed the wheel for the flight setup, but a driving wheel mounts onto it, and you can then play a racing game.

Thanks. So that was the first one.

Another nice experience I tried was this one developed by Clarté, where they set up a lot of webcams, Logitech C920s, 32 of them in front of you. You push a button with your foot, and it takes one picture at the same moment from all the different cameras. It then uses them to generate a Gaussian splat of you, which you can view in mixed reality on a Quest 3; it's also displayed on a TV. The result looked like this. For the kind of cameras they were using, I was quite impressed by the quality; I guess with better webcams or better cameras you would quickly get a much better result.

Another interesting thing is that they plugged it into an AI system that looks at you and kind of judges you: it looks at your clothes, and you can ask it questions directly. It's a conversational AI. You can say, "What do you think of my clothes today?" and it replies, "Oh, your outfit is quite nice, your hat gives you a nice look," that kind of thing. Really interesting to see the overall experience on display there. And the way they handled the conversational part was quite nice: they use an alien avatar, and while the AI is working out its answer, it first plays some gibberish alien voice sounds, making it look like it's translating the conversation. The wait feels shorter, because you already have feedback that something is happening in the background rather than just a loading screen in front of you. Very nice experience overall. And you can scan a QR code and get your Gaussian splat directly from the booth. I don't know if you have any thoughts about that.

Is the AI only saying nice things?

I did not hear any bad things. Maybe it does, but not about me.
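On the capture side, the interesting bit is the trigger: all 32 frames need to be latched as close together in time as possible. A minimal sketch with OpenCV; the camera indices and file naming are illustrative, and a real rig would likely use threads or hardware sync:

```python
# Grab one near-simultaneous frame from every webcam in the rig.
import cv2

NUM_CAMERAS = 32  # the booth used 32 Logitech C920s

cameras = [cv2.VideoCapture(i) for i in range(NUM_CAMERAS)]

def capture_all(shot_id: int) -> None:
    # grab() only latches the frame, so looping over it first keeps the
    # captures close in time; retrieve() then does the slower decoding.
    for cam in cameras:
        cam.grab()
    for idx, cam in enumerate(cameras):
        ok, frame = cam.retrieve()
        if ok:
            cv2.imwrite(f"shot{shot_id:03d}_cam{idx:02d}.png", frame)

capture_all(0)  # the images then feed a Gaussian-splatting trainer
for cam in cameras:
    cam.release()
```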
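And the gibberish-while-thinking idea is a general latency-masking trick. A toy sketch of the pattern, with the model call and the audio player stubbed out as hypothetical placeholders:

```python
# Play filler audio while the language model is still generating, so the
# user gets immediate feedback instead of a silent loading screen.
import asyncio

async def get_llm_reply(question: str) -> str:
    await asyncio.sleep(2.5)  # stand-in for a slow model call
    return "Your outfit is quite nice today."

async def play_clip(name: str) -> None:
    print(f"[audio] {name}")  # stand-in for an actual audio player
    await asyncio.sleep(1.0)

async def answer(question: str) -> None:
    reply = asyncio.create_task(get_llm_reply(question))
    while not reply.done():  # fill the wait with alien gibberish
        await play_clip("alien_gibberish.ogg")
    await play_clip("tts:" + reply.result())

asyncio.run(answer("What do you think of my clothes today?"))
```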
Yeah. And what do you think? A couple of months ago you tested a full 360° rig with much more expensive cameras. Do you think the gap is closing with this kind of tool, or is there still a huge difference?

The difference is that with the other experience I got a mesh at the end, so I could use it as an avatar, rig it quite quickly with Mixamo, for example, and use it in a 3D experience. Here you just get a statue that you can display in 3D; right now I don't think there is any easy way to rig a Gaussian splat, so that's a limitation. What I would like to see is an AI that takes a Gaussian splat as a source and lets you change, for example, the outfit of a person. So you have a statue, but you can change its look and feel.

What about you, Guillaume?

I won't comment much on this one. The integration is fun, but I don't think there's much innovation here, because when you break down every single block of this app, it's all off-the-shelf stuff. Putting it together is kind of fun, though. That's all I'll say about it.

All right. And the last one was this update from Meta, v64, which really increased the quality of the passthrough. I was able to test it, and I think Fabien was able to test it too. It solved a lot of the warping and weird distortion around your hands and close objects, and you start to be able to read a device in front of the headset: you can look at your phone and actually read the text, compared to the previous version. So it's quite a nice update, which we're pushing to our experiences right now. It really improves the experience for the user. It's not perfect yet; it's not at the point where we can say it's like the Vision Pro, but it's a nice improvement. Although there is a slight loss of performance: on one of my apps I could see a small drop of two or three FPS. But overall, I think it's better.

Yeah, I can confirm; I tested it a couple of minutes ago, I'm trying it right now. It feels much more comfortable to use. Of course, the edges are still distorted, but the center of the view is much more comfortable.

Yeah. I find it surprising that they waited so long to do this kind of correction, because this issue has been there since day one. Maybe they were gathering data to build an average warp correction for everyone, because I guess the distortion differs from one headset to another. But I really wonder why they waited; it's maybe six months now since the release of the Quest 3, and I don't know why it took them so long to ship this kind of warping correction. Maybe in a year it will be better. But when you look at the side-by-side comparisons, and I guess Fabien can testify to this, you don't see much of a difference. You can see it's a little better, but it's not a tremendous improvement. When I saw the news, I first thought it would improve much more than what they're showcasing.

The issue is that a recording doesn't capture exactly what you see inside the headset. The only way to really showcase it would be to put a camera inside the Quest and film the effect, and even then you wouldn't get the stereoscopic effect. It's really only by wearing the headset that you see the real difference, I would say. And yeah, that's it for me.

Just one question. I saw some feedback about Laval Virtual, and a lot of people were, not complaining, but remarking that there was not much AI.
They were quite surprised that there were not many projects about it. What's your feedback on that?

I think it was mostly about professional use cases of mixed reality and virtual reality. There was a robotics showcase with a person using a Vision Pro, I think it was, to control robot arms. But all the other use cases presented were mostly about training and business, and AI was not the subject at all over there.

Okay.

Except for that Clarté experience, where the AI was talking to you. It's surprising, because there is another trade show in about a month, I can't remember the name, and when you see what they're showcasing, it's very AI-oriented for immersive technologies. So that confirms the feedback.

Okay. I think Laval Virtual has really made itself a specific event for virtual experiences; the name sticks to that, so people don't look to this kind of event for AI showcases.

Okay, great. So it's my turn? Okay. Just two quick news items. We talked about it last week, when Fabien gave a demonstration of the Apple Vision Pro and the Meta Quest 3, which are compatible with WebXR. The Khronos Group, which is responsible for the whole OpenXR initiative, just announced a new version of the OpenXR platform. Basically, it brings more interactions and more spatial configuration options for all those devices, and it should be available for the Apple Vision Pro and the Quest 3. I checked yesterday, and there were no Unity updates for this yet, but I guess they'll come very soon.

And one more piece of news to finish, which is purely AI. I'm sorry it's not about immersive tech, but at some point you can see how it could help us build new immersive applications and experiences. You may have heard about Suno, an AI audio-generation application that is embedded in Copilot; if you have Windows 11, you can access it, you just have to download the app. Last week a beta was announced, and access is now free, I guess for a short period while it's in beta. It's very, very powerful: you can generate AI songs with a simple prompt, just by choosing the style, and you can also provide your own lyrics. So if you're like me and have trouble finding sounds or music when creating an app, I guess this is a great option. In any case, I highly suggest you test it, because it's really fun to use. I shared a couple of songs with you last week, like a metal song about virtual reality in French; very surprising that it supports French as well. And it's very interesting that you don't get the buzzing or small noises you often have with AI-generated audio samples. Clips can be 30 seconds long, but you can chain them together to create a longer song.

Yeah, if I'm correct, the big problem with audio generation is continuity over a long period of time. I think the first models only worked for one or two seconds, and beyond that they completely lost continuity. So it's amazing that the song you sent us holds together for 30 seconds.

Yeah, very interesting. You can also do what they call a remix: you put a clip in as input, and it generates several new clips from that style and voice.
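Chaining the 30-second clips into a full song is straightforward once you have the audio files. A minimal sketch with pydub, assuming the clips were exported as MP3s; the file names are illustrative:

```python
# Stitch short AI-generated clips into one track; a brief crossfade hides
# the seam between consecutive generations.
from pydub import AudioSegment

clips = [AudioSegment.from_mp3(f"suno_clip_{i}.mp3") for i in range(3)]

song = clips[0]
for clip in clips[1:]:
    song = song.append(clip, crossfade=500)  # 500 ms overlap at each join

song.export("full_song.mp3", format="mp3")
```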
So I see two issues for now. The servers are completely overloaded, so depending on the hour, it can be hard to get your song generated. And you are limited, of course, in the number of prompts you can generate. And what was the other thing? Yeah, you can make it hallucinate or go completely berserk very fast, especially when you write custom lyrics in French.

It was in French, yeah.

Yeah, at some point it generates the song, you have the lyrics and the voice, and after a few seconds you realize it makes no sense; it's just blah, blah, blah. But it's hard to catch at first. It's very interesting, very fun. So that's it. Do you have anything more to say?

Yeah, very quickly. I don't know if you're using the ChatGPT app on your phone, but since we're talking about audio generation: the way ChatGPT talks, as if it were a person, is actually very amazing. It has the "um", the "like", all those filler sounds, which is actually what I'm doing right now. ChatGPT does that very well, and it's very impressive. So, on the topic of audio generation, yeah.

Okay. So I guess this is it. See you guys next week for another episode. Bye!

See you, bye. Bye.

Lost In Immersion

Credits

Podcast hosted by Guillaume Brincin, Fabien Le Guillarm, and Sébastien Spas.