Welcome to episode 41 of Lost in Immersion, your weekly 45-minute stream about innovation. As VR and AR veterans, we will discuss the latest news from the immersive industry. Hello guys. Fabien, if you want to start, please.

Hello, thanks. Today I reviewed for you Asgard's Wrath 2, the RPG that is bundled with the Quest 3, so everyone who purchased a Quest 3 recently has it. It was released just a couple of days ago and it's the second game in the series. You have all the characteristics of an RPG: you can choose a character, gather objects, improve your skills, and manage an inventory with different weapons. You can even choose where each weapon sits on your body, and because it's in VR, the way it works is very intuitive: if a weapon is behind your back, you grab it just as you would in real life, and the same goes for the shield. So overall, my first impression playing the game was that it's very easy to pick up. You can see here there is a map you can display on your wrist, just by doing this gesture, and it works really well. It's a very small, very simple interaction, but it's really cool to do. Yeah, like this.

If we move ahead a bit... you can see here that I can throw an axe, and that gesture also feels really natural when you do it. As you can see, the quality of the game is pretty good, but it's supposed to get even better: the game also runs on the Quest 2, and in the next days or weeks there will be an upgrade for the Quest 3. So it's already pretty nice, and hopefully it will be even better on the Quest 3. Let's see. You can see different types of interactions here. And as you can see, I was pretty close to my guardian, a bit too close actually, when I played. Let's look at another fight here.

What about motion sickness?

Yeah, thanks. I played for around 8 to 10 hours over the weekend, and I did start to feel a bit of motion sickness after playing a lot, but I kind of got used to it after a couple of hours. The first day, though, at the end of the day, especially when you move a lot, because you are standing still in the room but moving in the game, I was starting to feel a bit of motion sickness, especially during this type of fight. I think I've mentioned it before, but I'm not really good at video games, so my strategy was simply to move a lot to avoid being hit, and during these long fights the motion sickness would start to build, although it was never that strong. Overall, the performance of the game is really good: the refresh rate and the frame rate hold up well, so it was a very nice and pleasant experience.

And how many times did you run out of battery?

Oh, basically, a lot. I played in sessions of 30 to 45 minutes, and every time I just put the Quest back on charge, to be sure. To be honest, I didn't watch the battery level to see how fast it was decreasing, but if I hadn't put it back on charge, it would have run out pretty quickly.

Okay, so it was about an hour of play, then recharging, then another hour, something like that?

Yeah, pretty much. And it's a complete RPG, so you have fight sequences like this, but you also have puzzles to solve in order to move forward.
So it's pretty cool. What do you think, Guillaume?

Well, to be honest, this is the first time I've seen video and images of this game. You mentioned that it's supposed to be a AAA game; however, in every single environment you showed, I found the game quite empty. The environments seem nice, but there isn't a lot of stuff around. I'm comparing it to something like Half-Life: Alyx, for example, which in my opinion is the best game that has been done in VR to date. So what about the environments: do they get richer as you move forward in the game, or are they kept simple to reach the performance required for a good experience?

Yeah, it's possible that's what they are doing. Indeed, most of the places I went to are quite simple. There is a forest or jungle type of environment which was more complex, with water, plants, trees, things like that. But I don't know Half-Life: Alyx, I haven't looked at it yet. I will have a look to compare.

I think you need to compare it with Red Matter 2, which runs on the headset, because Half-Life: Alyx runs on a computer over a link. I don't think it has been released as a standalone application that you can download from the store, and I don't think the performance would be there for that kind of game. So maybe we need to separate AAA games for standalone headsets from AAA VR games that run on a PC over a cable or local Wi-Fi.

Yeah. And speaking of which, I have to say that just having this on the headset and being able to play such a game is really cool. I was able to hand it to my wife and she was able to play very quickly. So not being tied to a computer is very valuable.

And is there any multiplayer in this game?

I don't think so, I don't know. In summary, I really liked playing this game. It's pretty fun and it gets pretty addictive as well; I keep trying to move forward as much as possible, discovering new capabilities and so on.

So the storytelling is well done?

Yeah. Sorry, just one last thing, and maybe it's because I'm not really skilled at video games, but it gets pretty difficult. I chose the middle difficulty level, and yeah, it's quite hard, as you can see.

Okay. So if we don't have anything more to add, maybe Seb, you can move on to your topic.

Sure. The first one is a subject I think you wanted to talk about, Fabien. Maybe you want to take this one?

Yeah, sure. We have explored, over the past episodes of this podcast, a lot of different ways to move around in VR. Seb, a few months back you tested a treadmill, but it was a bit cumbersome and quite expensive. I don't know if this is new or an old concept, but here is a kind of rollerblade for VR, and I thought it was a pretty interesting idea: maybe a more natural movement, and something a bit less bulky and less expensive than a treadmill. So I'm curious to know, Seb, what you think about this kind of device and whether you see real value in it.

Yeah, like I said when I tested the treadmill, I feel it's for specific use cases: it's very expensive and you don't get the real feeling of walking freely. You really have to force the way you walk, and when you stop, you don't stop immediately, you keep moving a bit. It's also a complex setup that is really hard to calibrate. Here, it seems to do exactly the same thing with a simpler setup that you can wear directly in your living room. So for me, it does the same and has the same issues.
When you stop, you can see that the feet are still moving, and you still have to force the way you walk a bit, or at least that's how it looks when you watch the user. But if it's way less expensive and way less difficult to set up than a treadmill, then it's a win-win compared to the current treadmill solutions. So I would love to try it and compare it to what I felt with the treadmill. Maybe at Laval next year, we'll see if they are there. What about you, Guillaume?

Just to answer Fabien's question, this is not the first time we've seen this kind of device that attaches to the feet. However, it's maybe the best integration and the most compact version I've seen so far, so it's very interesting on that side. And yeah, it probably has the same defects as the treadmills. However, as you mentioned, it requires less space and it should be less expensive as well. On the other hand, as you can see, these are wireless, so they must have batteries, and we all know that the issue with this kind of device is autonomy. If your Meta Quest can only run for about one hour and your shoes only last 30 minutes, your experience in VR is not that long. So obviously there will have to be improvements on that side; we are all waiting for better batteries for all these devices. But it's encouraging to see that we may find new ways of locomotion in VR that are more natural.

We discussed it earlier, before we started the podcast: these kinds of devices seem to offer a more natural way of moving. However, if you are not walking the way we walk naturally, because this requires slight differences, it could be harmful for our knees or hips, and we should be careful about how we use these technologies. If you have tried the KAT VR or CyberWheel, those kinds of treadmills where you have to slide with your socks on a special surface, you know that it doesn't feel natural, and you can suffer some muscular harm if you use it for too long, because it's not the way you walk naturally. And here, clearly, we can see that he's not walking as steadily and confidently as when you walk normally.

Yeah, he's moonwalking, in fact.

Exactly. I was going to say, it's perfect for learning how to do the moonwalk. Right. That's it for that subject.

So, on my side, I wanted to talk about this Meshy plugin for Unity that allows you to generate models directly in the editor and texture them there as well. This is a video showing someone using it in his environment to add content and iterate quickly: changing the color of a texture, changing the color of the table, but also generating furniture inside the place. And what is really amazing is that it's directly usable inside the game editor, in Unity here. So it really speeds up the way you iterate, create and generate 3D models, without even needing a 3D artist to help on that part. You still need an artist to have a vision of what you want to do, but you don't need specific skills to make the 3D model and spend time on the UV texturing, which normally takes a lot of time. Here it's all automatic and very fast to generate, and the end result looks really nice and already optimized for VR, which is also a task that is usually quite hard. Sometimes you work with graphic artists and at the end, when all the assets are set up in your scene, you realize that you need to optimize everywhere and degrade the quality of your game a bit to keep the FPS up on your device. So yeah, it feels like it's really usable right now; I need to download the plugin and do some tests on my side.
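To make the workflow concrete, here is a minimal sketch of how a text-to-3D generation service could be driven from a script that drops the generated asset into a Unity project's Assets folder, where the editor imports it automatically. The endpoint, parameters, and response fields below are hypothetical placeholders and do not reflect Meshy's actual API.

```python
import time
import requests

API_BASE = "https://example.com/v1/text-to-3d"  # hypothetical endpoint, not Meshy's real API
API_KEY = "YOUR_API_KEY"                         # placeholder credential


def generate_asset(prompt: str, out_path: str) -> None:
    """Submit a text prompt, poll until the mesh is ready, then download the GLB."""
    headers = {"Authorization": f"Bearer {API_KEY}"}

    # 1. Submit the generation job (prompt -> textured mesh).
    job = requests.post(
        API_BASE,
        json={"prompt": prompt, "format": "glb"},
        headers=headers,
        timeout=30,
    ).json()

    # 2. Poll until the service reports the job as finished.
    while True:
        status = requests.get(f"{API_BASE}/{job['id']}", headers=headers, timeout=30).json()
        if status["state"] == "done":
            break
        time.sleep(5)

    # 3. Download the result into the Unity Assets folder, where the
    #    editor will pick it up (and its textures) on the next refresh.
    mesh = requests.get(status["asset_url"], timeout=60)
    with open(out_path, "wb") as f:
        f.write(mesh.content)


if __name__ == "__main__":
    generate_asset("low-poly stylized wooden table", "Assets/Generated/table.glb")
```

The appeal of an editor-integrated plugin is that a loop like this happens behind a UI, so the whole iteration stays inside Unity.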
But what are your thoughts on that?

Yeah, it seems really cool. I didn't know they were able to generate meshes, which is really nice. What I think with this type of tool is that, at the level of accuracy AI has reached on meshes right now, it really makes sense to have an AI generate the textures, because that part performs quite well. You can have wood, metal; it's much easier to get something that really matches what you need, what you want. But from my experience with AI-generated 3D meshes, even with a lot of prompt engineering, as we say, it takes a lot of time to really narrow the model down to what you actually want. So yeah, I'm really curious to test it as well, but I definitely agree on the texture part: I think it really speeds up the process. What about you, Guillaume?

Just one question. Well, two questions, in fact. First, is this something we'll have to pay for? Because with Unity these days, we can be a bit suspicious of some plugins. And the second one: do you have any information about how the AI was trained and how our objects will be used as a knowledge base for the AI? That's another ongoing subject with Unity and AI, the fact that content could be used for their own knowledge base, which is not something 3D artists are necessarily willing to share or give away. So do you have information about this?

I don't, but yeah, it's definitely something to dig into, to see what the constraints are behind it: what the policy and regulation are on your data, on the models you generate, and on what the AI has been trained. So I definitely need to look into that. But I mainly wanted to show that this is coming inside the game editor, to speed up the process of making environments and to ease that part, which is great because, as we said, there are not a lot of environments and games to play with right now, and they take a lot of time to develop. So bringing this kind of tool will help bring more games, or more environments and more realistic environments, to the party.

Yeah. Speaking of that, I saw that Roblox also has a generative AI inside Roblox Studio. So it's not Unity, but it's coming to this kind of platform as well.

All right. And my last subject is this Google paper about NeRF: SMERF, for Streamable Memory Efficient Radiance Fields for Real-Time Large-Scene Exploration. A very long name to say that it's a streamable NeRF. NeRF had an issue before: it was too heavy to load on different platforms and actually look at the 3D environment you generate from pictures. Here they found a way to stream only the data that is relevant to your current position in the scene, to gain performance and be able to navigate through it directly in a browser. Before, it was something like 0.25 FPS with a normal NeRF. What you see here is a recording, made from a lot of pictures and re-sequenced afterward, but with their system they reach real-time at 144 FPS.
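The core streaming idea is to partition the scene and keep only the parts near the viewer resident, fetching and evicting chunks as the camera moves. Below is a rough sketch of that position-based loading, assuming a simple grid of tiles and a caller-provided fetch function; it is an illustration of the concept, not the paper's actual implementation.

```python
from dataclasses import dataclass, field

TILE_SIZE = 10.0   # assumed tile edge length in scene units
LOAD_RADIUS = 2    # keep tiles within 2 tiles of the camera


@dataclass
class TileCache:
    """Keeps only the sub-models (tiles) around the viewer resident in memory."""
    loaded: dict = field(default_factory=dict)

    def tile_of(self, position):
        """Map a world-space position to integer tile coordinates."""
        x, y, z = position
        return (int(x // TILE_SIZE), int(y // TILE_SIZE), int(z // TILE_SIZE))

    def update(self, camera_position, fetch):
        """Fetch missing nearby tiles and evict the ones the camera moved away from."""
        cx, cy, cz = self.tile_of(camera_position)
        wanted = {
            (cx + dx, cy + dy, cz + dz)
            for dx in range(-LOAD_RADIUS, LOAD_RADIUS + 1)
            for dy in range(-LOAD_RADIUS, LOAD_RADIUS + 1)
            for dz in range(-LOAD_RADIUS, LOAD_RADIUS + 1)
        }
        for key in wanted - self.loaded.keys():
            self.loaded[key] = fetch(key)   # e.g. one HTTP request per tile
        for key in self.loaded.keys() - wanted:
            del self.loaded[key]            # free tiles that fell out of range
```

The point is that memory and bandwidth stay bounded by the neighbourhood around the camera rather than by the whole capture, which is what makes playback of a large scene in a browser feasible.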
So here the camera navigates freely in the 3D environment, and you can see that it was made only from pictures taken at the place. Directly on the website, they offer a way to navigate their system yourself. When you get up close, you can see there is not a lot of detail there, but if you look at the scale of the place and the amount of points being loaded, it's very impressive. So, Fabien, do you have any thoughts about that, if you have seen the news?

Yeah, I mean, it's amazing that it works so well over the internet, in a browser. I think there are a lot of possible usages: virtual tours, virtual retail, movies, experiences like that. Maybe it's competition for the Apple Vision Pro's spatial videos... no, maybe not, because it's only one point of view here, it's not stereoscopic 3D, but maybe that will come. So yeah, it's pretty amazing.

Yeah, it's very fun to see. It shows there is some kind of competition between two technologies, the first one being NeRF and the second one being Gaussian splatting, and we can see they are trying to correct one of the main issues of NeRF, which is that it was not real-time. And of course, the results are great. If we can have this kind of result in real time and get it inside VR headsets, as we can do with Gaussian splatting, I guess NeRF would be back in first place, which it is not right now, because Gaussian splatting has more traction, especially thanks to its real-time capability. So let's see what can be done. And I don't know if it's open source or not, but it would be great if we could try it on our own.

The paper seems to be open, but I don't know if they released anything directly usable for us. It's using Google DeepMind in the background, so I don't know how much of their AI tooling is involved; if it is, I don't think it's open, because that DeepMind tooling is not available yet. We talked about that before: it may be a way to show why it's not available to everyone yet, and to show off what they are working on before they release everything early next year. We'll see. And that's it. Fabien, do you want to react more on that? All right, so we can move on to your subject, Guillaume.

This is a perfect transition. You talked about NeRF; I would like to add that Gaussian splatting is getting an upgrade as well. We all know that this kind of algorithm can be a real pain, because after you do your capture, you have to wait for the training of the Gaussian splatting algorithm before you can see the results. Sometimes it's frustrating: you spend a lot of time capturing your images and the result is not as good as you expected. This new paper shows that they implemented SLAM on top of Gaussian splatting, which gives you a kind of real-time preview of what you are getting. What they do is work from a video stream: they grab an image at roughly three frames per second and use it for the Gaussian splatting reconstruction. So instead of the classical point clouds we've seen in past years when scanning a room with SLAM, they use the Gaussian splatting rendering at the end of the pipeline. It's very interesting to see what you are scanning as you scan it, and I think this is a real improvement of the whole pipeline for getting something usable out of Gaussian splatting.
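At a high level, a pipeline like the one described here interleaves tracking and mapping: keyframes are sampled from the video at a low rate, each keyframe's camera pose is estimated against the current model, new Gaussians are spawned where the frame sees unmodelled regions, and the splats are optimized for a few iterations before the next keyframe arrives. A rough outline of that loop follows, where every helper function is a hypothetical placeholder rather than code from the paper.

```python
def incremental_gaussian_slam(video_stream, keyframe_hz=3, opt_iters=50):
    """Sketch of a SLAM + Gaussian splatting loop (all helpers are hypothetical)."""
    gaussians = []   # the growing splat model
    poses = []       # estimated camera poses, one per keyframe

    # Sample keyframes from the stream at a low rate (a few per second).
    for frame in sample_frames(video_stream, rate_hz=keyframe_hz):
        # Tracking: estimate the new frame's pose against the current model.
        pose = track_pose(frame, gaussians, prev_pose=poses[-1] if poses else None)
        poses.append(pose)

        # Mapping: add Gaussians where the frame observes unmodelled regions.
        # Depth can come from an RGB-D sensor, or be estimated from RGB alone.
        gaussians += spawn_gaussians(frame, pose)

        # Refinement: a few optimization steps against recent keyframes, which
        # is what keeps a usable preview available while you are still scanning.
        for _ in range(opt_iters):
            optimize_step(gaussians, recent_keyframes(poses))

    return gaussians, poses
```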
They also added the fact that you can use an RGB camera, the classical kind we've been using for Gaussian splatting so far, but also RGB plus a depth camera, which should give better results. However, the camera they use is an Intel RealSense, which is a fairly old device, so the depth sensor is not that good in terms of resolution. Maybe if you can plug in a better depth camera, you could get better results. They will be releasing the algorithm in February, so we'll be testing it next year, I guess, and we'll do a follow-up on this.

It would be cool if it could also work with LiDAR cameras like the iPhone's. That could be a really nice add-on as well. And if you detect that one of the pictures you took was blurry and it's impacting the whole rendering, is there any way to go back to the pictures that were used and remove some of them?

Well, I don't think that kind of feature is in this algorithm specifically. However, I've seen that Luma AI, I believe, is also offering Gaussian splatting as one of their services, and they let you select and erase the points in the Gaussian splatting rendering that are out of place or creating noise. So you can do some post-processing on your Gaussian splats using Luma AI. I haven't tried it yet, but I will. We already know that in Unity, for example, you can define exclusion and inclusion volumes for the Gaussian splats to get a better result, so there are improvements happening in this post-processing and enhancement of the Gaussian splatting display as well. Very interesting to see the evolution of this technology too.

Like you said, I agree with you, it's very painful to take a lot of pictures, go back to your computer, run it, and see that something went wrong and you don't end up with the 3D model you wanted. You lose time going back and retaking pictures. So yeah, tools like that could help the process. Also, maybe detecting that you are not moving and taking the picture only when you are stable could help too.

Okay, so I guess we covered all the topics we wanted to address. Do you have anything more to add, guys?

Yes, very quickly. Microsoft released Word and Excel for the Meta Quest. I tried them and, as expected, the experience is not really great, to be honest, because it simply puts Word or Excel in mixed reality in front of you. And because Word and Excel involve a lot of mouse use and typing on a keyboard, then until we have, I don't know, some kind of brain interface to type really quickly, and headsets that are lighter with better resolution, I personally don't really see a use for Word or Excel in mixed reality. So yeah, that was just a very quick test.

Thank you for your feedback. Is there any way to connect your keyboard?

Yeah, I was going to ask the same: is there any way to connect your keyboard and use the passthrough to see it and type directly, like what they envision with the Vision Pro as a use case?

I didn't try that, I didn't go that far. But I think I saw pictures on the website showing a keyboard, and it's really full mixed reality: you just have the big window in front of you. So I'm sure there are ways to connect a keyboard. Okay.

Okay, Seb, anything more? No, that's it for me. Okay, great. So that's a wrap for today.
Just so you know, there won't be any more episodes in 2023. We'll be back on January 9th for a new season of Lost in Immersion in 2024. So see you next year, guys, and have a happy holiday season. Thanks, happy holidays. Thank you, guys.

Credits

Podcast hosted by Guillaume Brincin, Fabien Le Guillarm, and Sébastien Spas.