Welcome to episode 36 of Lost in Immersion, your weekly 45-minute stream about innovation. As VR and AR veterans, we will discuss the latest news of the immersive industry. Let's go.

Hello, guys. Hello. Hello. Good morning. So, Fabien, do you want to start, please?

Yes. The topic I want to talk about is something that is gaining the industry's attention right now: 3D object generation using generative AI. We've heard a lot about generative AI for text, with ChatGPT being the most famous one, and generative AI for images with Midjourney, DALL-E, and others. Recently the technology has evolved and is coming to 3D objects. In the past we've tested a few tools that were not really convincing; they didn't really seem to understand the prompts we were giving them. This week Luma AI, the company famous for its app that lets you capture NeRFs, and which now supports Gaussian splatting as well, released its own model, currently in research preview, to generate 3D models. It's accessible through Discord, as you can see here, and it's very similar to Midjourney: you just type what you want to create. Maybe we can just try it. Seb, what do you want to create?

An alien spaceship that is glowing, in the shape of a bat.

I'm creating an alien spaceship. Okay. I was quite surprised, to be honest, by how fast it is. You will see, in just a few seconds the model will be created, and... okay, here it is. You can see that it's not perfect, but it kind of understood your point. I think the second one here is kind of nice. And what's really interesting is that you can actually download the model, so the workflow from prompting to generating to downloading works pretty well.

I also created this cute robot, and what's interesting as well is that you can give instructions. The T-pose, which is the standard starting pose for animating a 3D character, is actually supported, with more or less success, as you can see on the other ones, for example on this one. But this one was actually pretty good, so I downloaded it and opened it in Blender. What's interesting to see is the way the mesh is generated: as you can see, it's not a low poly mesh, it's really high poly, and I think that may be how they managed to get the generation working with AI. Having a perfectly symmetric low poly object must be very, very difficult to achieve with this kind of generative AI. But anyway, the model is pretty nice for an automatically generated one, and it's kind of funny, you see a lot of people having fun with this, trying to build a small world by generating a lot of objects and gathering them into virtual worlds. So it's not perfect, as you can see, but it's a first step towards being able to generate our own worlds and our own objects without any 3D design skills. It's interesting to see the technology moving quite fast, to be honest. This one is actually pretty good, the Stormtroopers. So this is the first tool I've tested that can go from A to Z.

I think we also mentioned this other service. I haven't tested it yet, but you can fine-tune the selection you are making: you first select whether it's an object, an animal, or a human, so it's really different from Luma AI, which relies entirely on the prompt. And 3DFY, which we tested a couple of months ago, was not really convincing.
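Going back to the Blender step described above, here is a minimal sketch of importing one of these downloads and checking how heavy the mesh is. It assumes the export is a glTF/GLB file and is meant to be run from Blender's scripting workspace; the path, the file name robot.glb, and the decimation ratio are placeholder values for illustration, not something the service prescribes.

```python
import bpy

# Import the generated model (placeholder path; any glTF/GLB export works).
bpy.ops.import_scene.gltf(filepath="/path/to/robot.glb")

# The glTF importer leaves the newly imported objects selected, so inspect those.
for obj in bpy.context.selected_objects:
    if obj.type != 'MESH':
        continue

    # Rough triangle count: an n-sided polygon splits into n - 2 triangles.
    tris = sum(len(poly.vertices) - 2 for poly in obj.data.polygons)
    print(f"{obj.name}: ~{tris} triangles")

    # Generated meshes tend to be dense, so add a Decimate modifier to get a
    # lighter version for real-time or AR use (ratio 0.1 keeps ~10% of faces).
    decimate = obj.modifiers.new(name="Decimate", type='DECIMATE')
    decimate.ratio = 0.1
```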
And something that I didn't manage to get working yet is Anything World. What they claim is that you just have to upload your model and their service will generate an animation on top of it. I tried with the small robot, but it didn't work, so I need to put more research into that. So yeah, I'm curious to know what you think. Let me start with you, Seb.

Yeah, the progress in a couple of months is impressive, seeing those results. I tried the same prompt on Masterpiece X and, to be honest, the result you got on Luma seems way better than what I got, particularly the texture on Masterpiece X. Plus the shape ended up looking like actual planes that exist, weirdly, whereas here it went way more creative, I think. And when you go through the assets they generated, the quality seems very high. So it seems like it's starting to be usable, yeah.

Something that I forgot to mention is that when an object is created and you like it, you can click on "refine" here, and it will not refine the mesh, but it will refine the texture. So I just launched it on the spaceship we created to see how it looks. I hope it's not too long. What about you, Guillaume?

Yeah, it's very interesting because it feels like the first version of Midjourney, when we had those very strange results with the hands and the faces, and we know where it's going now, as we can see with the great results we can get with Midjourney one year later, because that was in November 2022. So we can hope that in one year or so we can have very detailed and very effective 3D generation. I'm very enthusiastic about this. I tried it this past week and it's very interesting indeed to see how efficient it is, especially for the 3D generation: it only takes a few seconds or minutes. However, the mesh is not perfect. So, as with any AI tool right now, I guess this is a great first step towards your project. For prototyping it's an awesome tool, especially if you don't have 3D expertise, and if you are a 3D artist it can give you a very good base. I don't know if we could get some feedback from 3D artists to know whether this kind of 3D model would be of good use for them to start a project, maybe as a basic shape, or maybe just to give them some ideas for starters. It would be interesting to ask 3D artists about this.

But yeah, overall the workflow is really nice. For those of us doing augmented reality, for example, it's great to have these very quick assets that we can import and visualize. I saw someone directly use a generated 3D object inside an iOS app and put it in augmented reality right afterwards. So the workflow is very nice for anyone who doesn't master 3D, or even for 3D artists who want to do very fast proofs of concept or prototypes. So, great work.

And for the last thing you tried, the animation, I guess you need a specific skeleton for your 3D model. I don't know if they give you that, because if the skeleton of your object is not what the service expects, your animation won't work.

I think they are creating the skeleton. Yes. Supposedly. So yeah. Yeah. There also, it's quite challenging, I guess, depending on the kind of 3D model you have. They were showing an ant; I don't know how they manage to generate a skeleton from scratch, but yeah, why not? Yeah.
And back to what you were saying about AR: the website they are using here uses Model Viewer, which is a free web component. So I tried it, and you can actually preview your model in AR directly from the website, just using Model Viewer. So yeah, it's working quite nicely.

So let's see if the generation is done. No, 45%. So yeah, as I was saying, it's a first step, but very soon we will have a lot of new possibilities. And as we have mentioned a lot during the podcast, one of the key components of 3D worlds and of the expansion of this kind of technology is the ability for people to create their own worlds, create their own models, and capture their own models. So with Gaussian splatting, NeRFs, and these new technologies, I think it opens up a lot of interesting opportunities. Anything else to mention on this?

No. It's very interesting, and we hope to see some evolution in the upcoming weeks. We'll be waiting for your texture to finish, and maybe Seb can jump in with his topic.

So on my side, I wanted to talk about the experimental feature that was deployed for the Quest 3, which allows you to have occlusion, a soft occlusion. The soft occlusion looks way better in the headset. Here they're showing the hand of the user, but the arm can be masked as well, and really everything can be occluded inside your environment. So I did a couple of tests on my own: I tried to put an object behind my screen and put my hand in front of it. As you can see in the recording, it's not perfect, but that's partly because it only records the left eye; when you get the 3D view in the headset, the occlusion looks way closer to my arm. And as you can see, the occlusion and the anchoring of the object inside the real environment make way more sense with this. You don't have to worry about having a scan of your environment prepared beforehand; it can be done live. So I think it enables a lot of things. I also need to try it with someone walking in front of an object in real life. That's a test I need to do, but with a bigger object, because right now these objects are too small to really test that. So that's the next step for me: test it and see how it handles this kind of thing, and if the user in front of me is waving his hand, for example, how it reacts and how fast it updates the occlusion. So this was my first topic. I don't know if you have any feedback or questions on it.

Yeah, I think when we first tested the Quest 3, one of the biggest questions we were wondering about was that when we first put it on, in the Quest home environment, there is no occlusion immediately. So are they rolling out updates to the software that enable that, or is it just something that developers can start to use?

That's something developers can start to use, but it's an experimental feature. You even have to type an ADB command to enable it on the headset, and you have to be on version 58 or later, I think, something like that.

To answer your question, I just want to bring up a complementary subject here. There was an interview with a Meta executive last week, a Q&A, and during this interview somebody asked whether they are planning to improve the mixed reality experience. They confirmed that there will be updates to give us a better experience, especially correcting the distortion, and also improving the depth and occlusion tools.
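Coming back to the Model Viewer point above, here is a minimal sketch of wrapping a downloaded GLB in a web page with Google's model-viewer component so it can be previewed in AR from the browser. The CDN URL and the file name robot.glb are assumptions for illustration; adjust them to your own setup.

```python
from pathlib import Path

# Minimal page using the <model-viewer> web component. "ar" enables the AR
# button, and "ar-modes" lists the AR back-ends to try, in order.
HTML = """<!doctype html>
<html>
  <head>
    <script type="module"
            src="https://unpkg.com/@google/model-viewer/dist/model-viewer.min.js"></script>
  </head>
  <body>
    <model-viewer src="robot.glb"
                  ar
                  ar-modes="webxr scene-viewer quick-look"
                  camera-controls
                  auto-rotate
                  style="width: 100%; height: 80vh;">
    </model-viewer>
  </body>
</html>
"""

Path("index.html").write_text(HTML, encoding="utf-8")
print("Wrote index.html; serve it with: python -m http.server")
```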
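On the experimental-feature point: enabling it involves setting a system property over ADB. The sketch below shows what that can look like, but the property name is an assumption based on how earlier Quest experimental features (such as the Passthrough API preview) were toggled; check Meta's current developer documentation for the exact command your OS and SDK version expect.

```python
import subprocess

# Assumed property name; verify against Meta's developer docs before relying on it.
EXPERIMENTAL_PROP = "debug.oculus.experimentalEnabled"


def enable_experimental_features() -> None:
    """Set the experimental-features flag on a connected Quest headset via ADB."""
    subprocess.run(
        ["adb", "shell", "setprop", EXPERIMENTAL_PROP, "1"],
        check=True,
    )


if __name__ == "__main__":
    enable_experimental_features()
```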
And one thing that was very interesting, or intriguing, to me is that he talked about the fact that they are collecting data from people using mixed reality, especially lighting and volume data. Nobody picked up on it at that moment, but it's a bit worrying, I guess, because it means that at some point Meta is using this so that some kind of AI, or their algorithms for occlusion and distortion, can learn from what people are doing with it, so they can adapt and provide a better software solution to correct those problems. At this point it didn't make much noise, the fact that this data is being collected, but for my part I think it is sensitive data, especially if it is volumetric information about your environment. I don't have much more information because it was very recent, it was last week, so I don't know if somebody will write an article about it or ask more questions. So it's just a piece of information, since you were mentioning Meta.

And the other fun fact is that when you are waving your hand in the video, we can see this kind of depth shadow, as I call it, where you have an untreated area around your arm. It feels like what we had when we were using the Kinect, so it's funny to see this effect come back. I guess they are just using a single depth sensor, and because of the angle of the projection, of course, you get this kind of shadow.

Well, like I said, you only see the left eye, and when you see both eyes it looks better. However, the aliasing on the borders remains. But I tested the same kind of experience with the Varjo, and I feel like the depth occlusion edges are harsher there: more aliasing on the Varjo than what you can see here.

Yeah, maybe it's just a resolution issue. I guess Meta didn't put in a very high resolution depth sensor compared to Varjo, and of course you would get this kind of aliasing.

But what I'm saying is that I feel it's better on the Quest 3 than on the Varjo. Yeah.

Also, something to mention about Meta is that they updated and enabled a feature that lets the user save some battery on the headset, but then you are putting the rendering quality back to Quest 2 levels. So yeah, you get more battery life, but a much lower rendering quality inside the headset. So it's not really usable, except maybe for fans who want to play a specific game for a long time. So that's it. That's it for me. Do you want to move on, Guillaume?

Oh yeah, I thought you had another subject. I was searching for the depth sensor resolution, but apparently it's not easy to find, so I guess I'll find it later. Okay.

So, about my subject. It's not so much a technological or hardware innovation; it's about a use case that we have known for decades now, I can say: VR is well known for mental health treatment, especially for military people who have post-traumatic stress disorders. And now they have found a new way of using this, as they are sending VR headsets to the International Space Station. So, two things here. First of all, this is the first time that they are mentioning that astronauts have mental health issues on the International Space Station. I guess this was a well-kept secret, but apparently they can have some issues up there. So they are sending an HTC Vive Focus 3 there. They tested it in zero-gravity planes before that, and apparently the electronics work well.
Maybe they made some adaptations to some components, but they are sending headsets on the next trip to the International Space Station. Basically, it helps the astronauts change their environment and feel better. Maybe at some point you can feel a bit claustrophobic, feeling that the station is not that big. Well, if you have tried the ISS application on the Quest, you can see that it's not that small, because the ISS has a lot of modules now. But I guess when you are spending several weeks or months there, you can feel a bit lonely or whatever. So it's very nice to see that this use of VR for mental health is still ongoing and is being deployed as a real solution for space professionals. Very interesting to see. And for my part, I think this is a new step for VR as well, as it is getting more common to use this technology to help people with their health issues, and especially mental health, which is one of the most obvious uses for VR. As I mentioned, we have known this for about two decades now, so it's very interesting to see it finally deployed as a real solution for professionals. I don't know if you have anything else to say about this.

Yeah, I think it's very interesting, as you were saying: first, that the hardware works in low gravity conditions, and second, that real applications for mental health are starting to gain traction. Something that is quite noticeable, especially in the store on the Meta Quest 3, is that one of the first categories is wellness. So, wellness. I'm proud of you. I think that VR is useful. Yeah, I guess. Thank you. Thank you.

There. Okay, so it generated a texture that is actually pretty good, with good details, and it matches the spaceship pattern we asked for. It matches the bat pattern and the glowing as well. So, pretty nice. Yeah, pretty nice. Good job.

So, the last topic is about another pair of glasses. These are called Nimo. They expedite mesh; the compiler is doing basically the same as the web host. Once again, we can see those words keep coming back: it's very much dedicated to work. They took the human field of view into consideration in their presentation. Everything they mention, every single tool they are using inside, has six degrees of freedom. It's real. Basically, we have... here you can see the 3D spatial UI. This is basically all we have; it's very fresh news. You can reserve yours right now, but I don't think we have a release date yet.

It makes a nice transition with what we talked about last week. Maybe our... I was about to say friend, but we have never met him. Charlie Fink, who talked about the fact that 2024 to 2033 will be the years of assisted reality. With those smart glasses, he may be right about this. We are seeing more and more devices like this, oriented towards assisted AR, meaning that you can work without a screen, with a very light hardware device. I don't know what you think about this.

It looks really nice. It looks very Apple: they used the styling of Apple's websites and of visionOS, sorry, and the wording as well, with spatial computing. It looks like a really cool device. Again, similar to what we mentioned last week about the X3, the question is the field of view... Do you have anything more to add? Nope.

Okay. I think this is it for today. I know that Seb is also working on Gaussian splatting, so hopefully we can have a special episode next week, because I also have a lot of stuff to tell you about.
Especially the best practices. I guess we can finally have some very good guidelines on when to use which technology to get the best results with Gaussian splatting. Hopefully next week we'll have this Gaussian splatting special episode. Until then, have a nice week, and see you, guys.

Credits

Podcast hosted by Guillaume Brincin, Fabien Le Guillarm, and Sébastien Spas.