Welcome to episode 33 of Lost in Immersion, your weekly 45-minute stream about innovation. As VR and AR veterans, we will discuss the latest news of the immersive industry. Hello, guys. So, Fabien, if you want to start, please.

Hello, yeah, thanks. Today I want to talk about headsets again. The news front is very busy with a lot of headsets at the moment, especially with the release and availability of the Meta Quest 3 last week, which we talked about in the previous podcast, and the rumors that keep going around about what's coming next year. Obviously, the Apple Vision Pro is hopefully coming next year, and will be available at a very high price. The rumors that appeared this week are actually on the opposite side, about the cheapest headsets. There is a rumor of a Quest 3 Lite that would be around 200 US dollars, and it seems it will be a VR-only device, so no mixed reality. The chipset would be the same as the Quest 3's, the Snapdragon XR2 Gen 2, and it seems it would use the Quest 2 lenses. So that paints a picture: yes, it's very cheap, but for that price the quality will probably be quite low, especially because of the lenses.

On the Apple side, it's ten times more expensive: even the cheapest one would be around 1,500 or 2,000 dollars for a non-Pro Apple Vision, with a smaller CPU, probably similar to the ones in the iPhones, no outward screen, and probably cheaper materials as well. So it's quite interesting to see that there is this very expensive headset available, and, hopefully next year, a very cheap one. We have discussed this quite a lot here, and my opinion is that we should focus on content, on community, on everything around the headsets. If the industry only focuses on making headsets that are either as cheap or as good as possible, I'm not sure that's the solution to a broad acceptance of VR and MR. I'm curious to know what you think. And as a conversation starter, we can talk about how, one week after the release of the Quest 3, we are seeing a lot of people trying mixed reality and saying that mixed reality is really amazing and fun. I don't know if it's just buzz that will calm down after a few weeks. So, yeah, I said a lot of things, and we can dive deeper. Seb, if you want to start.

I just wanted to add that on the Quest 3 Lite, they don't plan to sell controllers with it at that price, so you would interact with the headset through hand tracking only. We'll see how that goes for them. Like you said, I'm not sure this is the right way to go. People already bought a lot of Oculus Quest 2s, and if the quality remains about the same, I don't see a lot of people buying a new one; I don't see any reason. I think the Quest 2 is at 300 right now, so yes, it's a bit cheaper, but not that much, and without controllers I think you lose a lot of VR gaming capability. Hand tracking is nice, but for many interactions it doesn't make a good interface for the user.
So, yeah, I feel it's not going to be a commercial success. And like you say, I think the issue here is the community: having some nice content to go to and interact with other people. That's what's missing, along with interaction across different interfaces, with people on PC, on Mac, on different headsets, with no limitation on which headset you're using, so you build up a bigger community. That's my opinion.

Cool, thanks. Yeah, basically the same thing we've been saying for quite a few months now. Until we have the killer application for mainstream users, these headsets are targeted at VR specialists or experts, and that community won't buy cheap headsets. We saw it with Immersed, I guess it was last week: those people are after quality and the experience; they don't want to discover what virtual reality is. I did a student presentation last week, and every one of them had already tried VR, and they were well aware of what a good experience is and what is not. I let them try older headsets, and they simply said, no, I can't use this kind of thing, because the quality and the experience are not as high as they'd like, since they use PSVR 2 or Quest 2. Those who already know what VR is are now used to a higher level of quality.

So I'm guessing Meta is trying to do an Oculus Go 2, making a simpler, cheaper device to let people discover what VR is, but I don't think that's the step we are at now. If you want to seduce people and make your community grow bigger, you absolutely have to get those killer apps, because I don't think games will be enough. You have to have everyday-life applications. My usual example is a banking or insurance app: if you can do your everyday transactions or open an account without leaving your home, instead of going to the actual building to meet someone, that's a big plus. Or a governmental application as well. So I think this is the time, as it has been for a long time, for professionals to get their hands on these and understand that VR is something that is going to happen whether you want it or not, because this is one step towards the metaverse, or the new iteration of the internet, or whatever you call it. It's now the responsibility of professionals to make those apps, and I guess Meta or Apple should work hand in hand with those companies, or at least make them understand that there is a big market here. And it's always the same problem: they see that the community is not big enough, so they say, no, we don't want to invest right now, we are waiting for the community to grow bigger. It's always the same issue. We had this at the beginning of VR with games: the big editors and studios didn't want to make big VR games because the community wasn't there. That said, I guess we all agree on this, but if someone at Meta is listening to us: please make content and software, not cheaper headsets. It won't solve your community issue, we believe.
The other thing I saw about the Apple Vision Pro relates to an issue we identified at some point: the headset is too heavy for people to use, and they are treating that as a major priority to correct. By saying they want to do a cheaper version, I guess they are getting rid of the front screen, and maybe they will move the cameras closer to the eyes to avoid distortion. Maybe, I don't know, but we discussed it: the first version was clearly a showcase, something to make people dream of what the future could be, and now they are coming back to reality with the technical issues, maybe with less design and a better experience for the user. And about the price tag, 1,500 or 2,000: well, that's the price of an iPhone right now, so I guess if people are buying iPhones at this price, they will be able to buy the headset as well.

Yeah, I totally agree with all of that. I have the Quest 2 and the Quest 3, and back to the topic of quality: the difference between the lenses, the CPU, and the mixed reality is really what makes the Quest 3 stand apart, and if you go back to the Quest 2, as you were saying about your students, you think, hmm, I'm not sure. There's no going back.

I guess these are the pancake lenses; I couldn't remember the real name of those lenses, but yeah, you can't go back to Fresnel lenses at this point. Yeah, okay, cool. So I think we all agree on that topic.

So, bouncing on your topic, Fabien, I wanted to talk about mixed reality on the different headsets I have now tested and worked with: the HTC Vive XR Elite, with which we did a project for Europa-Park in mixed reality; the Quest 3, which Fabien talked about; and the Lynx headset. All of them are a great experience in mixed reality, for different reasons actually.

The first one is the XR Elite. In terms of video quality, I think it provides the best image, without distortion: when you're moving an object in front of the camera there is not a lot of distortion, and the video quality even in dark areas is quite good compared to the two other headsets. The Quest 3 in a dark environment has a lot of grain in the video, and a lot of distortion when you move your hand in front of the camera, like Fabien said. I can confirm it's really annoying, especially when you are moving around your space and getting closer to objects; they start to look a bit weird. So, something to consider. However, in terms of lenses and image quality, the Quest 3 is clearly the best: what it can render, but also the size and position of the lenses, and the fact that it tolerates imperfect alignment. I tried setting the IPD a bit off from my own, and it was still okay for me to look at the content.

As for the Lynx, in terms of hand tracking it's the best one. That's really where you can capture all the movements of your fingers and have really good interactions with hand tracking. The Quest 3 is just behind, though, I would say. It has great hand tracking, and also a wide tracking field of view: you can put your hand close to your body and it already starts tracking; you can see your fingers moving. That's quite amazing. And there are also some samples now available where you can have hand occlusion.
So you see your hand in front of the 3D object, and you feel like you really interact with it: it's not hidden behind the 3D, it's really on top of it. For me it's better than the Varjo, from the test I did at Laval Virtual. I really think they did great work on the shader. It masks a bit the fact that the occlusion is not perfectly aligned with the border of your finger, but the way they did it is quite amazing. They also have a sample where you hold your hand out like this, and when you close it, you see the 3D avatar model come in and replace your hand, and the transition is quite impressive. I think the Lynx has really good hand tracking too, but when you see the bones overlaid on top of your hand, it feels not perfectly aligned. I would have to do some tests trying to mask the hand, but I think it needs some computer vision to achieve a good result.

In terms of IPD adjustment, the Quest 3 is the best one, with the controls underneath. The HTC Vive XR Elite is okay, but the button is not that easy for a user to operate. And the Lynx is the worst because of the kind of lenses it has. You can see them here: it's really important to be right in front of the lenses, and the way you do that is to pinch the lenses here, like you see. For a standard user that's quite hard to do, especially if you wear the mask that keeps light from coming in, which for me is really important. As soon as you are in a natural environment, you get a lot of reflections coming from the sides when you wear the headset, like you can see here, and that makes it even more important to have the lenses right in front of your eyes, and pinching them is not easy while you're wearing it. So that's my overall feedback on the headsets. The HTC device is quite good too, but it's more expensive now than the Quest 3. So overall my choice would be the Quest 3 for mixed reality experiences, also because it has the best processor, so you get a huge improvement in what you can render in the headset for the virtual 3D models. It's really impressive. I don't know if you have any feedback on that, or if you want to talk about it.

Yeah, I tried the hand tracking on the Quest 3 as well, in the game that does the onboarding. I forgot the name of the game, but it's really well done; you can really interact. Yes, thanks. It really showcases the capabilities of the Quest 3's hand tracking, and it feels really natural to push the virtual buttons. Of course there is no force feedback, but it still feels quite natural, the scenario is quite nice, and the transition between the virtual world and mixed reality is quite well done too; it's quite smooth. So that's my two cents on the hand tracking on the Quest 3, which I think is really good.

Well, for my part, I don't know if it's a good thing or not, but it means that one week after the release they are making some huge software updates, with those features for the hand tracking. So maybe those were not ready at launch, or maybe they have a strategy of giving us new tools as the months pass, which would be a great thing, because we could live the evolution of the headset through its life. It would be very nice to have those big updates about tracking or maybe distortion, meaning that they are still working on the headset.
They don't just release the headset and leave it at that, and since these are not bugs but features, it's very encouraging for the future.

We talked a lot about the Lynx, and Seb, I guess this is the disappointing part now. You answered my question about which headset to go to if you want to showcase or do a mixed reality project: I guess it's the Quest 3, without much hesitation, because of the price, power, and overall quality of the experience. I don't know if you communicate with the Lynx-R1 team, but this is what we were afraid of: they missed a huge opportunity to sell their headset at a large scale, because you're surely not the only one doing this kind of side-by-side review. Unless they can improve their headset very quickly to match the competition, I unfortunately don't see how they can make their place in the market, because Meta is so huge it's hard to compete against. That's just my point. And I was waiting for this review to know which headset I'd have to buy, but I guess my mind is made up now.

To complete my answer: for events, or theme park applications for example, or museum applications, I think the Quest 3 will be better. But if you want to use it as a professional tool and want to use the cameras and have complete access to everything on the headset without revealing anything to Meta, I think the Lynx is still the best choice. It's an Android system, so you don't have all these Meta accounts and so on to deal with. Plus, you have completely free access to all the cameras, even the hand tracking cameras, so you can do whatever you want with that, to better track an object or an image and things like that. That is not directly available on the Quest 3 or on the HTC Vive XR Elite; access to those cameras is restricted.

Okay, so maybe in R&D or research projects you could use the Lynx-R1 more easily, because I guess you said you can even change some components of the headset if you want. Exactly, yeah. So it's more open, down to the lower layers: you can touch the hardware, you can touch everything in the software, so it's a very open system. But our main business here is for professionals to use these at medium or large scale, and on price and the ability to deliver lots of headsets, Meta of course wins, because Lynx doesn't have that supply chain, that production line, ready yet. Not yet.

One more thing I wanted to mention is that the balance of the Lynx is quite good. The position of the cameras, as we've been discussing for a couple of weeks, is really centered, so you have no distortion. Even if the screen has lower quality, I feel more comfortable walking around, even outside, with this headset than with the other two because of that. Once I managed to get the calibration adjusted right and wore those side masks, it was really comfortable to walk around, and you feel like you are looking at the environment with your own eyes, just at a lower quality.

One thing I saw: there are some Meta Quest 3 teardowns being posted right now, and they show that the main volume of the headset is the battery. So my question is, why didn't they do what, I guess, the Quest Pro did, with the battery at the back of the head? That's right. Why didn't they do that?
Because they would have, I guess, 30% less volume in the headset, which would make it very, very small, and by putting the battery at the back they would balance it better and make the experience more comfortable. So I don't know why they didn't do that; it kind of feels natural to have the battery at the back now.

I saw a post about how you can already do that using the strap they put on the back: you can attach an external battery to the back of the headset to prolong its battery life, which is very low, one hour and a half in mixed reality.

It's lower than the Quest 2, you mean? Well, the Quest 2 was not doing mixed reality, but yeah, one hour and a half. Even if you extend the battery, I think you go up to two and a half hours, which is still a bit low. Then you can swap it and change the battery on the back. It's still consuming a lot, the battery. And it's long to charge; I think it's two hours to charge completely. Okay, something to consider too.

And if that's it on that subject, I wanted to talk about an update we saw on LinkedIn this week, a new paper showing 4D Gaussian splatting: the ability to film a scene with several cameras and then generate a 3D model of the scene over time. As you can see here, it starts to be impressive. You can see how they shot the video, what it does when they generate the 3D model, and what it looks like when you move around. So, yeah, a lot of improvement is happening very quickly in that area. Any thoughts on that, guys?

It's nice to see competition for Apple's spatial videos. And as you were saying, the pace of releases and updates on this is really, really impressive.

Yeah, I don't have much to say, because this is my next topic as well. Yeah, I was going to say, Guillaume, I guess you have done some research on that side, so if you want to switch to your subject, go ahead.

Okay, just let me set up here. For those who are weekly listeners of our podcast, you already know that I've done some experiments with Gaussian splatting, with 360 captures as well as classic videos and classic pictures. The first thing I did after those first experiments was simply update my hardware, because I was completely fed up with waiting one or two days for my dataset to be ready. I bought an RTX 3090, which has 24 GB of VRAM, which, as the Gaussian splatting documentation now mentions, is the minimum for a good experience with the algorithm. After trying it, it's a complete no-brainer: if you want to do Gaussian splatting, please buy a 24 GB card, because instead of one or two days, it takes about 30 minutes. You can do a lot more experimentation and use a lot of different datasets as well.

That's the first thing I wanted to share. If you remember, I was viewing my Gaussian splats with the native 3D viewer; now I'm using the Unity viewer, because it's much easier to use, with the native mouse movement in the interface, and you don't have to press play to see the Gaussian splatting results: they are right inside the editor. So it's very, very nice. You have this cool interface here, very user-friendly, to change different aspects of your Gaussian splatting, which is easier than the native viewer that ships with the Gaussian splatting algorithm.
So if you want to try or use Gaussian splatting, install Unity as well and this project, Unity Gaussian Splatting, which is very easy to find on YouTube and in the Git repo. And with my new video card, instead of passing 200 to 300 photos, I can now go up to 800 to 1,500 photos, which is of course way better, and I get the result in less than an hour, so it's a much better experience.

This was my first take with 360: it was a 360 video from which I kept a frame every two frames, divided each equirectangular frame into eight different pictures, and passed those to the Gaussian splatting algorithm. The result is not perfect, first because my 360 camera doesn't have that high a resolution, and since it was my first test I didn't go everywhere; I just made a very small pass through my backyard. You can see that some objects are better rendered than others.

About the 360 capture, I said you have to do this modification of your input data, meaning the 360 equirectangular image needs to be divided into simpler pictures and then fed as the training dataset to the algorithm. I tried to take several 360 pictures inside a room. Unfortunately, this room was very white, and with this method the interest points just didn't work at all: the point cloud made no sense. I found out that if you use Agisoft Metashape, you can use the equirectangular pictures directly as input for the camera positioning and the interest point calculation. With that approach, even with the white room, my input data worked very well: all my capture points were found and the point cloud was clean. However, you can't use a generic PLY point cloud with Gaussian splatting; right now it has to come from COLMAP, specifically to keep the correspondences between the pictures and the point cloud. I guess Metashape will probably get updates to support Gaussian splatting, like other software providers such as Polycam; they are all jumping on the Gaussian splatting bandwagon. We may also see improvements in the coming weeks where 360 equirectangular pictures can be used directly as input for the Gaussian splatting algorithm. I know some people are using Nerfstudio tooling to do that, to split up the 360 renderings, but I haven't tried it yet.

Another example I did, now that I have more power, is an old photogrammetry project of mine. Even with a very simple video, taken with a phone, not an iPhone, just a simple Android phone, and moving around a bit to make some tests, you can see that the result is pretty good for the area covered by the video. I guess this dataset is 600 photos extracted from the video, and the results are very, very nice.

From here, my next experiment is about how to get some feeling of collision, some 3D representation of this Gaussian splatting point cloud. I'm now trying to extract the underlying point cloud and get it meshed, so that you can overlay the Gaussian splatting and the mesh and have some kind of collision or other tools to use with these results. I don't know what you think about it, guys, but this is where I'm at now.
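Show notes: for listeners who want to try the 360 workflow Guillaume describes, here is a minimal sketch of the preprocessing step only: keeping one frame out of every two from an equirectangular 360 video and splitting each kept frame into eight perspective views for the training set. The file names, field of view, output resolution, and the OpenCV/py360convert tooling are our assumptions for illustration; the episode does not specify the exact tools or parameters he used.

```python
# Sketch: turn a 360 (equirectangular) video into perspective training images
# for Gaussian splatting. Assumes `pip install opencv-python py360convert numpy`.
# All parameter values below are illustrative, not the podcast's exact settings.
import os
import cv2
import numpy as np
import py360convert

VIDEO_PATH = "backyard_360.mp4"  # hypothetical input file
OUT_DIR = "input"                # folder of images for the splatting pipeline
FRAME_STEP = 2                   # keep one frame out of every two
NUM_VIEWS = 8                    # split each equirectangular frame into 8 views
FOV_DEG = 90                     # field of view of each perspective view
OUT_HW = (1080, 1080)            # output (height, width) per perspective image

os.makedirs(OUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(VIDEO_PATH)
frame_idx = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % FRAME_STEP == 0:
        # py360convert expects RGB arrays; OpenCV decodes frames as BGR.
        equirect = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        for i in range(NUM_VIEWS):
            yaw = i * (360 / NUM_VIEWS) - 180  # eight headings around the circle
            persp = py360convert.e2p(
                equirect, fov_deg=FOV_DEG, u_deg=yaw, v_deg=0, out_hw=OUT_HW)
            out = cv2.cvtColor(
                np.clip(persp, 0, 255).astype(np.uint8), cv2.COLOR_RGB2BGR)
            cv2.imwrite(os.path.join(OUT_DIR, f"{saved:05d}_{i}.jpg"), out)
        saved += 1
    frame_idx += 1
cap.release()
```

From there, the images still need to go through COLMAP to recover the camera poses and the sparse point cloud the trainer requires (for example via the convert.py helper in the reference gaussian-splatting repository), matching the COLMAP constraint mentioned above.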
Quite nice. How fast does it run? What is the frame rate when you play the scene?

I don't know where the statistics are; I'm not playing, so it would be on this panel here. I don't know, to be honest. It's very, very fluid; I don't have any problem with it. But then again, this is a big machine I have now. Well, it's not that big, to be honest. I'm a laptop guy, so I didn't have a PC tower, and I sourced every part on Marketplace to get the cheapest configuration I could. With all these used parts, I have a very powerful Gaussian splatting setup for around 1,400 Canadian dollars, which would be around 1,000 euros for you, Seb, and I don't know how much in yen for Fabien, but you'll do the math. It's not that expensive for a very capable workstation. Of course, these are not the latest generation of components, but it's doing pretty well for what we are after.

So that's it for me, with one last point: we've seen the results people are getting with the Insta360 RS 1-inch cameras, making very, very beautiful Gaussian splatting results, and I still don't know how they are doing that at this point. It's very strange. I don't know if they have much more computation power, or some specific algorithm they are using to extract the pictures from the 360 equirectangular input data. So I'm still trying to figure out how to get a very nice Gaussian splatting result with 360 cameras, because in my experience this is the best way to get quick results, quick captures: you don't have to walk around and take a lot of different pictures. As you can see here, with more classic methods like photogrammetry, where you take pictures or make a movie of your scene, once you step out of the range you've captured, the result is not that good. This could be corrected by using 360 input data, because you would be capturing the whole scene at every frame, which in my experience would be much easier and more efficient. So I'm still working on 360 capture, because I think this is the key to something very efficient and very powerful with Gaussian splatting.

You may need to level up your 360 camera. Yeah, Christmas is not that far away. I could get an Insta360 camera instead of my old Gear 360, but the proof of concept has been made at this point, I guess, so we'll see what I can do with a better 360 camera at some point. I can make a recording for you. Oh, you have an Insta360? Then yeah, please do. Okay, well, stay tuned for the next episode of the Gaussian splatting saga. A nice park in Tokyo. Yeah, the season is getting a bit short here: I could capture the magnificent colors of the Canadian autumn as well, but it's a bit late already. The red colors are gone; it's more brownish now. But I can still do some outside takes. We'll do a little competition to see whose capture is the best.

So that's it for me on this experiment. Once again, Gaussian splatting, as a lot of people are saying, is probably a game changer for the 3D scan and volumetric field. A lot of people are doing a lot of research, and we've seen what you presented, Seb, with the video feeds, which opens up new possibilities as well. It's very, very interesting to see that the whole community is behind this, pushing hard to make it better and faster.

So I don't know if you have anything more to add to this very intense episode of Lost in Immersion. No, all good. Okay, so that's a wrap for today, and we'll see you next week for maybe new experiments on Gaussian splatting, new headsets, and new...

Credits

Podcast hosted by Guillaume Brincin, Fabien Le Guillarm, and Sébastien Spas.