Welcome to episode 26 of Lost in Immersion, your weekly 45-minute stream about innovation. As VR and AR veterans, we discuss the latest news from the immersive industry. Hi guys, and I guess Fabien will start, as usual. Hello, Fabien.

So, the topic for today is VR and the choice between using controllers, as most VR headsets do, and using hand tracking alone. I'm reacting to an update from Pico, one of the major headset manufacturers: Pico said last week that they are also focusing heavily on hand tracking as the input for their headsets. Apple's Vision Pro, of course, has no controllers, and we know that Meta is also very focused on hand tracking; there was an update to the Quest's hand tracking a few weeks back, so hopefully the Meta Quest 3 will have good hand tracking as well. It's interesting to see, with the improvements in hand and finger tracking, how this feature is becoming more popular and more widely used across headsets. On the other hand, pun intended, a controller gives you feedback: force feedback when you are doing something, vibration when you press a button. Controllers are also more accurate. So it's a balance between a bit more feedback and accuracy on one side and a bit more freedom on the other, and of course it's easier for the company if it doesn't have to produce controllers at all. So my prompt for you guys is: how do you see this trend between controllers and hand tracking, which one do you like, and which one do you find more meaningful for our usage? We can start with you, Seb, maybe.

Yes. So, I've tried different headsets, and the hand tracking was really great with the Ultraleap SDK on the Lynx or on the Varjo. The quality of the hand tracking is really good: you can really interact with it, it's fluid, and your gestures don't get misrecognized. But you still have to learn those gestures, so it's not that natural at first; you need to learn how to perform each gesture so that it works perfectly. I have the Quest Pro, and for me it's unusable. I keep switching back to the controllers whenever I try to use my fingers to go into a menu and select things. The misrecognition is painful: you try again, and when you repeat the gesture, the point you're aiming at sometimes shifts a little, so you launch something you didn't want to launch. That never happens with the controllers. For me, both have their use cases, but right now hand tracking like the Quest Pro's, done with cameras that are not dedicated to it but also have to track the environment and so on, is not usable. It needs to be perfect, or you always switch back to the controllers. At least, that's my experience. And as you said, the force feedback you get from the controllers in a game really improves the experience; when you go back to using only your hands, it feels like you are not inside the experience anymore. It removes one layer of immersive feedback that the controller gives you. So the alternative is moving to something like rings on your fingers that provide force feedback, or something on your wrist that simulates it (you don't have any muscles in your hand; everything is in your forearm).
So, if you can manage to create a sensation on the forearm, that could be nice as well. But until now, I'm not a big fan of hand tracking alone; even with the HoloLens 2, it was a pain to interact only with your fingers. One key point, though, is that the Quest Pro doesn't use Ultraleap the way the Lynx does, so I'm really wondering, when we finally get our hands on the Lynx headset, how it will behave and whether I'll revise my opinion.

Just a word on that, because I tried Ultraleap's hand tracking a few weeks back. There is a game where you draw with your hands, create boxes, and then play with the boxes you've created. It was really accurate; the gestures were really, really good. But yeah, I agree with you that the force feedback is very important. What about you, Guillaume?

Well, until now I'm a controller guy because, as Seb mentioned, of previous experiences with the HoloLens 1 and 2, for example, when you had to select things and so on; we saw that they released the clicker just afterwards, a simple button for people to select, with head tracking to target the objects. And I've been an Ultraleap, formerly Leap Motion, user for a decade, maybe. Despite it being a very fun device, I never found the right way to use hand tracking as the main controller or the main interaction with the virtual world. You still have these glitches and this little delay.

Have you tried the latest version? They released a new one lately.

No, I didn't try it, so maybe I'll change my mind in the near future. But to this day, like Seb, I've never found anything that makes it usable as a full standalone controller. I don't know if this is the same trend we had a few years back, when people tried everything to build a Minority Report-like interface, because at some point scientific papers showed that we are not built for this kind of interaction: it's too tiring for the arm. Humans simply aren't constructed for it, and the window is very short, less than five minutes; past five minutes, you're too tired to keep interacting that way. I think that's why, when companies present their hand tracking, people keep their hands along their body and just click, so the hand is resting most of the time. But interacting in VR games, typing on a keyboard, pressing buttons and so on, would become exhausting at some point, I guess. So I'm not a very big fan of tracking without any controller. You know that, because I'm always talking about Apple and asking when they will release their controller; to my mind, they will have to at some point, because people won't like this over long periods. So we'll see. For me, force feedback isn't even part of the debate, because we still haven't mastered the hand tracking itself. And once that's done, we'll have another problem with haptics, because it will mean gloves at some point, and those technologies are not ready yet, despite the fact that we've been working on them for decades as well. So the main goal is to master the tracking, and I'm very curious to see what will be done in the next months. Something I find interesting: in eye tracking, for example, there is Tobii, which has done everything, and they are now the leader in that market.
And in hand tracking, I think Ultraleap is likewise leading the whole market for tracking the hands. So in each case, only one company has managed to get this kind of tech right, investing as much money as it can to build the perfect device for tracking the behaviour of the eyes or the hands. It has been a ten-year process to get to the second iteration of the Ultraleap device. I think they had to figure out where they wanted to be, because they made a device, but that's not their main focus anymore. Their main focus is to embed their technology directly inside headsets, to avoid what we did for a long time with the HTC Vive Pro, for example, where we mounted the device on the front of the headset with a cable connected to the computer or to the headset itself. They embed the technology so it looks better for the user.

They have been trying to embed the technology since the beginning. I don't know if you remember the laptops, Asus laptops, I think, with the Leap Motion embedded next to the keyboard, so you could point with your hand and do whatever. And of course it didn't work, because people don't want to draw in the air, especially at a laptop. So they tried and tried again, and, yeah, I guess they are finally succeeding in this quest by embedding it in the Lynx R1, for example. I don't know if they are doing it in other headsets as well.

Yeah, in the Varjo as well, of course. I think they will also be fighting against people who are now used to typing on their phone or their keyboard, or using a mouse. Moving away from that easy, precise interaction means the replacement needs to work perfectly.

Yeah. I was doing some research at the same time: there was an article a few days ago by someone who actually tried the Vision Pro, and he talks about the keyboard. I'm quoting: he says it works fine, but takes some getting used to.

So, latency?

No, gestures, I think; it's more the gestures you have to learn. It's like a phone: when you start using a new phone, if it's an Apple and you are used to Android, the manipulations are not the same, the way to reach the settings, the way to interact with the phone. So yeah, there's a bit of learning.

Okay. Anything more to add, Fabien? Are you okay with our responses? Perfect. Thanks.
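A quick aside to make the misrecognition problem concrete: hand-tracking input layers have to turn a noisy fingertip distance into a discrete click, and jitter around a single threshold is exactly what makes a selection fire twice or drift while you aim. Below is a minimal, illustrative Python sketch of the usual countermeasures, hysteresis plus debouncing; the HandFrame type and all thresholds are invented for the example, not taken from any particular SDK.

```python
# Illustrative pinch "click" detection with hysteresis and debouncing.
# HandFrame and every threshold here are hypothetical, not from a real SDK.
from dataclasses import dataclass

@dataclass
class HandFrame:
    thumb_tip: tuple   # (x, y, z) fingertip position in metres
    index_tip: tuple

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class PinchDetector:
    ENTER = 0.015    # pinch engages below 1.5 cm...
    EXIT = 0.035     # ...and only releases above 3.5 cm (hysteresis)
    HOLD_FRAMES = 3  # must persist this many frames (debouncing)

    def __init__(self):
        self.pinching = False
        self.frames_below = 0

    def update(self, frame: HandFrame) -> bool:
        d = distance(frame.thumb_tip, frame.index_tip)
        if not self.pinching:
            self.frames_below = self.frames_below + 1 if d < self.ENTER else 0
            if self.frames_below >= self.HOLD_FRAMES:
                self.pinching = True
        elif d > self.EXIT:
            self.pinching = False
            self.frames_below = 0
        return self.pinching

detector = PinchDetector()
# Feed one frame per tracking update; True means the "click" is held.
print(detector.update(HandFrame((0, 0, 0), (0, 0, 0.01))))
```

Runtimes typically also latch the pointer ray at the instant the pinch engages, so the small hand motion of the pinch itself cannot shift what you are pointing at, which is the drift described above.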
So, Seb, next topic, I guess.

Sure. On my side, I have two topics. The first one is about clothing and the rendering of fabrics in virtual reality, which was pretty tough until now; it seems like it's moving forward. I have this video to share, where we can see someone manipulating fabric with controllers, not with hands, folding it or putting it on a table and having it react correctly, or at least almost correctly. It's something really hard to do, and the level of quality they reach here seems very nice.

Do you know what kind of technology they are using behind it? Is there an AI model, or is it completely real time?

I need to go through the details to learn more, but the fact that it runs in real time is kind of amazing to me. There is definitely physics applied; if there is an AI model, I'd need to check the paper to see what they used. I don't know. Any thoughts on it?

Well, as always with this kind of video, during the first seconds or minutes you find it amazing. But as you watch with more attention, you can see that this is still closer to liquid-like physics than to real cloth: you can see it moving in ways that aren't natural, or perhaps like satin, a very smooth, slippery fabric. I can't see a realistic simulation in there. We all know that this kind of simulation takes specific software that is very expensive, because it's specialized for exactly that; fashion designers use it to know how their clothes will move, stretch along the body, and behave when people walk, to predict exactly how a garment will fall. So we are not there yet, I guess. It's a great improvement, but still. For video game purposes I can see a path for this, but for professionals, maybe designers who would want to test their creations in VR, I guess we are not there yet. Step by step, we are getting closer.

I think it could be nice for training, where some manipulation involves working with something soft like a piece of cloth or fabric. But as you said, for designers, to make sure the clothes they design will look great on a specific person, we are not there yet, no.

Or even for try-ons. We know AR is targeting fashion try-ons, and there is always this feedback from clients or customers that it's not the real deal, so they still have to try the garment in the real world to see how it feels. That's my point of view on this.

Yeah, and I think it can also be nice for virtual fashion, because currently there is a lot of it: you can see it in Snapchat, in Roblox, maybe in Decentraland as well. A lot of brands are participating in that Web3, metaverse fashion world; I think we talked about the fashion week in the metaverse a few months back. So hopefully this kind of technology can improve and broaden the variety of clothes that can be used and purchased, because there are business models behind this kind of metaverse. So yeah, it's nice. All right.
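For reference on the "AI model or real-time physics" question: the classic way to get interactive cloth is a mass-spring grid integrated with Verlet and relaxed with distance constraints, and the liquid-like look described above is typical when the constraint iteration count is kept low for frame rate. A minimal sketch of that generic technique, not necessarily what this video uses:

```python
# Minimal Verlet cloth: a mass-spring grid with distance constraints.
# A generic real-time technique, shown for illustration only.
import numpy as np

W, H = 20, 20          # grid resolution
REST = 0.05            # rest length between neighbours (metres)
GRAVITY = np.array([0.0, -9.81, 0.0])
ITERATIONS = 8         # more iterations -> stiffer, less "liquidy" cloth

pos = np.array([[[x * REST, 0.0, y * REST] for x in range(W)]
                for y in range(H)])          # shape (H, W, 3)
prev = pos.copy()
pinned = {(0, 0), (0, W - 1)}                # pin two corners

def step(dt):
    global pos, prev
    # Verlet integration: velocity is implicit in (pos - prev).
    vel = pos - prev
    prev = pos.copy()
    pos = pos + vel + GRAVITY * dt * dt
    for py, px in pinned:                    # pinned corners don't move
        pos[py, px] = prev[py, px]
    # Relax distance constraints between horizontal/vertical neighbours.
    for _ in range(ITERATIONS):
        for y in range(H):
            for x in range(W):
                for ny, nx in ((y, x + 1), (y + 1, x)):
                    if nx >= W or ny >= H:
                        continue
                    delta = pos[ny, nx] - pos[y, x]
                    dist = np.linalg.norm(delta) + 1e-9
                    corr = delta * (dist - REST) / dist * 0.5
                    if (y, x) not in pinned:
                        pos[y, x] += corr
                    if (ny, nx) not in pinned:
                        pos[ny, nx] -= corr

for _ in range(60):    # simulate one second at 60 Hz
    step(1 / 60)
```

Raising ITERATIONS, or adding bending constraints, trades frame time for a stiffer, more fabric-like drape, which is exactly the tension between real-time demos and the dedicated offline tools fashion designers use.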
And the second subject is a new way to generate a 3D model of a character from a picture. Oh, it's not moving forward; sorry about that. So, we see here that, through iterations, it generates a 3D model of the user's head that gets close to the picture you provide. I don't know how much they accelerated the video to get these results, but even if it takes one day to generate this kind of 3D model from only one picture, that's still impressive. The first thing like this I saw was MetaHuman, which was impressive in terms of the quality of the assets you get; but here, getting that kind of result from one picture is, for me, a new game changer in the domain. I don't know what your thoughts are on it.

Yeah. Do you know if it's an actual 3D model, or just, quote unquote, a sequence of images rendered around the head?

They're talking about a 3D model.

Yeah, I can see it. Okay. Because with the popularity of avatar services like Ready Player Me or the Meta Avatars SDK, being able to generate your avatar from just a couple of pictures could be really nice. That's the first use case I see. What about you, Guillaume?

Yeah, well, we all know that Apple, once again, with their Apple Vision Pro, is trying to recreate this kind of 3D avatar, at least of your face, so that you can do Teams or Zoom meetings with your headset on and the people talking to you can see your face moving. They found a technology for you to 3D-scan your face through the headset, and from the first feedback we got, it is not as accurate as it should be; the rendering is more on the uncanny side, and people don't feel that good about it. So when you see this kind of technology, you can of course predict that it will maybe be bought by Apple in a few months or so, because it would be the right answer for them. As the king of user experience and user interfaces, it's better to let people provide one picture and get their 3D avatar, instead of scanning their face. Honestly, I don't see people doing that scan, because, having done some 3D scans myself, it always works when you're doing a demo, but at home you'll never get the result you want. So that's the first point.

And then, maybe to open up the debate: I know a lot of manufacturers are obsessed with giving you the most realistic avatar of yourself possible in VR or in metaverses. But when you ask real users whether they want this, they just don't; it's the complete opposite. They don't want their own face, because people mostly don't like their faces. They want an enhanced or improved version of themselves, or something completely different: another gender, a dog, their favourite anime character. So I guess this obsession with getting a digital twin of yourself is not completely aligned with what people actually want to do in a virtual world. I can see that there is a vision of sorts for the metaverse where you have a digital twin of the Earth and a digital twin of yourself through a digital ID; seen that way, I would understand it. But at some point you have to accept that people don't want to be themselves in a virtual world, because it would be too restrictive or limiting, I guess. So, a little philosophical or ethical debate on this.

Yeah, I guess. I mean, in a work context, I can understand people, or companies, who want the real face and an accurate face. But for games and social experiences, just by spending five minutes in VRChat you can see the variety of avatars people like to wear, ranging from Star Wars characters to animals. A lot of articles, and a lot of people, think this is a way for people to express themselves as who they want to be. So I don't know if I answered your question, but basically I agree with you, except maybe in the work context.

Yeah, if you want to get employed, or you're going through an interview, yeah, of course. But I also keep thinking about this vision of an ultra-realistic digital twin of everything, because I've heard a few conferences where people say that if you want people to get immersed, you have to have something as realistic as possible, that it should be exactly like the Earth; at some point, people say that metaverses are not worth doing if they're not like The Matrix. As VR users, we know that's a restricted vision, because in a digital virtual world we can do whatever we want. So why try to do exactly the same as what we are living right now? We could go beyond that and create new universes. And we know the brain doesn't need a realistic representation to be immersed: you can be immersed in wireframe, or with plain colours, whatever, and your mind makes the switch.
So I'm kind of battling this way of thinking, because, first, it slows down the creation of this metaverse, or the creation of really interesting experiences; and second, we can do better than just recreating the Earth. It could be one way of doing it, but it shouldn't be our only focus on the way to the kind of global immersion the metaverse represents.

For me, the use cases are try-on, putting clothes on yourself to make sure they fit you and you like the outfit, and the work environment, as Fabien said. For training, if you want a specific person explaining their work and how they do something, you can film them, capture their body movements inside the experience, and have them replayed with their own 3D model. That makes sense, and it makes it easier for the company to demo the real work environment: the real manipulation, done by a real person, explained correctly, with the gestures performed perfectly. But as you said, for everything casual, where people want to enjoy their time and be immersed in another environment, the whole point is to be different from reality. At least, those are my thoughts too.

Yeah, I agree. I think most VR users don't want 100% realism. They want to be able to go anywhere in the virtual space, they want to have fun, they want an experience that is easy to use; the quality of the rendering comes after that. We can just look at the success of Roblox.

Or Minecraft.

Minecraft, exactly, yeah. Oh, super cut.
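On the single-picture avatar topic: the iterative convergence described above matches the general analysis-by-synthesis recipe, where the parameters of a deformable head model are adjusted until its projection matches the photo. A toy sketch of that loop, using a random synthetic landmark basis in place of a real face model and renderer, purely to show the shape of the iteration:

```python
# Toy analysis-by-synthesis: fit linear shape coefficients so projected
# 3D landmarks match 2D landmarks "detected" in a photo. The basis is
# synthetic; real pipelines use a learned face model (e.g. a 3DMM) and
# photometric losses through a differentiable renderer.
import numpy as np

rng = np.random.default_rng(0)
L, K = 68, 40                              # landmarks, basis size
mean = rng.normal(size=(L, 3))             # mean head landmarks
basis = rng.normal(size=(K, L, 3)) * 0.1   # shape variation basis

def model(coeffs):                         # 3D landmarks for coefficients
    return mean + np.tensordot(coeffs, basis, axes=1)

def project(pts):                          # toy orthographic projection
    return pts[:, :2]

# Pretend these came from a landmark detector run on the input photo.
target2d = project(model(rng.normal(size=K) * 0.5))

coeffs = np.zeros(K)
lr, lam = 0.05, 1e-3                       # step size, regularization
for it in range(201):
    residual = project(model(coeffs)) - target2d            # (L, 2)
    # Gradient of 0.5*||residual||^2 + 0.5*lam*||coeffs||^2
    grad = np.tensordot(basis[:, :, :2], residual,
                        axes=([1, 2], [0, 1])) + lam * coeffs
    coeffs -= lr * grad
    if it % 50 == 0:
        print(it, round(float(np.linalg.norm(residual)), 4))
```

Each print shows the landmark error shrinking, the same visible iteration-by-iteration convergence as in the video, just over a handful of parameters instead of a full textured mesh.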
Okay, so maybe I'll move on to the next and final topic. Just let me share my screen here. So, for my part, I would like to discuss the latest Lynx R1 news. It was announced two weeks ago, and we are finally getting to the dates they have been promising. To give a little context: since our various, or rather numerous, discussions about the Lynx R1, I have been seriously thinking about purchasing one for my professional activities, because I would like to have something that resembles, or at least previews, the kind of experience the Apple Vision Pro could provide, and to start working on augmented reality, or spatial computing as they call it, over the next months. We know Seb really liked it when he tried it; it was at Laval Virtual, I think, or before that. And since then, I've been following what is going on with this headset.

Just as a reminder, they ran a successful crowdfunding campaign; I don't think it was Kickstarter, maybe another platform. That was in 2020 or 2021, and since then they have been delaying the release again and again. Two weeks ago they announced that they are going into production, this week or next, that backers should expect their headsets by September, and that new buyers will get theirs by October. However, when you look at the specifications of the headset against the upcoming Quest 3, you can ask yourself whether it's still interesting to get one at this point, because the resolution is no longer that good, even though it has improved over time.

And something that is kind of scary to me: two weeks ago they announced that they had received the headsets and had to do manual lens calibration on every unit, and that they found a new algorithm to do it better. This is the picture I'm showing you right now: on the previous version there were blurry and bendy lines, and now they have a better calibration process, which is why they are delaying once again, so that you get the best experience possible. So it's kind of scary to me that at this point they are still working on improving a calibration process, and still doing it manually in France while the headset is manufactured in Asia. It means that at a larger production scale they would be completely overwhelmed. I don't really know the number of headsets they sold; it's around 2,000, I guess. So at this point they are receiving 2,000 headsets in France and working through them to get them calibrated.
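A note on what "bendy lines" and per-unit calibration usually mean: small manufacturing variations change the radial distortion each lens applies, so straight lines bow until coefficients are estimated for that specific unit and the image is resampled through the inverse mapping. A generic Brown-Conrady-style sketch with invented coefficients; Lynx has not published the details of their actual algorithm:

```python
# Generic radial (Brown-Conrady style) distortion model. The coefficients
# are made up; real per-unit calibration estimates them for each headset
# from captures of a known pattern.
import numpy as np

k1, k2 = -0.12, 0.03     # hypothetical radial coefficients for one unit
cx, cy = 0.5, 0.5        # distortion centre, normalised image coordinates

def distort(u, v):
    """Forward model: where an ideal point lands in the observed image."""
    x, y = u - cx, v - cy
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2
    return cx + x * scale, cy + y * scale

# A straight row of points near the image edge bows under distortion
# ("bendy lines"); correction resamples the image through the inverse
# of this mapping, usually via an iterative solve or a lookup table.
row = [(u, 0.9) for u in np.linspace(0.1, 0.9, 5)]
print([tuple(round(c, 4) for c in distort(u, v)) for u, v in row])
```

Estimating coefficients like k1 and k2 per headset is the kind of step that is manageable for 2,000 units calibrated by hand and painful at larger scale, which is the concern raised here.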
So, as a company that could have been a competitor at some point, or could at least have filled the void between the announcement of the Apple Vision Pro and its actual release, I think they are missing their window of opportunity. And if there are more delays, the Meta Quest 3 will take the whole market, because the Lynx headset is still around $1,000, while the Meta Quest 3 should be half that price. So despite it being a very interesting device, I hope they are not falling into the usual trap: we all know these manufacturers who want to release the perfect product and keep delaying and delaying; at some point they finally release it, and it's too late, because people are fed up with waiting, and when you get the device in hand it's not as good as it should be, or as good as what competitors offer. So, yeah, I'm still waiting and don't really know if I'll order one, because we are in August and I can't see how I could get my headset in October when they still haven't shipped the backers' devices. So I don't know what you think about this. Do you have more information, maybe? I'd like your view on it. Seb, I guess you know about it.

Yeah. So, we are doing a mixed reality experience for a theme park in Europe right now, and we are wondering the same thing. We chose the HTC Vive XR Elite to at least start developing the experience, because it's for Halloween, so due by the end of September. And I would really like to have the Lynx headset instead of the XR Elite, because there is some deformation around the user's hands: the XR Elite doesn't have two colour cameras to do a stereoscopic capture of the environment. And we have the same issue you describe. They are now saying we should receive one by the end of August, which basically leaves us one week to decide whether we switch everything to the Lynx or stay on the HTC Vive XR Elite. And right now they are saying that by the end of September they should be able to provide ten devices. So it seems like they've moved forward, but as you said, they are still working on it. At least the design of the headset should now be final; at Laval Virtual it was still a prototype. And as you said, it's very scary that they are still trying to improve everything by calibrating every unit in France first. I don't know if they foresee a real production run of the whole set of headsets this year, or if they're planning to deliver the rest around January next year. But like you, I'm also worried about the Quest 3 coming to the market with a better quality of experience at a similar price. That could make it painful for the Lynx to find its place in the market.

The only reason you would then still go for a Lynx is that it's open: you don't have Meta's restrictions around the usage of the device, accessing the cameras, and that kind of thing. So...

But one point you mentioned: do you know that the Meta Quest 3 will be better in terms of quality?

I don't know; I've been searching for that, and there's no documentation. We don't know anything about the Meta Quest 3 at this point. And one thing that plays in the Lynx R1's favour is the camera positioning: on the Lynx R1 the cameras sit right in front of your eyes, so we know by design that there will be less distortion than on the Quest 3, where the cameras sit lower. So that's one more argument for getting the Lynx R1. On the other side, you have the quality and the price provided by Meta. You just have to weigh your options now.

For a professional, though, other arguments matter as well: having support, the price, being able to buy ten of them and get them the next month without any stress, without being scared of not receiving the devices. And longevity too: will the Lynx R1 still be there next year? With a French startup, we really don't know what will happen. I've tried to monitor Lynx's activity, and they are not posting as much as they did in past months and years, which is not really a good sign, either for backers or for potential customers. I'm a bit scared, because there is nothing worse than having your company buy a headset and then seeing the product cancelled, or losing the money because the company is closing.

They posted an article two weeks ago on the Lynx website, and its title is "Is It Too Late?". I saw several other blogs and websites covering this, and all their titles are "Is It Too Late?" as well. On the official website, of course, they push back and defend their position. They said, and I quote, the main challenge behind delays has been getting sufficient funds to ensure manufacturing. There is a graphic comparing investments: about 2 million USD per year for Lynx, against something like 10 billion for Meta. That gives some perspective on, as you were saying, the challenges a startup faces when it goes anywhere near hardware.

Putting the financial argument on the table is not reassuring either, because they are saying they don't have enough money to get into production. It's concerning at this point.

I think the fight is in China, on the manufacturing side of all the different components, where a company like Meta can come and say: stop everything you are doing for this company; whatever they pay you, I'll triple it, and you will work for me to produce headsets now. I think that's what they ran into during production, and that's why they had to send someone over to China to work directly on the production side and make sure the factory wasn't shifting to other headsets.

Yes, and simply because Apple just bought their lens company, I guess that's a plausible reason why it's slowing down.

I don't know if Apple is really blocking them from getting the lenses; I don't think a player this small is a threat to them. Well, we don't know what is happening behind the scenes, and we won't get the answers right now. I just wanted to know your state of mind, especially since you have this kind of project due by September.
I understand the feeling, and I have the same questions as you right now. In two weeks we should have one, and we'll be able to give you some more feedback.

Yes, because the end of August is basically right now. So if you have any answers by the end of the week, I guess your choice will be made.
Credits

Podcast hosted by Guillaume Brincin, Fabien Le Guillarm, and Sébastien Spas.