Welcome to episode 23 of Lost in Immersion, your weekly 45-minute stream about innovation. As VR and AR veterans, we discuss the latest news of the immersive industry. Let's go — and welcome back, Seb, from your two weeks' holiday. I hope you picked up some interesting news during your time off. Who wants to start?

I can go. Today I want to talk about a company named Strolll. The company is developing what they call digital therapeutics — software to help people who have neurological disorders such as Parkinson's. What they realized is that placing virtual objects on the ground and around the user helps the brain get a better understanding of the surroundings, and it helps these users walk faster and more easily. It works in real life by placing real objects on the ground, but they found that it works with virtual objects as well. You can see these virtual elements placed on the ground, and the user has to step on them; it actually helps them walk more easily. We can have a look at a few demonstrations here — it looks like magic between the before and after. It's a very interesting use of virtual assets, and very encouraging. I wonder whether, once smaller devices become available, it will help even more, because there would be less weight on the head. I was very impressed by this, and I'm curious to hear what you think. Seb, maybe?

First, it seems to showcase two devices, the HoloLens 2 and the Magic Leap 2. I wonder if it's just the virtual elements the user has to step on that help the brain reconstruct the way to walk. Do you know what triggers the change in the mind?

Yes, it's the visual elements. There is a video somewhere where, without using a headset, they just lay out colored lines on the ground — real ones — and it helps people walk more easily just by doing that. I was a bit skeptical at the beginning, but it seems to be an actual procedure for this type of disease.

My question is: what is the added value of doing it with a headset instead of laying out paper or stickers on the ground? More scenarios, adjusted more quickly for a specific condition?

Yes, there is that, and it's easier to deploy. It's more complex to create, but easier to deploy: you can walk into any building with the device, and the device can lay out lines in front of you. It seems they also gamify the therapeutics — there is a video of someone boxing and reaching out to different objects as well. Making it fun means patients like doing it: it's not an exercise anymore, it's gameplay.

I saw the news, and it's impressive — the change apparently persists even without the headset afterwards; I think they showcased that after the video. It seems amazing. I was a bit skeptical at the beginning, but it seems to work. What do you think, Guillaume?

It's very interesting to see that sometimes the simplest idea works very well, especially in the medical field. The other thing I found funny is that we always assume the younger generation will be the one to embrace AR and all these new immersive technologies; if this kind of application works, maybe our senior generations will be the early adopters of AR headsets. It's funny to see it that way.
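To make the cueing mechanic concrete: the core of what Strolll describes — evenly spaced stepping targets anchored to the floor ahead of the patient — can be expressed in a few lines. Here is a minimal sketch in Python with numpy (a hypothetical layout helper, not Strolll's actual code); an AR runtime would then render these world positions as floor markers:

```python
import numpy as np

def stepping_targets(start, heading, stride=0.6, count=8, lateral_offset=0.15):
    """Lay out alternating left/right stepping targets on the ground plane.

    start   -- (x, y, z) position on the floor where cueing begins
    heading -- walking direction in the horizontal plane (normalized below)
    stride  -- distance between successive targets, in meters
    """
    start = np.asarray(start, dtype=float)
    heading = np.asarray(heading, dtype=float)
    heading[1] = 0.0                                     # keep targets on the floor (y is up)
    heading /= np.linalg.norm(heading)
    side = np.cross(np.array([0.0, 1.0, 0.0]), heading)  # perpendicular left/right axis

    targets = []
    for i in range(count):
        sign = -1.0 if i % 2 == 0 else 1.0               # alternate feet
        pos = start + heading * stride * (i + 1) + side * lateral_offset * sign
        targets.append(pos)
    return np.array(targets)

# Example: cue 8 steps ahead of a user standing at the origin, walking along +Z.
print(stepping_targets(start=(0, 0, 0), heading=(0, 0, 1)))
```

The stride and offset values here are placeholders; in a real therapeutic app they would presumably be tuned per patient by the clinician.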
I know there has been some medical research, especially for people with visual impairments, on adding lines along the walls so they can perceive the layout of a room and see where the walls are, for example. What is unfortunate is that this kind of company usually doesn't have much funding. As we can see, this is a very simple app, and I guess their main goal here is to find money so they can deploy it. We know that in the medical field it's always very hard to turn something like this into a product, sell it, and keep the company rolling. There is clearly something for AR to do here, but with the price of the glasses and the logistics of deployment, I just hope this company will manage to reach the users who could really be helped by it. Once again, until we have affordable glasses, I think it will be hard to deploy this worldwide, as it should be for every senior with those mobility limitations. I think the future of AR is coming with the new mixed reality headsets that are not optical see-through but video pass-through.

Also, I think the target here is more the doctors, who can buy one headset and use it with different patients. The best-case scenario is a doctor who can use that headset as a really useful tool.

If it works as shown, we could imagine it becoming an everyday-life accessory that lets these patients be more autonomous and regain the mobility they lost. That's how I would like to see it used.

I think it needs to be supervised by a medical team too, because these patients can't exercise all day long, and they already see a caregiver who comes every day to help with their personal care. I guess it depends on the stage of the illness, but if it can be used in the early stages, maybe younger patients will be very happy to use it, be more autonomous, and maybe slow down the progression of the disease.

Then it's more the doctor who needs to prescribe and rent it. Maybe the Meta Quest 3 would be a good, cheaper device for this kind of usage. I don't know if you have anything else to add, but that's it for this topic.

No — still a very good use case, and it's great to see everyday-life use cases that could work with AR.

Can I bounce off that? It's a good transition to my subject, which is a company called Medivis. They released a video of what they do with the HoloLens 2. They provide a kit to doctors so they can import all the imaging they have — the different pictures from a brain scan, for example. Instead of just displaying it in 3D, they can show it directly on the patient, and also cut away part of the brain to see only the region they are investigating. That's the video they released. We know augmented reality is very useful in this kind of environment, but seeing that ability to merge the different technologies and display the scan in 3D, in place, makes a lot of sense to me — and so does being able to get more information through AR tools driven only by your hands, without any controllers. The way they set it up is this: they sell a kit with the HoloLens 2 and a screen to monitor the view and mix in the data.
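To show a scan "directly on the patient", the system has to register scan space to the patient's tracked anatomy. The video doesn't detail Medivis's method, so purely as a generic illustration, here is the classic rigid point-based registration (the Kabsch algorithm) between fiducials in the scan and the same landmarks located on the patient:

```python
import numpy as np

def rigid_register(scan_pts, patient_pts):
    """Kabsch: find rotation R and translation t mapping scan_pts onto patient_pts.

    Both inputs are (N, 3) arrays of corresponding fiducial positions.
    Returns (R, t) such that patient ~= scan @ R.T + t.
    """
    p_mean = scan_pts.mean(axis=0)
    q_mean = patient_pts.mean(axis=0)
    H = (scan_pts - p_mean).T @ (patient_pts - q_mean)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))               # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Toy check: recover a known pose from four fiducials.
theta = 0.4
true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
scan = np.array([[0.0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [0, 0, 0.1]])
patient = scan @ true_R.T + np.array([0.2, 1.1, 0.4])
R, t = rigid_register(scan, patient)
print(np.allclose(scan @ R.T + t, patient))              # True
```

In practice the landmarks would come from the headset's tracking of markers or anatomy, but the alignment step itself is this small.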
You can import your data directly on the PC that comes with the kit and bring it into the operating room. So tell me what you think about that.

Well, it looks like a very packaged experience, which for me is a first — usually these are more experiments, but this seems to be a real commercial solution. I don't have a lot of knowledge about surgery, of course, but I wonder how it's used, so that they have the data they need to decide how to proceed with the surgery.

Yes. And instead of looking at everything in 2D, they put everything in a sort of point cloud, with the ability to change the rendering, so they can really pinpoint what they want to look at and see it in place. It looks very nice and very useful.

Well, we've seen this kind of app before, as Fabien mentioned, mostly as an R&D approach, and as he said, it's maybe the first time we see it as a commercial, complete package to be sold to medical staff. However, since we've been seeing these renderings and applications for years and years, I'm starting to think that if it were really that useful, it would have been pushed harder than it is now. Maybe I'm completely wrong, but is there a real use case here, or is it just a good medical technical demonstration? If surgeons or medical staff really needed this kind of tool, I'd expect it to have been pushed to be accessible in medical units by now. When you talk about VR and AR, medical applications are maybe the second use case people think of, but maybe it's not the field where these technologies apply best. I really don't know. Maybe we should ask a surgeon whether this kind of interaction makes sense for them. Or maybe 2D is enough — when you look at scans, MRI, and other medical imagery, you already have this 3D aspect and ways of cutting through the 3D data, so maybe a classical screen suffices. It must be workable, since they already operate with that today. So the question is: is AR that much of an advance for them, or is it more of a gimmick?

I think until now, in all the demos I've seen, the quality of what was rendered was very low, or the data had to be redone with a 3D tool to make it compatible with the capabilities of the mobile device. Here it seems they may be using Wi-Fi 6 and the computer that's already part of the kit to stream the content directly, because a point cloud like that on a mobile device wasn't possible before — you wouldn't get that quality. And here you can really change the rendering and the level of detail. So maybe it hasn't spread simply because the technology wasn't mature enough. For me, it's the first time I see something that seems relevant in terms of quality.
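On that level-of-detail point: a standard way to stream a heavy point cloud to a mobile device at an adjustable quality is voxel-grid downsampling, where the voxel size is the quality knob. An illustrative sketch (not Medivis's pipeline):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Collapse a point cloud to one representative point (centroid) per voxel.

    points     -- (N, 3) array
    voxel_size -- edge length of the grid cells; larger = coarser level of detail
    """
    keys = np.floor(points / voxel_size).astype(np.int64)   # voxel index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)                         # accumulate per voxel
    np.add.at(counts, inverse, 1.0)
    return sums / counts[:, None]

cloud = np.random.rand(100_000, 3) * 0.3                     # fake 30 cm scan volume
for lod in (0.002, 0.005, 0.02):                             # finer -> coarser
    print(f"voxel {lod * 1000:4.0f} mm -> {len(voxel_downsample(cloud, lod))} points")
```

A streaming setup could send a coarse grid first and refine it as bandwidth allows, which would match the "change the level of detail on the fly" behavior described above.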
I agree on the quality, but the main idea behind my reflection is that if it were groundbreaking, maybe they would have moved faster or put more budget behind it. That's just my thought, and I'll try to get some intel from surgeons who could give us more information.

My next topic is more a use case of mixed reality interaction, done in a different way from what we've seen for a long time now, where everything is done with a mobile device. Here, everything is done with projection and by tracking the user's position — only the head is tracked, with a Vive tracker. Depending on where the user moves, all the content is adjusted, so when he looks at it, it seems to be really 3D around him. Another user on the side can also look at it and see the interaction; you just miss the perfect alignment with the environment, but everyone can watch and see what the kid in the demo is doing. I find it very poetic, and like I said, it's another way of doing mixed reality that I find really interesting.

I saw this. Isn't there some AI behind it as well, for the application to understand what people are doing?

Maybe — I may have missed that. I think I came across it through an AI newsletter, so there may be some AI-driven interaction in there.

Well, it's always great to see some poetry or art in our technological field. As you know, I'm a big fan of CAVE systems, and this is really CAVE-like, so I'm really excited about it. It's very cool. I don't know if you have seen those big sculptures where you have to stand on one specific side to see what they actually are — from anywhere else they look completely distorted. It's nice to see that they used a similar principle to build this. Not much more to say, apart from the fact that it's not accessible to everybody, because you need this kind of structure to see it. Maybe it will travel around the world at some point for us to try it. It would be nice to do that in New Zealand.

One person drives the interaction, but other people can look at it and understand the same scenario and context. It was also huge, though we did miss some of the interactions.
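For the technically curious: what makes head-tracked projection look 3D from the visitor's viewpoint is an off-axis perspective projection, recomputed every frame from the tracked head position — the same math CAVE systems use. A sketch following Kooima's well-known "Generalized Perspective Projection" formulation, assuming a flat projection surface with known corner positions:

```python
import numpy as np

def off_axis_projection(pa, pb, pc, eye, near=0.1, far=100.0):
    """OpenGL-style projection matrix for a tracked eye and a flat screen.

    pa, pb, pc -- screen corners in world space: lower-left, lower-right, upper-left
    eye        -- tracked head/eye position in world space
    """
    pa, pb, pc, eye = (np.asarray(v, float) for v in (pa, pb, pc, eye))
    vr = pb - pa; vr /= np.linalg.norm(vr)           # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)           # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)  # screen normal, toward the eye
    va, vb, vc = pa - eye, pb - eye, pc - eye        # eye -> corner vectors
    d = -(va @ vn)                                   # eye-to-screen distance
    l = (vr @ va) * near / d                         # frustum extents at the near plane
    r = (vr @ vb) * near / d
    b = (vu @ va) * near / d
    t = (vu @ vc) * near / d
    return np.array([
        [2 * near / (r - l), 0,                  (r + l) / (r - l),  0],
        [0,                  2 * near / (t - b), (t + b) / (t - b),  0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0]])
    # In a full renderer this is composed with a rotation into screen space
    # and a translation by -eye, per Kooima's paper.

# A 4 m x 2.5 m projection wall, viewer standing 2 m back and 1 m left of center.
print(off_axis_projection(pa=(-2, 0, 0), pb=(2, 0, 0), pc=(-2, 2.5, 0),
                          eye=(-1, 1.2, 2)))
```

This also explains why only the tracked viewer gets a perfect illusion: the frustum is skewed for one head position, and everyone else sees a slightly "wrong" but still readable image.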
Okay, so I'll do my topic now. It's mainly about Meta, because there is a lot to say about the company right now. The first thing I'd like to bring to you is that, unfortunately, Meta lost $3.5 billion in the second quarter within its Reality Labs division. As a reminder, it was $4.3 billion in the fourth quarter of 2022. Not a very good result for them: the losses are still ongoing — smaller than last year, but still big numbers for their virtual and augmented reality division. That probably explains the other announcements they made, especially on the Horizon side, which feels to me like a reboot or reset of their whole Horizon metaverse initiative: they are now trying to create entertainment and game-like applications for people to gather inside their immersive spaces. Basically, they want to do what the other competitors in the metaverse field are doing — VRChat or Rec Room, whose universes are essentially huge playgrounds where people gather to play, have fun, and have social activities. To do so, they want to develop their own games and work with creators and small companies that can bring those interesting games and apps. So, what do you think about this?

I guess it's really hard for Meta to take this step back and, just to save their financial state, do the same as the other competitors — with a delay, because those companies have a two or three year head start on what Meta is trying to do. Fabien?

Yeah, I totally agree with you. I saw as well — maybe you mentioned it — that Horizon might come to mobile.

I didn't mention it, but it's an important aspect of this. It's similar to Roblox: the strategy is basically to create some mobile games, then try to grab that community and bring it into VR. We talked about Roblox last week; they are really trying to buy or capture those communities and bring them to VR at some point.

And I hope as well that they will expand the reach of Horizon, because currently it's available in only seven or eight countries, so they are missing a lot of users there. Maybe there are good reasons behind that, but I'm really wondering why they are not opening it up to everybody. Not much more to add on this topic.

I think they are trying to catch more users with the mobile app, which makes sense. They need in-game content that interests users, so they need to find good creators who can make experiences that work well on mobile and bring people onto the headset afterwards, where the game can be even better.

Just to finish on this: what scares me the most is that Meta is not a game creator. We know they are not good at this, and forcing it is, if not dangerous, still a very risky strategic way of trying to save their metaverse.

They could do it by sponsoring game developers. Yes, but that's the concerning part: they are trying to do it in-house.

The last news about this is on the hardware side: they are presenting two new devices, or rather prototypes. I guess it's maybe to kill the rumor that the Quest Pro line is being stopped — the rumor was that there won't be a Quest Pro 2. One of their representatives gave a statement last week saying that no, they are not stopping the Quest Pro 2; the main reason given was that they had different prototypes in progress and didn't yet know which way they would go for the next iteration. They are confirming that by presenting these two prototypes, which will be shown at SIGGRAPH this week or next. My first thought is that it's strange to see Meta at SIGGRAPH, because usually they are not there, so it feels like they are presenting this to occupy the space and show that they are doing stuff — probably to fight on Apple's turf, show that they are active too, and maybe bring out some new technologies.

The first prototype is a varifocal headset with retina-like resolution; the resolution they are targeting is nearly double that of the Varjo, which is very interesting. The varifocal part is something we've seen for quite a few years now: they are working on a motorized screen that lets you focus at different distances, and especially to read text in VR.
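As background on what a varifocal display has to chase: the focal plane is typically driven by the vergence of the two eye-tracked gaze rays — the depth where the lines of sight nearly intersect. Here is a small sketch of that estimation using standard closest-point-between-rays math (Meta's actual pipeline is certainly more elaborate):

```python
import numpy as np

def vergence_depth(p_left, d_left, p_right, d_right):
    """Estimate fixation depth as the midpoint of closest approach of two gaze rays.

    p_* -- eye positions; d_* -- gaze directions (need not be normalized).
    Returns (fixation_point, distance_from_the_eye_midpoint_in_meters).
    """
    p1, d1 = np.asarray(p_left, float), np.asarray(d_left, float)
    p2, d2 = np.asarray(p_right, float), np.asarray(d_right, float)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                     # ~0 when rays are parallel (far gaze)
    if abs(denom) < 1e-12:
        return None, np.inf                   # treat as focused at infinity
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    fixation = 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))
    depth = np.linalg.norm(fixation - 0.5 * (p1 + p2))
    return fixation, depth

# Eyes 64 mm apart, both converging on a point 0.5 m ahead.
target = np.array([0.0, 0.0, 0.5])
pL, pR = np.array([-0.032, 0.0, 0.0]), np.array([0.032, 0.0, 0.0])
fix, depth = vergence_depth(pL, target - pL, pR, target - pR)
print(fix, depth)   # ~[0, 0, 0.5], ~0.5 m: the motorized optics would target this depth
```

The motors then have to settle on that depth within an eye movement's duration, which is exactly the speed problem discussed below.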
The other prototype is, I think, very interesting because it's something we've never seen until now. Let me show you — it looks like a fly's eye. The main idea is to correct a problem we've already discussed: with video pass-through, the cameras capturing the real world are not aligned with the user's eyes. Manufacturers basically correct this with algorithms, which creates distortion, and you lose the true one-to-one scale effect — we've discussed that already, especially in the comparison between the Meta Quest Pro and the Lynx R1. What they are building here is a whole array of lenses capturing the world from different angles, and the system chooses which lens the user is effectively looking through. It's coupled with eye tracking, of course, and the effect is really interesting. You can see how this network of lenses works to select the right part of the lens array, and at the end they show the kind of image you get. They call it lightfield pass-through. If this is real footage, the color artifacts and the distortion are basically gone. I'm very enthusiastic about the idea that we can still invent new ways of doing video pass-through, because until now most manufacturers were using the same technology and working mainly on the lenses close to the user's eyes — pancake lenses and all those evolutions. It's very interesting to see Meta working on the other end, on the capture cameras, and this kind of design and setup is the first of its sort I've seen. Very interesting on the R&D side.
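A toy model of the idea as described: many small apertures each see the world from a slightly different origin, and eye tracking picks, per gaze direction, the aperture whose captured ray best matches the ray the eye itself would have received. This is only a conceptual sketch — in the real prototype the selection happens in the optics, not per-frame in software like this:

```python
import numpy as np

def best_aperture(apertures, eye_pos, gaze_dir, fixation_depth=2.0):
    """Pick the aperture whose view of the fixated point deviates least from the eye's.

    apertures -- (N, 3) positions of the lens-array elements (world space)
    eye_pos   -- position of the (occluded) eye behind the display
    gaze_dir  -- gaze direction from eye tracking
    """
    apertures = np.asarray(apertures, float)
    gaze_dir = np.asarray(gaze_dir, float)
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    fixated = np.asarray(eye_pos, float) + gaze_dir * fixation_depth
    to_point = fixated - apertures                   # ray each aperture would supply
    to_point /= np.linalg.norm(to_point, axis=1, keepdims=True)
    angular_error = np.arccos(np.clip(to_point @ gaze_dir, -1.0, 1.0))
    return int(np.argmin(angular_error))             # index of the best-matching lens

# A 5 x 5 lens grid with 2 cm pitch, sitting 3 cm in front of the eye.
xs = np.linspace(-0.04, 0.04, 5)
grid = np.array([(x, y, 0.03) for y in xs for x in xs])
print(best_aperture(grid, eye_pos=(0, 0, 0), gaze_dir=(0.3, 0.0, 1.0)))
```

The appeal is visible even in this toy: because a physically well-placed ray is selected rather than synthesized, there is far less reprojection to do, hence fewer warping artifacts.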
So, what are your thoughts about this?

I saw the news as well, and I was quite impressed, to be honest, especially by the reprojection-free pass-through. Something to keep in mind: I'm really curious to see how they will implement this, if they ever do, and how they will blend it with AI, because Meta is huge on AI — maybe not publicly, but there is AI everywhere, and one of the most respected AI scientists, Yann LeCun, heads Meta's AI division. For the varifocal in particular, it needs to be very fast: our eyes move very, very quickly, so the motors and movements need to be really fast. And as we mentioned last week, one use of AI is to predict eye movements and the intention behind them. So I'm really curious to see the merging of all these technologies.

Just to add on the varifocal: I've never tried a varifocal headset to date, so what bothers me is that, as you mentioned, the motor moves back and forth at high speed — how do they compensate for the inertia of that movement? When you have a mass moving inside the headset, you should feel that movement on your head. If either of you has tried one, do you have any feedback on the varifocal architecture?

No, I didn't try it. They have been talking about this for quite a few years now, and I guess there are issues, because it has never been implemented at a large scale — so maybe there's something there.

On implementing it at scale: I think the main issue is finding a way to make it reliable over time, and fast enough that everyone feels comfortable with this kind of technology. It's hard to picture mass-marketing hardware that has to move in all directions that fast, with a battery that can sustain it — there are more motors to control, plus the eye-tracking cameras, which were dropped on the Meta Quest 3. I can't even imagine the cost of such a device. It reminds me of Varjo, which managed to target a niche in the industry, but not the mass market — and the mass market is what Meta wants. So, like you said, Fabien, I'll be interested to see what results they get, just to have a feeling for what the near future could be. But I would rather bet on well-placed cameras plus artificial intelligence that works on making the picture look great and predicting what the user is looking at, together with eye tracking, which for me is more useful. I don't see the motorized approach being mass-produced.

Maybe not yet. One thing I saw as well: to reach retina resolution, they had to reduce the field of view — for now, at least. Hopefully, as the screens get better, they will be able to enlarge it, but on this prototype it's a narrow field of view. A bit of a compromise.

Yeah, it looks very cool, but with that narrow field of view it's even a step back from current headsets.

This information is interesting because it confirms our analysis of the Apple Vision Pro: they basically have the same constraint, since their cameras sit on the lower part of the headset because of the external screen they added, not in front of the user's eyes. I'm very curious to see how they manage to get the one-to-one scale effect there, and whether it really works, because of course there will be software correction, and we know Meta didn't do so well with that. The Lynx R1 should need less correction, because its cameras are placed right in line with the user's eyes — which is basically why pass-through works better on that headset than on the others. We'll see, but it's really interesting that this issue — cameras not sitting on the same axis as the user's eyes — is so well known by manufacturers.

Just a note about that: we also tested the Vive XR Elite, and the mixed reality view inside that headset is quite good too. They managed to build a reprojection that uses the three cameras — the color one and the two black-and-white ones on the sides — to make something quite good. Like you said, there are still some weird effects when you put your hand in front of the headset, like deformation around the hand, but overall the view of the environment looks great.
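For contrast, the "software correction" that conventional pass-through relies on is essentially a depth-dependent reprojection: unproject a camera pixel using estimated depth, shift it by the camera-to-eye offset, and reproject. A bare-bones single-pixel version with hypothetical pinhole intrinsics, which also shows why a wrong depth estimate hurts most at close range — hence the warping around hands mentioned above:

```python
import numpy as np

# Hypothetical pinhole intrinsics shared by the camera and the virtual eye view.
FX = FY = 500.0            # focal length in pixels
CX, CY = 320.0, 240.0      # principal point of a 640 x 480 image

def reproject(u, v, depth, eye_in_cam):
    """Warp one pass-through pixel from the camera's viewpoint to the eye's.

    depth      -- estimated distance along the camera z axis, in meters
    eye_in_cam -- eye position expressed in camera coordinates (rotation ignored)
    """
    # 1. Unproject the pixel to a 3D point in camera space.
    p_cam = np.array([(u - CX) * depth / FX, (v - CY) * depth / FY, depth])
    # 2. Express the point relative to the eye (pure translation for simplicity).
    p_eye = p_cam - np.asarray(eye_in_cam, float)
    # 3. Reproject with the same intrinsics.
    return (FX * p_eye[0] / p_eye[2] + CX, FY * p_eye[1] / p_eye[2] + CY)

# Eye sits 5 cm above and 6 cm behind the camera (camera y points down).
eye = (0.0, -0.05, -0.06)
for depth in (0.3, 1.0, 5.0):
    print(depth, reproject(320, 240, depth, eye))
# The pixel shift shrinks with distance: close objects (hands!) move the most,
# so any depth error there produces the visible deformation around the hands.
```

The lightfield approach sidesteps exactly this step, which is why the prototype's footage shows so little distortion.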
So, to conclude: some very interesting news around Meta — some good, some less so. We can see there is real movement around VR and AR, on both the software and the hardware side. We'll see what the future of this division is and, if they keep losing money, how they can sustain the whole metaverse effort. I don't think the Oculus/Quest division will disappear, but maybe the Horizon part will be abandoned at some point. We'll see in the near future, because I don't think they have much time left to correct their strategy — they can't afford to lose $4 billion per quarter for an extended period. Any more thoughts? Okay, that's a wrap for today, and we'll see you, guys.

Credits

Podcast hosted by Guillaume Brincin, Fabien Le Guillarm, and Sébastien Spas.