Well, welcome to episode 4 of Lost in Immersion, your weekly 45-minute stream about innovation. As VR and AR veterans, we will discuss the latest news of the immersive industry. Let's go. So Fabien, you can start as usual.

Thanks. So today I want to speak about this graph that we are sharing, which shows the market share of what is called the metaverse here, split between headsets, laptop or mobile applications, and in-browser experiences. I need to say that I'm not entirely sure of the sources of this graph, so I will put a bit of caution on the exact numbers. But I think the ratio is correct, which means that what they call the metaverse, and we can discuss exactly what that means, is mostly accessed through an app or a browser like Chrome or Safari, and not through VR headsets. So I think it's a very interesting insight into how the current users of what we call the metaverse are actually using it: spaces like Roblox or Spatial.io are mostly used directly in the browser instead of in VR. I will stop here for now, and I think we can discuss. For example, Seb, what do you think about this? He's speechless.

Yeah, I'm speechless. Maybe give me another idea while I'm thinking about my answer.

Yeah, I can start if you want. So I guess it comes close to one part of the metaverse definition that I think is controversial, because there are two ways of defining the immersion part of the metaverse in particular. Some people say that the metaverse should be accessible from any kind of device: tablets, smartphones, PCs. And if you are lucky, you can get it through VR. So the VR part is the cherry on top of the sundae, as we say in Canada. On the other hand, there is the second definition, where VR is the main way of getting into the metaverse, and the tablet and smartphone are the optional part. To be completely honest, I'm on the second definition. I guess the immersion part is one of the main reasons why the metaverse is useful. We can see that right now it's not the case, if your graph is correct, and I think the ratio must be right even if the numbers are not: something like 80% through the browser and 20% through VR. It makes sense also because lots of so-called metaverses, like The Sandbox, for example, or any kind of metaverse based on blockchain and crypto, are mainly metaverses for financial exchanges. And you don't have any advantage in getting there in VR. You just have to buy land in 2D, make some exchanges, and hopefully make some money. They can be called metaverses because they bring some technological parts that would be integrated into the one and only metaverse, but I don't think the economic part is the most interesting part of the metaverse. That will change in the future for sure, because blockchain right now is not the best for microtransactions, for example. So it's both a problem of definition, of what we can access right now, and of what we can implement for our vision of the metaverse. Those are my thoughts about this graph and what it represents in the metaverse world right now. So Seb, I don't know if it gives you some ideas, or if you are completely opposed to what I said.

I guess what is blocking me in this way of comparing things is that you compare two different things, browser and VR. You can have a browser in VR. I know, I'm using that. I tested a lot of WebXR interactions that are done on a webpage on the Oculus Quest.
And that's working, but in terms of performance and the kind of things you can display, you quickly hit limits. Still, it allows the user to quickly access an experience without having to download and install an app. So that's an option to access an experience more quickly, but the browser is not that powerful, and right now this kind of experience is not that impressive. Like Guillaume said, I think that should remain an option for the metaverse, to access it from any device, and that's great, but the best experience is with VR. And we talked about it the previous week: I think we are lacking feedback of the user's face and expressions, to make the VR experience more magical, the key experience that makes you move from the browser to VR. Right now you are moving a character that moves its lips, if the developer implemented that. And that's the same in the browser: when you talk, your avatar can move its lips and its face. So there is no differentiation that makes VR so much better that you want to move to it right now. I think that might be what makes people stay on the browser instead of going to VR.

I guess this is the same conclusion you would like to bring, Fabien, that VR is not interesting enough for people to adopt it. Is that right?

Fabien? Sorry. Yeah. So indeed, there is a twofold conclusion I would like to draw on this one. First, most users choose the easy way: just click on the link and open the virtual world on their laptop or mobile. That's one part. The other is that I agree with you, Guillaume, on the definition of what a metaverse is. It really needs to have this immersive component to it. It's great to have access to the metaverse through a laptop or mobile, but using a VR headset will bring even more immersion. And I think this is something we talked about in previous episodes: maybe we need to communicate more by showcasing what people are seeing inside the headset, instead of showing someone with a headset on, to convey what an immersive experience can be.

And do you think that WebXR could be an answer for people to access VR content more easily? As you mentioned, people don't want to install and do a lot of stuff to access some content. So just by clicking in the browser and getting this content directly into your VR headset, do you think this would be the answer?

Oh yeah, I think this would definitely help. Seb, any last thoughts?

Yeah, we have encountered the same thing with mobile applications. Most users don't want to install an app anymore. They want to quickly access a web page and view the content there. I think it's the same for VR headsets: you want to quickly access the experience without the pain of going to a store and installing an application. However, right now the browser is not technically built to allow a great experience in VR. It's missing a lot of pieces, like access to the graphics card and things like that. There needs to be a consensus between the different companies that create browsers to allow this kind of experience to be more realistic, nicer. WebXR, WebGL, and so on, all these initiatives seem to go in that direction. But yeah, it takes time. It's a long run.
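As an aside, here is a minimal sketch of the click-to-VR entry point discussed above, assuming a WebXR-capable browser such as the one on the Quest. The `enter-vr` button id and the renderer hand-off are illustrative, not something from the episode.

```typescript
// Minimal WebXR entry point: one button takes the user from a 2D page
// into an immersive session, with no store visit or install step.
// Assumes WebXR type definitions (e.g. the @types/webxr package) and an
// existing WebGL render loop; scene setup is elided.

const button = document.getElementById('enter-vr') as HTMLButtonElement;

async function refreshButton(): Promise<void> {
  // Feature-detect: on a laptop or phone without a headset the page
  // simply stays 2D, which is the "works on any device" property
  // discussed above.
  const ok = await navigator.xr?.isSessionSupported('immersive-vr');
  button.disabled = !ok;
}

button.addEventListener('click', async () => {
  // requestSession must run inside a user gesture such as this click.
  const session = await navigator.xr!.requestSession('immersive-vr', {
    optionalFeatures: ['local-floor'],
  });
  session.addEventListener('end', () => { void refreshButton(); });
  // Hand the session to the renderer here, e.g. with three.js:
  // renderer.xr.setSession(session);
});

void refreshButton();
```

The same page serves every device in the graph's breakdown: flat on a laptop or phone, immersive when a headset is present.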
So you can continue, Seb, with your subject if you want.

Sure. So today I wanted to talk about, bouncing off the subject we had the previous week, the fact that augmented reality is also moving into industry more and more and being used in construction, for real use cases. So I wanted to share a couple of videos and then we can talk about it.

All the videos I will present are about helping construction. The first one is about checking a future installation with an iPad or another device. It allows us to preview how things will be set up, with checkpoints highlighted on the model as well. Let me rephrase that: they are checking, throughout the construction, what has been deployed, what the current status is, and what has been verified. They can go back to every part of the install and mark that a given part has been checked. So they can quickly move through a complex setup like this one and really have a vision of what has been checked, what remains to be installed, and what remains to be verified. It's a real use case, and all of that is shared with their BIM technology, so you can check afterwards what the person on site has done and what remains to be installed. Or if some pieces are missing or not correctly mounted together, they can add a comment so a technician can come and redo things, and produce a report with screenshots of the parts that are not correctly set up. So it really starts to be a useful tool for this kind of thing.

The second one is kind of the same, but for a complete BIM installation: pre-visualizing what will be installed on site, what the current day's work will be, and where things will stand in one day, two days, three days, and foreseeing whether there will be any issue in bringing a different company to work on the building at the same time. It's also a way to see, before and after construction, how things were set up and how to access things that are not visible from the surface. So if you need to access a specific cable or tube after it's built, you can dig in at exactly the location that's needed instead of tearing up the complete work.

And the last one is a mixed reality tool for a worker to set up an area in space that needs to be avoided by the, I don't have the name for that, the construction machines. The guy is basically using a tablet equipped with LiDAR, scanning the area, and saying: this part of the area, you should never go that far. That's sent to the construction machine and to the worker inside it, so they know which area is not safe to go to. I feel that's really a use case for mixed reality that starts to be interesting.
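The safety-zone check at the core of that last demo can be surprisingly small. Here is a minimal sketch, assuming the scanned zone has been reduced to a polygon on the ground plane and that machine positions arrive in the same site coordinates; the names and sample numbers are illustrative.

```typescript
// Geofence check for the LiDAR-traced no-go zone: the worker's scan
// yields a polygon, and each machine's telemetry is tested against it.
// Even-odd ray casting; a real system would add safety margins,
// altitude, and hysteresis to avoid alert flicker at the boundary.

interface Point2D { x: number; y: number; }

// Returns true if point p lies inside the polygon.
function insideZone(p: Point2D, zone: Point2D[]): boolean {
  let inside = false;
  for (let i = 0, j = zone.length - 1; i < zone.length; j = i++) {
    const a = zone[i], b = zone[j];
    // Does the edge a-b straddle p's y, and does a ray cast to the
    // right from p cross it?
    const crosses =
      (a.y > p.y) !== (b.y > p.y) &&
      p.x < ((b.x - a.x) * (p.y - a.y)) / (b.y - a.y) + a.x;
    if (crosses) inside = !inside;
  }
  return inside;
}

// Zone traced by the worker on the tablet, in site coordinates (meters).
const noGoZone: Point2D[] = [
  { x: 0, y: 0 }, { x: 12, y: 0 }, { x: 12, y: 8 }, { x: 0, y: 8 },
];

// Machine telemetry check: warn the operator before entering the zone.
const excavator: Point2D = { x: 11.5, y: 4.0 };
if (insideZone(excavator, noGoZone)) {
  console.warn('Machine inside restricted area: stop and alert operator');
}
```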
What are your thoughts? Fabien? You go first.

Yeah, I agree. I think it's a really interesting use case, and a lot of technologies are coming together for that. As with the metaverse, the name "digital twin" sometimes has a different meaning, but here it's indeed a twin version of the construction that updates with reality at the same time. And with headsets that leave the workers' hands free, or even, I don't know if this exists, AR on the windshield of the construction machine, or a drone to monitor the status of the construction in real time, a lot of these technologies are coming together to provide something that is really useful and has meaning for this type of work. So yeah, I think it's really nice.

Okay, I'll jump in. So I'll probably be the grumpy one. As usual, AR and this kind of application are no-brainers, I guess, for construction sites or industrial plants. However, as Fabien said, there are still some issues. The first one is, like you said, the digital twin. In the BIM world, there is still negotiation and discussion about who will pay for this digital twin, because it's not as natural as in the industrial world with Industry 4.0, where the digital twin is now a completely natural and integrated practice: everybody models their production line in 3D and can use the data very easily. That's still not the case in the BIM and construction world. Even if in the UK there is legislation for new buildings to have this digital twin integrated in the whole process, it's still not the majority of new constructions that have one, and that makes it hard to get the 3D models needed for these AR experiences. The second point is the famous one: we don't have the right device for this kind of work. As Fabien said, the devices we would need are not yet available. If we are talking about HoloLens or Magic Leap or whatever, try using them in direct sunlight; you will have trouble seeing anything in them. The third remark is about the tracking system. We know that in broad daylight, with light differences during the day and changing weather, your tracking is not as clean and drift-free as in the videos. Having tried displaying water lines and electrical lines in AR in the street and in the countryside, it's really difficult to have them in the right place. You are not as precise as you would like to be. In an ideal future, this kind of application would be, like I said, a no-brainer for anybody working on a construction site. But it's still not a reality for us, and we are far from being there, I guess. This is my cold-shower comment on this. We all agree that AR is very useful and the use cases are great, but we still have to wait before we can use it as it should be. Seb, you can comment on that.

No, I agree. It's early stage, and right now it's quite complex to put in place.

You agree that it's really frustrating, because we have been talking about these use cases for, what, 15 or 20 years? And we are still at the same proof-of-concept stage, a very nice projection of what it could be, and we don't have the tools yet.

Talking with one of my friends who works for a big construction company in France: they took a lot of time to switch to BIM, but now it's in place. So they are already at the stage of making everyday changes depending on how the construction goes. They update their plan and the position of the things they have installed, so they keep track of that, and they need to do it more quickly, because right now someone goes on site, takes the measurements, goes back to the computer, and adjusts the plan. So they need this to move forward. That's what he explained to me; he is in the AR team in this company. So it's something they are pushing for, I would say. They know it exists, they know it's not easy to implement yet, but they are at the stage of pushing to make it happen. I think this is a great sign.
We can note some country differences, because here in Canada, and especially in Quebec, they are still debating whether it's useful or not. And for those who are willing to bring BIM and digital twins to the table, the projects mostly get stopped because they are found too expensive or not efficient enough. So when I left France, it was more advanced there than it is now in Canada, and in Quebec especially. We have something like five or six years of delay between what is happening in Europe and what is going on in North America. And in Japan, do you have any insight on this, Fabien?

I don't. The only thing I can say is about small constructions, and I think it speaks to the budget as well. Small houses in Japan are made of wood, and they are being built constantly. There was one construction just here, and they are not even using plans. They're just building it. That's what I can see with my own eyes here.

I know for a fact that in France they are quite advanced, because there was another project where they had to renovate around 300 houses that had been built at the same time from the same model. But over time, they've had expansions and various modifications made by the people living inside. The company had to make a business model and estimate how much work it would take to renovate all of them, depending on the different configurations that exist now. And they used AI to simulate everything: simulate the different kinds of houses and quickly get an estimation of how much the whole thing would cost, and the different solutions they could explore. So I think they're already on the cutting edge of all those technologies, at least in France.

Yeah, great. Okay, so do you have anything to add, or can I bring my own topic?

That's it for me.

Okay, so we'll stay in the AR world, and we will be talking about Google, which is finally putting its Google Glass out of its misery. I would like to celebrate this ending by taking our time machine and going back to 2015-2016, when AR glasses were a trend, a complete frenzy if I can say so. During that time we had around 30 different devices announced by different manufacturers. You can see here a little example of what was coming to the market, and in first place you had the Google Glass. One thing I would like to say is that you can still get some of them right now: Vuzix, for example, is still working on smart glasses today. And you still have, of course, the Microsoft HoloLens. I think the Magic Leap was not there yet. But you can see some great brands of the time, like Laster, for example, which was very involved in the industrial AR world, or the Optinvent ORA. And you had some big names as well, like Epson and Sony, that wanted to take this part of the market.

So I'll just show this one. You can see here the number of glasses that were announced and their expected timeline, and this was in 2015, as I said. You can see that every company wanted to have their AR glasses. I guess just a few of them made it to the final point of the project, shipping, because most of them abandoned midway. And one interesting thing, on the last slide, sorry, is that at the time, when people got around to building their prototypes, they finally realized that the size of the system and the field of view were things that matter.
And you can see that only a few of them met what we could call the minimal requirements for powerful or useful AR glasses; you can see them in green. You had the Magic Leap, and I guess the HoloLens shouldn't be far from there. Only a few of them could have worked as real AR glasses, and that's why most of the glasses there on the left never saw the market. So I know that you two tested some of them, maybe most of them, because you were very involved in the AR development world in 2015, and I would like to have our AR guys' insights about what happened in 2015-2016, and why now, in 2023, we have only about three devices left: the HoloLens, the Magic Leap, and, I can say, the HoloKit you presented a few episodes ago. The market is very, very dry right now. Yeah, we have Apple, which should be announcing their AR glasses in June, but compared to what happened in 2015, it's night and day. So what happened, and what will happen in the future? What do you think about this?

I think I will let Seb speak to that. You have more insight than me on this one.

Sure. So I had the chance to be at CES during those years, and also in Barcelona for the Mobile World Congress. And I had the chance to test the first HoloLens version really quickly, and the Google Glass right when they were announced. I also tried the Vuzix one. And clearly, as soon as I tested them, there was a huge gap between Google Glass, with this idea of having only one screen on one eye, and what was feasible with the HoloLens. First of all, the experience was awful for me. Having something stuck in front of one eye was really a lot, and from what I heard, that's what happened for most people. It's like having a sticker in front of one of your eyes that always stays in place. That's not natural, and that's not something you want. And the HoloLens, compared to all the other devices, was the only one that allowed good environment tracking and really locked an object in space. That remains for me the key component that needs to work perfectly: you need to be able to recognize the space quickly when you walk in, without a complex calibration to do, and when you place objects in the environment, you want them to stick to the environment. Otherwise you don't have a great experience. I think you don't have this one on your slide, but there was a Meta headset that was released too, and that we tested at the time, and it was not tracking the environment correctly at all. Everything was moving through space with you, with a kind of delay. And that's the same with the HoloKit. I actually sent an email to the HoloKit guys asking whether this was a known issue, and they said: yes, we know, we are working on that. The issue I had with the HoloKit is not due to the version of my mobile device. It's really the code and the way they implemented it: they are not doing prediction of where you will be, just using the data from the ARKit component, which is delayed.
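For what it's worth, the "prediction" Seb is pointing at can be sketched in a few lines: rather than rendering with the last (delayed) tracking sample, you extrapolate the pose forward to the time the frame will be displayed. Real compositors do this with angular velocity and quaternions; this position-only, constant-velocity version just illustrates the principle, and all names are hypothetical.

```typescript
// Pose prediction by constant-velocity extrapolation: estimate head
// velocity from the last two tracking samples, then step forward to
// the expected display time instead of rendering the stale pose.

interface Sample { t: number; x: number; y: number; z: number; }

function predictPosition(prev: Sample, last: Sample, renderTime: number): Sample {
  const dt = last.t - prev.t;
  if (dt <= 0) return last; // cannot estimate velocity, fall back
  const k = (renderTime - last.t) / dt; // how far ahead, in dt units
  return {
    t: renderTime,
    x: last.x + (last.x - prev.x) * k,
    y: last.y + (last.y - prev.y) * k,
    z: last.z + (last.z - prev.z) * k,
  };
}

// Example: tracking samples arrive ~50 ms behind the display time.
const prev: Sample = { t: 0.000, x: 0.00, y: 1.60, z: 0.00 };
const last: Sample = { t: 0.016, x: 0.01, y: 1.60, z: 0.00 };
const predicted = predictPosition(prev, last, 0.066);
// Rendering with `predicted` instead of `last` hides most of the lag,
// at the cost of some overshoot when the head changes direction.
```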
So, yeah, overall, even the Magic Leap, it was interesting. We used it for Intel back in 2018, maybe. And the issue with the Magic Leap is that it has never been able to recognize the space by itself. You always have to do a small calibration so the environment gets back on track and is correctly tracked. And if there is a shift in space, if you move a lot from your initial position, there was no way to realign the environment based on the 3D tracking and the point cloud that you captured of the environment. So really, for me, the only one doing that well for an augmented reality use case is the HoloLens. And the HoloLens 2 just brought another level for the rendering, but in terms of 3D environment tracking, the first version was already great. They also thought about how to do the calibration with one HoloLens and deploy it to all the others. That was not feasible with the HoloLens 1, but they brought it to the HoloLens 2, and that allowed us to use it in a museum where a host runs the experience without knowing anything about the technology. That has been running for two years in a row, in Mozika in France and in Paris at the Galerie de l'Évolution. And yeah, we don't have any maintenance, unless a headset falls on the floor and they need to ship it back to Microsoft. In terms of calibration, we never did anything since we deployed the application. And that's really impressive.
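A rough sketch of the shared-calibration idea behind that deployment, under the assumption that content is authored once in a common "site" frame and each headset only needs the pose of one shared anchor in its own frame; the frame names and helper functions are hypothetical, not Microsoft's API.

```typescript
// Aligning many headsets to one authored scene via a single shared
// anchor. Column-major 4x4 rigid transforms (rotation + translation).

type Mat4 = number[]; // 16 numbers, column-major

// r = a * b
function mul(a: Mat4, b: Mat4): Mat4 {
  const r = new Array(16).fill(0);
  for (let col = 0; col < 4; col++)
    for (let row = 0; row < 4; row++)
      for (let k = 0; k < 4; k++)
        r[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
  return r;
}

// Inverse of a rigid transform [R | t] is [R^T | -R^T t].
function invRigid(m: Mat4): Mat4 {
  const r = [
    m[0], m[4], m[8], 0, // columns of r hold the transposed rotation
    m[1], m[5], m[9], 0,
    m[2], m[6], m[10], 0,
    0, 0, 0, 1,
  ];
  const [tx, ty, tz] = [m[12], m[13], m[14]];
  r[12] = -(r[0] * tx + r[4] * ty + r[8] * tz);
  r[13] = -(r[1] * tx + r[5] * ty + r[9] * tz);
  r[14] = -(r[2] * tx + r[6] * ty + r[10] * tz);
  return r;
}

// One-time calibration, shipped with the app: the anchor's pose in the
// site frame. Per device at startup: the same anchor as detected in
// that device's own local frame.
function deviceFromSite(siteFromAnchor: Mat4, deviceFromAnchor: Mat4): Mat4 {
  return mul(deviceFromAnchor, invRigid(siteFromAnchor));
}

// Every hologram authored in site coordinates is then placed with:
//   devicePose = mul(deviceFromSite(siteFromAnchor, deviceFromAnchor),
//                    sitePoseOfHologram);
```

Once a device detects the shared anchor, no per-device calibration is needed, which matches the hands-off museum operation Seb describes.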
Fabien?

Yeah, I completely agree. I think we talked about the Xiaomi glasses that were showcased a few weeks back, where one difference is indeed the world tracking. It seems like what separates a demo from something really used in a real-world use case is environment tracking. Maybe just having some information in front of the eyes is not enough for the user; the friction is too big, and maybe just having a smartphone or a tablet is enough to get the same kind of information.

Yeah, well, I guess we can get back to what we said about timing and time to market. Maybe those glasses were not released at the right time, because the technology was not as evolved as it needed to be. It's like with the metaverse right now: people are thinking it will be deployed in less than five years, but when you look at it on paper, lots of the technological elements are not there yet, so we know it won't come as fast as we would like in the near future. And that's what happened, I guess, with every innovation. We had this with VR, we had this with AR, and with AI as well: a few years back, Microsoft tried with their AI chatbot, Tay, which lasted about 16 hours because things went very, very sideways. It was learning from what people said, and it didn't go well.

Just one thing: Google also thought Glass could be useful for the general public, compared to Vuzix, which had the same kind of idea and technology but targeted more the industry, like Amazon and the like, where workers have to pick boxes and prepare an order for a customer, and where you can display step-by-step actions inside the headset and have the user validate each step. So it's really small pieces of information: just do this now, step by step. That was, I think, the better use case for this kind of glasses, compared to Google Glass, which they tried to sell to the public.

Well, I think Google Glass made one of the biggest mistakes by overselling and under-delivering, because I found out that the launch video of the Google Glass is still on YouTube, if you want to see it. But what they promised is absolutely not what the glasses could do. And I remember that on day one, when people got the glasses in their hands, it was like: wow, it's absolutely not what they said. I think Fabien was there when I first put them on; it was in Paris. And I said, wow, it's so bulky. The battery lasted like 30 minutes, less than one hour at best, and you got a kind of migraine after 15 minutes. They could launch that video right now and it could still be the promise: there is something like a 15-year window between what they presented and what is the reality right now. They were way ahead of what they could possibly do at the time. Like Magic Leap.

Yeah, Magic Leap. HoloLens made kind of the same mistake with their launch video, but they corrected it by saying, about two days after the official video, that what you were seeing is not what is inside the headset. And that was a good thing, because the expectation of the public was so high when they saw this spectator view slash AR view of what the people on stage were presenting. I think this is a great marketing case study of what not to do with innovation, if you don't want your product to be completely buried on day one.

It happens with this kind of technology. For VR headsets, I think it's more okay to increase a bit the quality of what you present. But for augmented reality, the gap between the marketing and what you see when you put the headset on is ridiculous. Especially on the field of view: you're always very frustrated by the keyhole effect of AR devices. Every time I put the headset on someone, the first thing they say is: oh, the field of view is narrow. Then they start to look into the experience, and if you design the experience correctly, to avoid showing too much of that border on the screen, it makes sense and people forget about it. But there is always that first "oh, I don't see much". It's not a wow effect, it's an oh effect when you put the headset on. With the HoloLens 2 it's like a mix: it's narrow, but oh, it's cool, it's really sticking to the environment, that's nice. But oh, it's narrow. It keeps coming back: the experience is amazing, but still, the field of view is small.

So, well, we are a bit over our 45 minutes, but I guess I'll thank you both for this very interesting discussion. We'll be back next week for our next episode. So have a nice evening, day, or morning, for every one of us, and see you next time.

Credits

Podcast hosted by Guillaume Brincin, Fabien Le Guillarm, and Sébastien Spas.