Welcome to episode 24 of Lost in Immersion, your weekly 45-minute stream about innovation. As VR and AR veterans, we discuss the latest news from the immersive industry. Let's go. Fabien, Seb, welcome. So, who wants to start today?

I can start. Okay, let's go. So today I want to talk about an article by someone whose content we've shared before: Karl Guttag. He's an engineer who has worked for many, many years on computer vision, video processors, graphics and so on, so he has a lot of knowledge about what's going on, and he's recently been very interested in VR headsets. We already talked about what he said about the Apple Vision Pro when it was announced. In this article he explains why he thinks that VR headsets won't work as a desktop screen replacement; he focuses on the Apple Vision Pro, but he thinks it's similar for the Meta devices and basically all current headsets. That's basically his message in the article.

When we talk about desktop screen replacement, we mean actually working in a VR headset with multiple windows: text, email, spreadsheets, all this classic desktop work, maybe coding. You've probably seen images like these two here, the Meta and Apple devices showing windows all around you. Of course, these are marketing images, not actual screenshots from the headset.

He goes very deep into the technical details, so I won't repeat all of it, but there are two main issues he raises. The first one is the number of pixels per degree of vision. On top of that, there are distortions created by the lenses sitting in front of the headset's display. You can see a screenshot here of a virtual desktop in the Meta Quest Pro, and next to it a screenshot of the same desktop taken on the monitor. It's not obvious at this size, but if you zoom in on the images, you can clearly see that in the real screenshot the text is perfectly readable, while inside the headset there are distortions and color aberrations that make the same font size really difficult to read. So what he says is that in order to see something clearly in the headset, you have to scale up the fonts, and therefore you lose screen space. And you can see the second issue, the distortion, at the top: you see the original image, and the corrections that have to be applied so that vertical lines actually look vertical, and those corrections introduce distortions in the fonts themselves as well.

So that's basically it. As I said, he goes into very, very deep detail about why all of this happens, but I'm curious to know what you think, and whether you have tried this. The message is basically: yes, it works, you can open many windows in your headset and work in there, but will users actually do it? Will it lead to eye fatigue and other problems, on top of things like headset weight and everything else that comes with it? So yeah, that's the main question I wanted to raise. Let's start with you, Seb, maybe.

Yeah, I tried sharing my screen on my Quest Pro and working like that, launching Unity, clicking buttons, and it's very weird.
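To put rough numbers on the pixels-per-degree argument Guttag makes, here is a quick back-of-the-envelope comparison between a desktop monitor and a current headset. The figures are illustrative approximations chosen for the sketch, not measurements from his article, and the function names are ours:

```python
import math

def monitor_ppd(h_pixels: int, width_cm: float, distance_cm: float) -> float:
    """Approximate pixels per degree for a flat monitor viewed head-on."""
    h_fov_deg = math.degrees(2 * math.atan((width_cm / 2) / distance_cm))
    return h_pixels / h_fov_deg

def headset_ppd(h_pixels_per_eye: int, h_fov_deg: float) -> float:
    """Very rough average PPD for a headset.

    This ignores lens distortion, which makes the periphery worse than
    this average, exactly the second issue discussed above."""
    return h_pixels_per_eye / h_fov_deg

# Illustrative numbers: a 27-inch 4K monitor (~59.8 cm wide) viewed from 60 cm,
# versus a Quest-Pro-class headset (~1900 horizontal pixels per eye, ~100 deg FOV).
print(round(monitor_ppd(3840, 59.8, 60)))   # ~72 pixels per degree
print(round(headset_ppd(1900, 100)))        # ~19 pixels per degree
# The headset delivers roughly a quarter of the angular resolution,
# which is why fonts must be scaled up and virtual desk space is lost.
```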
I think there's also something about the interaction. We're used to having a screen in front of us, a mouse to be precise about where we click, and a keyboard within reach to type. If we look at the Vision Pro, they haven't announced a dedicated interface to interact with it: you can see your physical Apple keyboard, or use your Mac. But I think that's another reason why this won't work, at least not until we get both the perfect interaction interface and the perfect rendering on screen. I haven't seen the result on the Vision Pro, but if what he's saying is true, and the fonts and everything you display need to be bigger in front of you, then it loses its interest, because like you said, we'll lose space instead of gaining some. The only remaining argument would be that you can use it everywhere, even when you travel on the train, without other people seeing what you're working on.

That's a good point. Yeah. What about you, Guillaume? Are you ready to work in a headset?

Yeah, that's funny, because we discussed this with some of my colleagues at work, and this is a use case people are really willing to try. Some people really like having a lot of big screens in front of their eyes, so there are people waiting for this. I looked at this article as well, and I was surprised, first, that the PPD, the pixel density, of the Apple Vision Pro is so low, because we don't really have official information about the headset's full resolution. For me it seems very low given what they want to do with the headset, so maybe we'll have to wait and see what the final resolution really is.

Then, we know the Quest Pro is really not the best headset for this kind of test, because it's known to have poor AR support. We'll see with the Quest 3 whether they improve that passthrough, and especially the ability to read text. I'm not sure about it, because I guess the resolution isn't that much higher on the Quest 3 either.

Another thing is that, once again, it confirms the fact that the cameras are not aligned with the user's eyes. And the Apple Vision Pro is maybe one of the worst in that respect, because its cameras sit very low compared to the eyes, lower than on the Quest Pro, for example. So the kind of distortion he's talking about would be an issue for Apple as well, and maybe even worse. That's really not encouraging for the use case they're promoting, because this use case is basically what the Apple Vision Pro is made for. If that's true, it's really risky for Apple to present it that way. So yeah, once again, we get information about this Apple headset that is not very reassuring about the final product, or maybe they'll find a solution before it's released on the market.

And the final point I'd like to mention is that Tim Cook made an official statement that he is using his Apple Vision Pro headset every day. I don't know what he's doing with it. Why not; it's a marketing statement, once again, but if you're making that kind of statement, I hope your headset is powerful or capable enough to make it worthwhile. So, more and more mystery around the Apple Vision Pro. Yeah, yeah.
Something I wonder, since you talked about the position of the cameras on the Vision Pro: I think there are two issues there, right? There's the ability to read text on real paper, through the passthrough, and the ability to read text on a virtual window, and these two aren't necessarily linked. On the Quest Pro, for example, the quality of the mixed reality passthrough is really low, so the ability to read real text is even worse, right?

Yeah, it's almost impossible. You can't look at your monitor through the passthrough and read a letter or a document in front of you; the camera, the brightness and the colors just don't work. Probably the same with your phone: if you look at it, it's far too bright right away. You can make out the shapes of letters, but you can't really read them. It's not usable. However, inside the headset, in the VR experience, when you look at a virtual piece of paper with text written on it, there's no issue reading it.

Yeah, basically it confirms what we discussed last week with the two prototypes Meta is working on: the varifocal one, which gives you different focal depths so you can read, and the other one, I don't remember exactly what they call it, with a matrix of lenses that lets you see the real world without distortion. Basically it confirms that the issue is still there and they are trying solutions. And obviously the Apple Vision Pro won't solve it, because it doesn't seem to have this kind of technology. So we'll see how the mainstream public reacts to this kind of issue. Maybe they'll find it completely unusable, or maybe, once again, the human species will adapt, because the Apple Vision Pro is so cool that you'll have to work that way and people will find some way to cope. But since you can't wear glasses with the Apple Vision Pro, I hope people won't hurt their eyes, and that after using the headset for several weeks you won't end up needing glasses and unable to use your headset anymore. Jokes apart, it could be a real problem.

Okay, yeah. I think that's it on that topic. I can take over if you want. My topic is about a transparent screen that is now installed in the subway in Tokyo. You speak in your own language in front of an attendant, your words are translated automatically into Japanese, and the attendant's answer comes back translated into your language. It sounds simple, but it isn't: everything has to be translated directly, automatically, to Japanese and back. I'll let you watch the video. They say that in the subway Google Translate isn't available because you don't have network access, so this is very useful for tourists to talk with the staff and get the right ticket to where they want to go. I wonder, Fabien, have you seen it in Tokyo yet?

I haven't seen it yet, and I wasn't aware it was there, so thank you for that, I'll have a look. There are a lot of challenges Japan is facing: the language, of course, and the number of tourists, which is really growing. And that station, Shinjuku, is one of the most difficult to find your way through. So I understand the need for this type of technology, and I'm really curious to try it.

They say eleven languages are available, so that's quite a lot. You're fluent in Japanese, so you don't need it. Yeah, but I'll fake it for the test. Some thoughts, Guillaume, about that?
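As an aside, a display like this presumably chains speech recognition, machine translation, and on-glass rendering for both sides of the panel. Here is a minimal sketch of that loop; the function names are hypothetical placeholders, not the APIs of the actual Tokyo installation:

```python
# Hypothetical sketch of a two-way translation kiosk loop.
# record_audio(), speech_to_text(), translate() and show_on_glass()
# are stand-ins for whatever services the real system uses.

def relay(speaker_side: str, listener_side: str, src_lang: str, dst_lang: str) -> None:
    audio = record_audio(side=speaker_side)              # capture the speaker's voice
    text = speech_to_text(audio, lang=src_lang)          # transcribe in their language
    translated = translate(text, src=src_lang, dst=dst_lang)
    # Each side of the transparent panel shows the text in the reader's own language.
    show_on_glass(side=speaker_side, text=text)
    show_on_glass(side=listener_side, text=translated)

while True:
    relay("visitor", "staff", src_lang="en", dst_lang="ja")  # tourist asks a question
    relay("staff", "visitor", src_lang="ja", dst_lang="en")  # attendant replies
```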
Yeah, I'm really happy to finally see a useful application for these transparent or semi-transparent screens, because they've been on the market for quite some time, and despite all the demos of viewing information through a transparent panel, there hasn't been any really good application of them in real life. This one is very, very smart. Having worked with these transparent LCD screens, I'm very happy to see them in something that's useful for everyone. And if it works, they'll have to put them everywhere, so I hope they get enough traction to make it a bigger project. But yeah, very smart, very well integrated. And I hope Fabien will give us a live report on whether it actually works or whether it's just another gimmick. It seems to be real, though, and I'd like your feedback on it. Yeah, I will go try it.

All right. The other subject I wanted to bring up is another interactive VR experience that I found funny. I didn't think we were there yet, able to use VR in this kind of use case: VR headsets on water slides. And they seem to have made the experience quite smooth for the user. Everything is cleaned beforehand in a dedicated case, and you select your ride by the color of the front part that you clip onto your headset. I think there's an NFC tag in it that both protects the headset in the case and selects the ride you want to do. So you can go down the same slide multiple times, but it won't be the same ride every time. I think they made it fun, and it's impressive that they managed to bring this into a watery environment, something headsets usually don't like. So I wonder if you have seen this kind of experience. I did one in Europa-Park a couple of years ago, but it was on a roller coaster, never on a slide like this.

Yeah, I think we did that same one in Europa-Park a few years back. It was really nice. I just wonder how the synchronization happens. Do they assume everyone goes at roughly the same speed and use one fixed timing for the experience?

At the beginning of the video they show a kind of laser sensor that detects when you enter the slide, so I guess your speed is more or less predetermined, or maybe there are several sensors along the ride, because depending on your weight you go faster or slower. I think that's how they do it, but they don't explain it in the video. I guess that's their secret. Yeah. It also seems easier to track the tube than the headset itself; you can see the sensors there, like checkpoints.

Yeah, well, I'm just a bit curious about the effect of all this. I know this kind of water slide VR integration has been on the market for quite some time, but this one is apparently better integrated and obviously easier to use, and I guess the immersion is better as well. But I'm very curious about what kind of immersion or effect you get in this kind of experience. You talked about the roller coaster, but on a water slide I'd guess it's even more intense, because your brain is completely overwhelmed by the mix of the water, the movement and the VR. I don't know how you can process all that information at the same time. And the other question I'm asking myself is: what's the point? You're there for a water slide, and you're not living the water slide experience.
You're doing more of a roller coaster one. So I'm not that convinced by the experience. Do I really want to wear a VR headset while I'm sliding down a water slide? I don't know. What was your overall feedback with the roller coaster one? Did people like the experience, or was it a one-time thing before going back to the normal ride?

I think it was about five years ago, but it was a nice way to experience the same ride in a different way. You can ride it without VR and then choose to redo it with VR. And now, I haven't done it yet, but they even have different boarding areas for the same roller coaster: depending on where you queue and board, the experience is different and you go into another story. You take the same headset, the same ride, you just don't start at the same point, and you experience a new narrative around it. So I think it was nice. There was the effect of the wind and of the ride itself, but no water. And here I wonder, like you, how it feels to get a sensation in real life that you don't get inside the headset, because they don't reproduce the water. I guess that would mess with my mind during the ride too. I also hope they don't get people throwing up too often, because on a roller coaster that's not that much of a problem, but on a water slide you'd have to close the whole thing to clean it up. So I hope they don't run into that too often. One thing is that it doesn't end in a pool, right? It's just the slide itself, I think.

But back to the roller coaster one at Europa-Park five years ago: we did it at the same time, and I was actually very impressed by it, especially because everything was really well synchronized. The loops and the turns really matched the movements of your body and the movements in VR, and it was a great experience. You start on a virtual train, and then the rails are removed, so you start to fly. The feeling was really good, so I hope it's the same here.

And we tried two different things: one was fully in 3D, and one was a 360 recording that moved you forward in sync with the train. Both experiences were great. I preferred the 3D one because it was more interactive. Like Fabien said, they removed the rails at one point, so you felt like you were flying, especially when you go up high and then drop quickly. What's that effect called, Fab? Airtime. Airtime, yeah. When you get that feeling of no gravity anymore, that's where they chose to place specific effects. It really enhanced the experience, or at least made it different from the real ride. And because I've tried a lot of roller coasters in VR and got very sick from them, I was a bit afraid this would do the same, but with the synchronization they had, and the fact that your body feels exactly what you see inside the headset, it's just the perfect feeling. That's really the best immersive experience I've done, I think.

Nice. Okay, so anything more to add about this water slide experience? Okay, so I'll go with my subject then. Today I'd like to talk about a new trend, which is all about NPCs, AI NPCs in fact. Over the past weeks, a lot of companies have started offering AI NPC assets for Unity. There are three of them, to my knowledge.
I tried two of them. You can find them on the Unity Asset Store. The two I tried are the one made by Convai (I'm not sure how to pronounce it) and the one by Inworld AI. They basically work the same way: you have an asset that you integrate directly into your Unity scene, and then you configure your characters, and the role that defines how each character behaves, inside the web platform they provide.

Between the two, between Convai and Inworld AI, I really prefer the Convai one. Just to show you: this is the dashboard for Convai, and this is the dashboard for Inworld. Basically, you create an account, you get an API key that you copy-paste into Unity, and once that's done you can create and customize your characters in the web-based interface. You have some simple characters here.

One thing that surprised me is that the avatars, from one solution to the other, are all based on Ready Player Me. I'm quite surprised to see how much Ready Player Me is spreading its technology across all the metaverses and applications that use avatars; they also have a partnership with VRChat, for example. And I heard they have their own AI solution too, so why it isn't the one used here, I don't know.

So when you create your own avatar, you go through this Ready Player Me plugin and customize the character as you want. You can see the 3D rendering of it, name your character, and choose a voice from a full set of digital voices that don't have many artifacts, unlike some artificial voices you can hear elsewhere. After that, you describe the character's backstory, meaning the role this character will answer in. And this is the fun part: the pricing strategy of these companies is based on this backstory, meaning the more words of input you provide, the more you pay, which is kind of smart, because they still give you a simple free tier to try it, with 2,000 words on the free plan. Then you can test it right here, using the microphone or the text conversation.

And the nice part is that the answering process is not that long between the question and the answer, and it's the same when you use it in Unity. You access the microphone, you press the spacebar or whatever button you've set, you speak freely, you have the speech-to-text, and then the text-to-speech answer from the avatar. The delay I found in previous solutions isn't there anymore: you get less than a second between question and answer. So it's really well integrated. And once you've created your characters, you can download the avatar and the character configuration directly into Unity through the dedicated plugin they made.

So, a really nice solution, very well integrated. I haven't pushed it as far as I'd like yet, because I haven't tried to provoke errors or weird behavior from the avatars. But even with a very small backstory, the avatar, well, the AI, answers really well, because you get the whole underlying AI experience plus this backstory.
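To give an idea of what these plugins wire together, push-to-talk capture, speech-to-text, a language model conditioned on the character's backstory, then text-to-speech, here is a minimal sketch of that loop. The function names are placeholders of ours, not the actual Convai or Inworld API:

```python
# Hypothetical sketch of what an AI-NPC plugin runs each time the player
# holds the talk button. None of these helpers come from Convai or Inworld;
# they only stand in for the plugin's internals.

BACKSTORY = (
    "You are Mira, a market vendor in a medieval town. "
    "Your favorite color is green. Answer in one or two short sentences."
)

def npc_turn(history: list[dict]) -> str:
    audio = record_while_key_held("space")               # push-to-talk capture
    user_text = speech_to_text(audio)                     # transcribe the player
    history.append({"role": "user", "content": user_text})
    # The backstory is sent with every request, which is why pricing
    # scales with its word count.
    reply = chat_completion(system=BACKSTORY, messages=history)
    history.append({"role": "assistant", "content": reply})
    play(text_to_speech(reply, voice="mira_default"))     # speak the answer aloud
    return reply
```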
So even if you're not very precise in the backstory, if you ask what color the avatar likes the most, it can answer completely naturally. Very interesting if you want this kind of AI NPC quickly integrated into your Unity scene. They also have WebGL support, which I really like, because for me this WebGL or WebVR way of doing things is part of the future, or maybe a path toward some kind of metaverse at some point. So yeah, very, very interesting.

Just one tip for you: if you want to try these, don't get them through the Asset Store. Download them directly from the website, because for some reason the Asset Store version doesn't work; you get errors and the plugin doesn't install correctly. Maybe it's on my end, but the only way I got it working was through the official website, and that's also the method they show in the tutorial. So, just so you know. What do you think, guys?

It's very interesting about Ready Player Me, because I've seen they also have a partnership with 8th Wall: in the 8th Wall SDK you can use Ready Player Me avatars. And yeah, it's really interesting. Do you know if they have their own AI, or if they're using something like OpenAI, or maybe Meta's open-source model? Yeah, continue please. Recently, a few weeks back, OpenAI added something that I think will really help this kind of thing: context. It's exactly what you described. You can add a context to the chat, and that context is applied to all the questions and answers in the conversation. That can really help push this type of technology forward, and I'm really curious to see how it gets used, how successful and accurate it will be, and whether we get some weird behavior. Yeah, really cool.

On my side, I wonder: like you said, you can ask the NPC the color of its eyes or its favorite color, but does that go into its history? If you ask the same NPC again a week later... Yeah, I guess it's just generic. Okay. Sure, stuff like that. Yeah.

Sorry guys, I had an internet disconnection, so I'm back. Yeah. We were asking: do you know if there's some kind of history, like if you give information to the avatar... Oh, whether it learns from what you've told it and feeds that data back in? I don't know about that. Apparently some people say it's based on a ChatGPT-like model, but there's no mention of OpenAI itself, so I'm not sure what AI engine they're using behind it. But I would guess that, like every company doing AI right now, they use your input to feed the engine, so they'll probably learn from it to gather more information and improve over the long term. So my guess would be yes.

One more thing: when you download the assets, they show a very nice avatar on the store page, and that's not what you actually get. You get the Ready Player Me avatars, which are more cartoonish. My next goal is to get one of my 3D-scanned avatars working with this, so I'll keep you updated. But you don't get that MetaHuman-like avatar out of the box; you'd have to work to get there, and I suspect what we see in the promo is the Unreal integration rather than the Unity one. It feels more like an Unreal vibe. So, yeah.
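For reference, the "context" Fabien mentioned a moment ago likely corresponds to the system message in OpenAI's chat completions API (or to ChatGPT's custom instructions, which work similarly): a persistent instruction sent along with every exchange, much like an NPC backstory. A minimal sketch, assuming the pre-1.0 Python openai client that matches this episode's timeframe and a placeholder API key:

```python
import openai  # pip install openai (pre-1.0 client style)

openai.api_key = "sk-..."  # placeholder, not a real key

# The system message conditions every answer in the conversation,
# playing the same role as a character backstory.
messages = [
    {"role": "system", "content": "You are Mira, a market vendor. Your favorite color is green."},
    {"role": "user", "content": "What's your favorite color?"},
]

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(response.choices[0].message.content)  # e.g. a short in-character answer mentioning green
```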
Do you have anything more? Just to be complete: the Inworld one is more of a general-purpose conversational AI platform. As you can see, the whole interface is less oriented towards Unity or Unreal integration, it's more generic, which is why I like it a bit less; it's harder to integrate. On the other hand, it seems to offer more features and more control over the avatar, which is something I'll have to confirm at some point. Their tutorials are also more precise, and longer. So if you're in a hurry, I'd say the Convai one is the better choice, and maybe Inworld if you need more in-depth control at some point. And as you mentioned, there's another one made by Ready Player Me itself, so maybe I'll try that one as well. Okay, so if you don't have anything more to add, are we good for today? Yes. Okay, so that's a wrap. Thank you for all this very cool information.

Credits

Podcast hosted by Guillaume Brincin, Fabien Le Guillarm, and Sébastien Spas.