Welcome to episode 49 of Lost in Immersion, your weekly 45-minute stream about innovation. As VR and AR writers, we discuss the latest news from the immersive industry. First of all, sorry about last week: we had a daylight saving issue, because here in Canada we changed our clocks, so I was the only one there at the scheduled hour. It's corrected now, and we'll have the same issue again in two weeks, I guess, but we will be prepared. So let's go for today's episode, and welcome, guys. Fabien, if you want to start.

Hello, okay, yeah. So I have a few updates after a couple of weeks with the Apple Vision Pro, and I also tested Roblox on the Quest, so those are my topics for today. Let's start with the Apple Vision Pro. Compared to two weeks ago, when we initially tested it and I showed you the persona, there has since been a visionOS update, and now the persona looks much better. I know that on this video it's a bit small, so it's not really easy to see, but there has been a huge improvement in how the head looks. It was already pretty good, but now it's even better. Let's see if I can pause here when zooming in. Okay, so I have my eyes wide open, but you see I was wearing similar clothes, with a white t-shirt underneath, and it rendered pretty well. So yeah, that's a major update, and the expressions and the face tracking are working really, really well. That's the first update I have. I don't know if you have any comments, or if you saw other coverage of this persona update, but yeah, it's a pretty nice one.

Well, I guess everyone agrees with you that it's better. The main question I have is: are we out of the uncanny valley? Because some users said they had this cringey feeling when they saw the persona. Is it gone, or is it just better?

In my perception, it's still there. It's not completely gone. I don't know if it's because I know what I'm looking at: first, I'm looking at myself, and I'm looking at an avatar, so there is still something, especially, I would say, in the eyes. Maybe it's because the eyes are inside the headset and tracking them is more difficult than tracking the facial expressions, but to me there's still something about the eyes that makes it uncanny.

Is it because it's not 100% responsive, and you feel like it's a bit static sometimes, compared to a normal face that's always blinking and always moving slightly? Yeah, I think all the micro-expressions that we all have are not... well, you can see it, but yeah. And also, I have short hair. I think we talked about that last time, but I'm really curious to see how it works with long hair. Okay, can I switch to the next one?

Okay, so now I wanted to show, because I don't think we did it last time, the difference in low light between the Apple Vision Pro and the Quest. I did the recordings at exactly the same time of day, with the same light setup in my office. I think we already said it, and everybody is saying it: there are absolutely no distortions. You can see your hands perfectly; there are no distortions on the furniture or in what you are seeing. The only thing I would say is that when the light is low and you move your head, the video starts to get blurry, and to me it's really noticeable; it's really disturbing to have a blurred video during head movements in low light. And if we compare with the Quest now: on the Quest, I would say the colors are brighter than on the Vision Pro. I don't know exactly why, but I think the video itself is a bit brighter. But the distortions are there.
Looking at your hands, you see distortion all around. So the Apple Vision Pro is still ahead; the quality is of course much better than the Quest in the same lighting conditions. So those are my two major differences, I would say.

Yeah, but which one has the higher fidelity to the real world? Is it the Apple Vision Pro, or the Quest 3 with its higher brightness? Yeah, the Vision Pro is the better of the two. The quality is better. It's not really visible in these videos, but there is less grain in the video. Even if it gets blurry, just the fact that there are no distortions changes everything for me. It's much easier to grab things and to walk around. So I would say, yeah, the Apple Vision Pro is much better in terms of fidelity.

Yeah. And I saw some feedback about being in low light and the hand tracking being slower, or having some delay. Were you able to compare the reactivity of the hand tracking on the Quest 3 versus the Vision Pro in this low-light environment? No, actually, that's a good point that I need to try. But I can say for sure that the hand tracking has delays. You see it in the apps: there are some apps where you can take something and move it, and you see the delay. And even with the occlusion, if you move your hand fast, you see a delay between your hand and the video. That's a good point; I will try the hand tracking on the Quest and on the Vision Pro in the same light conditions.

Yeah. If you can, also test placing an object on your wall or on the floor, then move around and compare whether there is some FPS drop or some weird behavior of the object you placed in the room in low light. On the Quest 3, it seems, for me at least, that in low light it takes more time to render and the object tends to shift a bit in position. If I put something on my desk, for example, and move around, I can see in low light that it moves much more than in bright light. Yeah, that's a good point. I would say overall, my perception is that the tracking stickiness, if that makes sense, of the Quest 3 is better than the Vision Pro's. With the Quest 3, when something is placed in the space and you move around, it sticks to the millimeter. With the Vision Pro, I don't know how to say it, but you can see it moving a bit. So, yeah.

Talking about this, actually, I did test building a Unity project and launching it on the Vision Pro. I will get back to that stickiness, because I was able to test with a high-polygon model in Unity. First, about the developer experience itself: we talked about it months ago, you need a Unity Pro license or above to be able to build the project. Other than that, it's the usual Apple development experience. You need Xcode, you need to download a ton of things, but once it's set up, I just built the sample from Unity, launched Xcode, and that was it: it was on the Vision Pro. So, no real difference from developing for iOS. I had to activate developer mode on the Vision Pro to pair it with my MacBook. Nothing out of the usual experience, I would say.

Now, the interesting thing with the Vision Pro is that when you create an app, you have two modes. I don't know what they call them, actually, so maybe I will use the wrong vocabulary here. But there is one that is kind of multitask: you can have Safari, your photos, and your Unity app, and you can build experiences in that small space. And then you also have the ability to take up the whole view and use the spatial tracking and the fully immersive experience.
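For reference, this is roughly how those two modes look in a native visionOS app: a minimal SwiftUI sketch (placeholder code of our own, not the Unity sample Fabien built) declaring a shared-space window next to a full immersive space.

```swift
import SwiftUI
import RealityKit

@main
struct DemoApp: App {
    var body: some Scene {
        // Shared-space window: coexists with Safari, Photos, Disney+, etc.
        WindowGroup(id: "panel") {
            Text("Windowed content")
                .padding()
        }

        // Full immersive space: the app takes over the view and can use
        // world tracking, scene meshes, and hand input.
        ImmersiveSpace(id: "immersive") {
            RealityView { content in
                // Placeholder content; a real app would load its own entities.
                let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                         materials: [SimpleMaterial()])
                sphere.position = [0, 1.2, -1]
                content.add(sphere)
            }
        }
    }
}
```

The window shows up alongside other apps by default, while the immersive space has to be opened explicitly (for example via the `openImmersiveSpace` environment action), which matches the switch to the full immersive view described next.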
So, here, let's see. Here you can see this is the Unity app, and I loaded Disney+ just to show that you can have a multitask, multi-app experience alongside your Unity app. And then you can switch to the full immersive view. The default example in Unity is that you can detect the surfaces and tap on them; it just instantiates an object, it pops an object where you tapped. You can see me trying this here. Yeah. So, it's a pretty simple experience.

And just for fun, I loaded a very high-polygon model. Let's see. Oh, and you have physics, I forgot to mention it, but you can have physics. It's Unity, so between the surfaces it's pretty easy. And here you can see I loaded two very high-polygon models, as heavy as I could, and it starts to slow down. It's not really visible in the video, but if you look around the menu, you will see some distortions happening. This is due to the high polygon count; something in the rendering starts to lag, and it's really visible in the headset. I don't know how many polygons I had, I think it's millions. So it's still quite good, but yeah, when that happens, the 3D is lagging and the video is a bit distorted around it, so it was pretty strange.

Is it one million or several millions? Several millions, yeah. I can look up the numbers. And when you are looking at that, the whole scene is lagging as well. So, I would say it's the pretty usual experience when a scene reaches the limits of the computer: everything starts to lag. Yeah, but you were not surprised by that; sometimes you can have lots of triangles and at some point you say, wow, that's a lot of triangles before it lags. Here, you're not surprised, this is the usual limit, I would say, because you just have two objects, but very heavy ones. You were not surprised by the power of the Apple Vision Pro, meaning you didn't put in ten cars and say, whoa, I can display ten cars with this kind of mesh. Did you do the same experiment with the Quest 3 to see if it lags with the same models? That's a good question, and I didn't yet, but I should, just to see. But yeah, this model is especially very, very heavy. So I think the Apple Vision Pro still packs a lot of power.

Okay. Yeah, we need to make a small lab Unity project so we can test on both, drop some objects, and display in the UI how many polygons are currently being drawn, so we can really measure that. That's something I also need for the Quest 3, so I can tell our graphic artists how many polygons they can work with for our project scenarios. So yeah, I think we need the same for the Vision Pro. Okay.
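The tap-to-place sample Fabien ran is Unity's; for comparison, here is a hedged RealityKit sketch of the same tap-on-a-surface, spawn-an-object-with-physics behavior, as we understand the visionOS gesture API. The floor entity stands in for a detected surface, and everything here is illustrative rather than the actual sample code.

```swift
import SwiftUI
import RealityKit

struct TapToPlaceView: View {
    var body: some View {
        RealityView { content in
            // Stand-in for a detected surface (real plane detection not shown).
            // It needs collision + input-target components to receive taps.
            let floor = ModelEntity(mesh: .generatePlane(width: 2, depth: 2),
                                    materials: [OcclusionMaterial()])
            floor.position = [0, 0, -1.5]
            floor.generateCollisionShapes(recursive: false)
            floor.components.set(InputTargetComponent())
            floor.components.set(PhysicsBodyComponent(mode: .static))
            content.add(floor)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Convert the tap into the tapped entity's space and drop
                    // a small dynamic box there, like the Unity sample does.
                    let point = value.convert(value.location3D,
                                              from: .local, to: value.entity)
                    let box = ModelEntity(mesh: .generateBox(size: 0.1),
                                          materials: [SimpleMaterial()])
                    box.position = point
                    box.generateCollisionShapes(recursive: false)
                    box.components.set(PhysicsBodyComponent(mode: .dynamic))
                    value.entity.addChild(box)
                }
        )
    }
}
```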
And lastly... oh, sorry, that should read Roblox, I made a typo on the slide. So, we've been talking a lot about Roblox over the past year, and Roblox is now on the Quest 3. Last time I tested it, I wasn't really careful and I just tried the first experience that was on the Roblox home panel. What I've learned since is that the Roblox developers actually need to enable VR; they need to do some things in Roblox Studio when they create and push their experiences. The one I tested, unfortunately, wasn't ready for VR, so of course my experience was very bad: there was no way to properly move your character, the UI wasn't ready for VR controllers, and so on.

So now I've tested this one, and you can see me being very bad at it, but this one really is ready for VR and it works very well. I didn't have any issues loading the experience, it's compatible with the Quest and its controllers; I just had a hard time figuring out what to do in it. But everything was working really nicely. So it's a bit of an update on my last comment about Roblox VR: when the developer actually enables VR, it works quite nicely. That's the thing with user-generated content, we are dependent on the creators. I don't know if you have any comments on that.

Well, it's still ugly. Same comment as before. But there are many worlds, and this is only one, right? Yes. Yeah, many, many. And I didn't really figure out the mechanics of the game. Some people would wave, you can see me here waving at someone, and someone else would just kill me immediately. I didn't really understand what was going on in that world. But anyway. So, yeah. Wave. Hello. But yeah, it was actually a pretty fun experience. I had a hard time figuring out how to reload; you can see me trying to reload here. But anyway, it was working very nicely. Okay. Anything more? No, that's it.

Okay. So, Seb, what's your subject for today? So, today I have two projects, in progress and in development for the Apple Vision Pro, that I wanted to share. One is about showing an old building on top of a new one, to show its heritage and showcase how it looked before. I think the result is quite nice. I think they did a Gaussian splat before the building was destroyed, so they are able to reload that in the Vision Pro and display how it used to look. Maybe you can already comment on that.

Yeah. It's interesting to see that the old use case we used to do with AR on tablets is way better now, and it really makes sense. It's very, very nicely done. I'm not sure we can still improve the 3D scans, or how the building was captured when it was originally built and still there, before the station. But it's very interesting to see the anchoring in broad daylight. We already mentioned this in previous weeks about the Apple Vision Pro: very interesting to see that it can work very well outside, in more or less sunny conditions, because we can't really see the sun in those streets. But yeah, I guess we finally have our... it's not BIM, but our building use case. Very interesting.

Yeah, totally. And this one shows an old building, but you can imagine the same thing for showing what a new building will look like when it replaces the current one. So, yeah. I'm curious to know how the setup is done, how they initially place the building. Did they find some way to use geolocation and automatically place the 3D model, or is it something they have to do manually? I'd be really curious to know the process and how that works on the Vision Pro. Otherwise, yeah, it's a very nice use case. Because right now, on the Vision Pro, it is possible to do some anchoring, right? Like you put your hand on the floor at one point and that's one anchor, one place, then you put your hand on the floor somewhere else and that's a second anchor, and then it can position the building right on the spot. So, yeah, this is what the Unity sample is doing: you can tap on a surface and place an object.

My question was more: does it also save it for the next session? I didn't have that, no. Every time I reloaded the app... It was not saving your previous anchor? Okay. That will be something to dig into, because if every time you put on the Vision Pro you have to redo the same calibration, yeah, it starts to be a problem.
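On that save-for-next-session question: natively, visionOS ARKit exposes world anchors that the system persists across app launches, so a placement can in principle be restored; whether the Unity layer surfaces that is exactly the thing to dig into. A rough sketch, assuming the ARKitSession / WorldTrackingProvider API (the helper type and its wiring are hypothetical, and world tracking only runs from an immersive space):

```swift
import ARKit
import RealityKit

/// Hypothetical helper: saves a user-chosen placement as a world anchor and
/// listens for anchors the system restores on later launches.
@MainActor
final class AnchorStore {
    private let session = ARKitSession()
    private let worldTracking = WorldTrackingProvider()

    func start() async throws {
        // Requires the app to be running in an immersive space.
        try await session.run([worldTracking])
        Task {
            // Anchors added in earlier sessions are delivered again here,
            // which is what would let a placed building survive an app restart.
            for await update in worldTracking.anchorUpdates where update.event == .added {
                print("Anchor available at:", update.anchor.originFromAnchorTransform)
                // Re-attach the 3D content at this transform.
            }
        }
    }

    /// Persist a placement chosen by the user (e.g. a tap on the floor).
    func save(placement transform: simd_float4x4) async throws {
        try await worldTracking.addAnchor(WorldAnchor(originFromAnchorTransform: transform))
    }
}
```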
All right. Another funny test was this one. It's on a cliff, and it seems like the zombies really stay on the floor. I wonder how they do it: is it only using the real-time scan to position them correctly on the floor? That would be my question for you guys. Do you think they do it that way?

Well, if you have the 3D mesh generation, it should be pretty simple in Unity to place an object and make it follow the physical floor. For an indoor experience, that's not that surprising. But the fact that it's working in the forest, outside, is very interesting to me: it can scan outdoors as well as indoors, and we know that especially in a forest there are lots of elements, and the light as well. So I guess it proves that the mesh generation of your environment is very efficient, and an object can be placed on it and stay on it, as long as it's physicalized, probably in Unity, I guess. Yeah, I'm trying to see if there is some occlusion as well. Not on the trees, it doesn't seem so, but when they are inside the building, it seems like there was. No, it's moving very fast, and I'm not sure it can capture all the trees. Yeah, it's a very scary experience. Fabien, do you have something to say?

Yeah, I agree. For theme parks and, you know, haunted experiences, mixed reality is amazing. I think it's very, very easy to scare someone using mixed reality, so I think we can expect a lot of similar experiences in the future. We still have time before Halloween, so...

Damn. And this part in yellow is showing the scan, so yeah, there should be some occlusion on some of the trees, but maybe it's not perfect. Yeah, the outside scanning is very impressive. And the occlusion is much better inside, because the mesh they get is much nicer. So the question is: is it the native mesh generation, or did they manage to improve it at some point? Because you mentioned that when you're developing for the Apple Vision Pro, it is a very Apple-y way of developing. Is it the same issue we usually have, meaning you don't have access to anything in the lower layers? Did you try this, Fabien, or did you just use the plug-and-play, plug-and-dev module for Unity? Yeah, I just used the one for Unity, so indeed, that's something we have to dig into. Yeah, because can you develop in Unity or natively in Xcode, or do you have to go through Unity? No, there are RealityKit and other SDKs like that that you can use in Xcode. Okay.
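For what it's worth, this is the kind of native setup that would keep a character glued to the real floor, indoors or in a forest: visionOS scene reconstruction streams mesh chunks that can be turned into static colliders. A sketch following that pattern as we understand it (the helper type and its wiring are our own, not the demo's code):

```swift
import ARKit
import RealityKit

/// Turns the live scene mesh into invisible static colliders so that
/// physics-driven characters (zombies included) rest on the real floor.
@MainActor
final class SceneMeshCollisions {
    let root = Entity()                       // add this to your RealityView content
    private let session = ARKitSession()
    private let sceneReconstruction = SceneReconstructionProvider()
    private var meshEntities: [UUID: Entity] = [:]

    /// Runs for the life of the session (call from an immersive space).
    func start() async throws {
        try await session.run([sceneReconstruction])
        for await update in sceneReconstruction.anchorUpdates {
            let anchor = update.anchor
            if update.event == .removed {
                meshEntities[anchor.id]?.removeFromParent()
                meshEntities[anchor.id] = nil
                continue
            }
            // Convert the reconstructed mesh chunk into a static collision shape.
            guard let shape = try? await ShapeResource.generateStaticMesh(from: anchor) else {
                continue
            }
            let entity = meshEntities[anchor.id] ?? {
                let e = Entity()
                meshEntities[anchor.id] = e
                root.addChild(e)
                return e
            }()
            entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
            entity.components.set(CollisionComponent(shapes: [shape]))
            entity.components.set(PhysicsBodyComponent(mode: .static))
        }
    }
}
```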
And yeah, a funny experience I also wanted to share, something quite scary too: floor screens that simulate, when people walk on them, glass that is breaking, with the sound in sync. So it's a really scary experience, for me. I think I'd be worried about people jumping when they're up there on top of the cliff. I did this kind of experience as well in lifts, where the floor was a screen and at some point the floor drops and you can see the lift falling. Yeah, you can see the people who have already experienced it. And you don't have the sound here, but really, the sound they made is quite awesome. So that's the first kick, where people start to look at their feet and understand that something is going wrong. Yeah, I'm not sure about the... no, the last one, I think, is a fake one. Well, it's a demo. Some of them may be overplaying it, but yeah, we understand the project. And that's it for me. If you want to comment on that, go ahead. No? Okay. No.

So, I have some updates about the AI Pin by Humane. You know we talked about it a few weeks, months back, and we came to the conclusion that maybe it was a scam at some point, because you couldn't fit the projection and the AI and all those sorts of things inside this kind of device. And there was a live demo of it, I guess at the Mobile World Congress a few weeks back. You can see it in action, with the woman here asking it to do a simple conversion. So this is the AI demonstration. There was some not-so-good feedback about this video, because she asks the question and it takes around five to nearly ten seconds before she gets the answer: she has to speak, the speech has to be recognized, then the device has to search for the answer and play it back with text-to-speech. So lots of people are complaining about, or criticizing, the device, because this kind of answer you can get in less than two seconds with your cell phone. So there's no real point to it. That's the first one.

And then you have the projection demonstration here. You can see that, apparently, it's working: you can have a projection on your hand, and there is some gesture recognition as well for you to interact with the device. But once again, the use case they showed was not that meaningful. You can't really see the improvement of this kind of device compared to a cell phone or the other devices we are used to. And the thing is, maybe it raises more questions than it answers right now, because she mentions that the projection they are doing is laser-based. So there seems to be a laser projector inside it. But when we look at the size of the thing, if it is really autonomous, I can't see how they can fit a laser projector, gesture recognition, touch control, and the AI, because they are telling us this can be used without a cell phone, completely standalone, and all of this works in this very small form factor. So I'm very, very... not suspicious, but I can't understand how it works, because we know that a laser projector, even one doing a monochrome projection, still produces a lot of heat, needs some fans, and consumes a lot of power. So we can have some doubts about how it works, because we never see the person installing the pin. We don't know how it is attached to the jacket, because when you see the device itself, they never really show the back, and we don't know how it sticks to your clothes. So my feeling is that at some point there is a battery or something else inside the jacket, and it snaps on, maybe with a magnet or so. So there are still some questions about this.

And the main conclusion of this demonstration, I guess, is that people have just lost hope in this device. There are lots of rumors implying that the company tried to get acquired at some point and it didn't work, and now they have to release the real product and they are not that into it; there were some layoffs as well in the company this past week. So they're not really in the best state of mind.
And so the company is not really heading toward a very big success, a very big launch of their product. They are not communicating a lot about it, so we get the feeling there is not much motivation around the launch. So I'm not sure about the future of this. What do you think about it, guys?

Well, I don't have many more comments than you on the hardware itself and on the form factor. I also saw that the UI seems to be... sorry, not the UI, the user experience seems to be really complex. Just to send an SMS, you have to go through a lot of different interactions for a simple action. One thing, though, on the concept itself of AI-driven gadgets: I've kind of updated my opinion about them, because I had the luck to try the Meta Ray-Ban glasses, which are similar in concept; you talk to a device that replies to you. The Meta glasses are tied to the smartphone, but my experience was actually pretty good, like "Hey Meta, take a picture." So I've updated a bit, into: okay, I'm a bit more of a fan of these devices, if they work nicely. But as you were saying, if it just provides the same features I have on my smartphone, with a worse user experience, then yeah, that doesn't make sense. So, yeah.

Yeah, I agree with you, Fabien. All of those need to be faster and nicer to interact with than taking out your smartphone. Otherwise, we will keep the same way of interacting with our devices: we take out our smartphone and type on it directly. Yeah, it's always the same message. It has to be useful, or, if it is less powerful than the tool we already have, it must provide something else; it should bring something to the table that we don't have, if they want us to make the effort to take a step backwards in the UX. Meaning that if you have to wait five seconds, the experience has to be through the roof for us to use it. I guess it was the same thing when smartphones became touch-based: for the old guys like us, we knew that resistive and capacitive touch were not that great back in the day, and at the beginning we were faster with a keyboard than with touch. We had to make some sacrifices to use it, but the UX was globally better, so we adapted and we were patient while touch became as good as it is now, which was really not the case back then.

Okay, guys. So, do you have anything more to add? Yeah, I would also say, on this topic, that even the keyboard has changed, and you have different options on your smartphone depending on what you prefer and the way you type. And I think here, for the AI, it will be more complex, but the AI needs to understand how you want to interact with it and adapt to you. I think it's like when you are using Copilot, for example: after a few weeks, it understands how you are working and it adapts its way of working. So this is one question we have, especially about the training part, about how you prompt or interact with the AI: because it is evolving and adapting to you specifically, it's very hard to give advice or best practices, since it is evolving with you. You can have training for the first few days or weeks of using it, and then it will be shaped by what you're doing with it. So it's very interesting to see that this kind of feature is not that...
We don't have much communication about this adaptive way of using Copilot, but it's something you can easily notice when you are using it on a daily basis. So the AIs are already doing this. Just one last thing: you mentioned the way we had to adapt to touch, and after a few weeks with the Apple Vision Pro, we definitely have to do the same with eye-and-pinch. There are a lot of misclicks and scrolls that go too far, so there is definitely some training in there for us as well. And misinteractions too, do you have that? Yeah, it picks up your hand doing a gesture that was not meant to be an interaction, and then it triggers one. Yeah, okay. Or I stay stuck scrolling: even when my fingers are not touching anymore, it keeps sliding, so I go to the next page instead of... So yeah, there is a lot of frustration there. Is it because you are tired at some point, and maybe your eyes and your pinches are not synchronized anymore, or you are not paying attention to what you're doing, and then it all gets messed up? There is definitely a synchronization issue. Yeah, definitely. Okay.

I had one last subject, Guillaume, if you want to share my screen. It's about AI and the Vision Pro: a test that NVIDIA released with a company called Katana, which has done some experiments with the Vision Pro using NVIDIA Omniverse and its new release. They were able to create an app where you can stream the content directly to the Vision Pro and have some interaction: you can change the background, change the environment around your car, for example, and jump in and sit inside it in your living room, all streamed via the NVIDIA Omniverse system.

Okay, so this is... Their conference was yesterday, like twelve hours ago, so I haven't looked at it at all. It was announced at GTC 2024, but I will dig into it for next week. So this is rendering streaming? That's what they announced? Yeah, that's what they announced. Meaning they could display way more triangles than we are doing right now. Yeah, and get better performance. So that could be a way to really increase the quality of the experience, if they are able to do that. We'll see if we have to go to Omniverse, or if other companies release something that lets us do this kind of experience. The other question about this is: is the Wi-Fi, or wireless connection, fast enough to display large-scale models? I know Wi-Fi 6 on the Quest 3 is not enough for pass-through experiences. They don't process the video on the headset when they do this: they stream the video from the headset's cameras to the PC and stream the rendered video back, so you get really low performance and it's unusable. Maybe with the Apple Vision Pro, with its specific chipset handling all of that part locally on the headset, the experience is better. Okay.
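On the "is Wi-Fi fast enough" question, one clarifying point about remote rendering in general (not specific to Omniverse): the link carries encoded video, so its bit rate depends on resolution, frame rate, and compression, not on how many triangles the server renders. A back-of-the-envelope estimate, with every number being an assumption for illustration:

```swift
// Rough, illustrative numbers only; not Omniverse or CloudXR specifications.
let pixelsPerEye = 2048.0 * 2048.0   // assumed per-eye stream resolution
let eyes = 2.0
let framesPerSecond = 90.0
let bitsPerPixel = 0.1               // ballpark for a decent-quality HEVC stream
let mbps = pixelsPerEye * eyes * framesPerSecond * bitsPerPixel / 1_000_000
print("≈ \(Int(mbps)) Mbit/s")       // ≈ 75 Mbit/s: within Wi-Fi 6 throughput
```

Under those assumptions the raw bandwidth fits comfortably in Wi-Fi 6, which suggests the pain point Seb describes on the Quest (camera video out, rendered video back) is more about latency and jitter than throughput.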
Fabien, do you have something to say about this now? So, this is it. Do you still have something in your bag for us? No, that's it. Okay. So, that's it for today. Thank you, guys, for this episode, and we'll see you next week, because we don't have an hour change next week. So, we'll see you, guys. See you, guys. Thanks. Bye.

Credits

Podcast hosted by Guillaume Brincin, Fabien Le Guillarm, and Sébastien Spas.
Lost In Immersion © {{ year }}