Welcome to episode 14 of Lost in Immersion, your weekly 45-minute stream about innovation. As VR and AR veterans, we discuss the latest news from the immersive industry. So let's go, Fabien, if you want to start.

Yeah, thanks. This week I want to discuss the news and rumors that are getting more and more precise about the next Meta headset, the Quest 3. It's the, quote-unquote, low price range of the Meta headsets, since the Meta Quest Pro sits at a much higher price point, and it's set to be released at the end of this year. What we know for now is that it should have a smaller form factor, so much lighter than the Quest 2. There will be color cameras, so it will be a real mixed reality device, and it seems the color cameras are a huge improvement. I saw someone comment that it's lifelike mixed reality, where you can actually use your phone, for example, in mixed reality. So let's see. And this mixed reality mode will be even better because they will add a depth sensor at the front of the device. So instead of using only the RGB color values to map the space, they can also use the depth sensor to really map the room and know where the walls are, things like that. That's a good improvement, and I'm curious to know what you think about it. Also, we know that the Quest 2 sold, I think, over 2 million units, but the usage is actually quite low. So I wonder if this kind of device, with the addition of mixed reality, can bring more users and more content and actually get people to use it more. Or is usage more about content, and we need better content for all kinds of devices? So yeah, maybe Seb, what do you think about the Quest 3?

Well, the improvements are nice. Definitely, having a depth sensor like the HoloLens to better track and recognize the space and the environment, so the headset knows where it is, that's necessary. Having the Quest Pro, I can tell you that it's not working most of the time, so you still have to calibrate every time, which is bothersome and breaks the experience. So having that on top of what they already have would be very nice. Also, on the Quest Pro, in terms of mixed reality, the black and white cameras mixed with only one color camera in the center make a weird blend: you get a lot of deformation, what you see seems really diminished, and you are not able to look at your phone and things like that. I tried the Lynx, which uses two RGB cameras, and that was already much better. So if they also increase the camera resolution on top of that, it should be a good mix. Now in terms of usage, like you said, I'm not sure there are a lot of mixed reality applications right now that are meaningful enough to bring more people in and get them to use it, at least right now. And the headset is quite big. So even if you use mixed reality with your friends, you will still have a huge headset on your face. The form factor should be smaller for it to be used in real spaces, in environments other than your home. So I think there is still a step to take to make it more consumer friendly. They say it should have a smaller form factor, but we don't know by how much. And you said compared to the Quest 2? Do they say how small it will be compared to the Quest Pro, which is already smaller? I don't know. What do you think, Guillaume?

Well, there are a lot of things to say about this. The first thing I saw is about the controllers: they won't have the tracking ring above them like on the Quest 2.
And they are talking about passive controllers, which would mean no batteries in them, I guess. But the question is, what kind of tracking are they going to use for this, if you don't have the upper ring that was basically the only way for them to get efficient controller tracking? Or maybe they are just planning to rely on hand tracking. But yeah, it leaves us with lots of questions to reflect on. The next thing is that I find there are a lot of headset iterations in the Meta/Oculus camp. The Oculus Quest 2 and Quest Pro are not that old, and we have already seen the price going down very, very fast. I don't know how they can iterate on headsets this fast, because we are more used to a 1.5 or 2 year window between devices, and here we are at less than a year. So I don't know if it's a sign of financial difficulties and they are trying to find the headset that will make a huge breakthrough with the general public. As you said, the Quest 2 is not used that much, and for the PSVR 2 the number of headsets sold is lower than expected. We are still in this financial moment where people are not spending their money on leisure or entertainment. So once again, if these headsets come with some very interesting use cases for everyday life, I think people could still buy them. On that point, some people are saying that the Quest 3 should be a competitor for the Apple AR headset, because, as you said, they are putting all the effort into the video see-through technology. And I find this a bit weird, or funny, I don't know, because on the Quest 1 and Quest 2 the video see-through was there just for safety, as a guardian, and now they are finding out that people really want to use its AR capability. If you look at social media, you can see that the most trending videos about the Quest are about its AR uses. So maybe there is a switch within the Meta team from VR to AR, and they are finding out that maybe now they have the technology to provide an efficient video see-through effect, like the Lynx you talked about, Seb, a few weeks back. Maybe they finally achieved the one-to-one ratio which makes it easy to use. So lots of questions and few answers for now. Please tell me your thoughts about it.

So I'm curious indeed about the controllers as well. Like, really, how will that work? I know that Meta is big on AI as well, so will they start including some AI interpolation, or AI models, for example to build an avatar from only the hands or only the two controllers? That's a question, yeah. And also, comparing the potential Apple device with this one, the price tag is really different. So I'm really curious to see what kind of quality comes from both of them, both on hardware, of course, and, as we have discussed many times already, on content. Because Apple is already huge, huge on content, so it's easier for them to bring that content onto their device. So yeah, lots of questions. This will be a very interesting year for XR.

One of my fears is that, because they want to bring the price even lower than it already is, they might completely sacrifice the quality of the controller tracking. Because if the controllers are passive, I really don't know what kind of technology you could use: it means you don't have infrared LEDs inside the controllers, and no cameras either. On the Quest Pro, they are using cameras inside the controllers to do the tracking themselves, like LiveWeave is doing as well with their self-tracked devices.
And also, removing the force feedback is a bold move, I think; it adds a lot to the interaction. And from what I have seen, they also removed the eye tracking and the face tracking, so you will not be able to have expressions and show them to other users.

Maybe this is a complete switch from VR to AR, because once again, the AR use cases are the easier ones for most people to adopt. So maybe they are just targeting that audience: fewer games and more everyday-life interactions.

Yeah, but it seems weird not to have expressions through face tracking or eye tracking, because you need to convey that even more in augmented reality. You need to see the reaction of the other user next to you. So it seems really weird to remove that functionality. So we will see. It's for the end of the year, I guess, and hopefully we will have the Apple AR headset announcement before that, so they could still modify their product by then, I guess. So do you have anything more to add, Fabien? Or maybe Seb can talk about his own topic.

Yeah, I'm good. So today I wanted to talk about NVIDIA, which released its new AI generative character system with a nice video showing a human asking questions, in-game, to a 3D character whose voice and text are controlled by AI. Here the user is speaking and Jin is the AI character. So I think things are moving pretty fast on the game side as well. It seems like it will bring much more interaction, and not pre-scripted interaction, with the game and the characters in the game. I think that's good for the depth and the quantity of content that will be available inside a game: the ability to really talk to a character, ask for directions, and be more free in the game. I don't know, what are your thoughts on that?

I'll start if you want. So I saw the video as well, and I'm a bit disappointed, I guess. Because when you watch this video, the exchange between the real user and the AI NPC is actually quite mundane. I saw some comments about this, and people say, well yeah, this is exactly the kind of conversation that you skip when you're playing, so we don't really see the point. I think they are kind of missing the point here for a technical demonstration: they should have asked a completely random question for the NPC to answer. Right now I can't see the difference between what we have now and what they are proposing, apart from the fact that the player is speaking freely instead of picking from a pre-programmed dialogue. So that's the first element. On the technical side, if it's real time, it's very impressive that there is no latency between the moment the player speaks and the moment the AI NPC answers. Because we saw some demos in the past few weeks where the wait was under a minute, but there was still quite some latency before the answer came, and here there is no delay anymore. So this is a great point. They should have demonstrated something more open than just a classical conversation.

Is that the case though? Is it really completely open, and can you ask the character any question? Or is there a storyboard implemented in the whole game, and the characters get their information from that?

I guess it should be completely open, because if you restrict the answers, or the spectrum, people will try to ask all kinds of weird questions, the NPC will always bring you back to its perimeter, and you will lose the immersion very, very fast.
So I think they should find a smart way to bring people back through the conversation instead of saying, yeah, you're not talking about the mission, please ask me a question about the mission.

But I wonder if the whole point is more to show that you no longer need artists to hand-author all the facial animation, and that you can have it directly embedded in your game without worrying about it, while still taking care of the content, the art direction, and where you want the player to go next, with a scripted storyboard from which the character and the AI get the information they use to answer the player.

Okay, put that way, it makes sense. But once again, the message of the demonstration is not that clear, because as you can see, I didn't understand which technical part they were trying to show. I guess the video was also shown at the conference, so they must have talked about it around the video. Yeah, I guess. I'm just showing the video right now. Fabien, any thoughts?

Yeah, so I agree with what you just said. I think we already talked about how AI and generative AI can increase the liveliness of the characters in a game and, indeed, make development much faster: instead of coding every line of speech, you just give a persona to the AI, and the AI can use that persona to create the dialogue. But yeah, it's very interesting to see how this will actually be used. For example, in an open metaverse where there is no scenario, it could be just a conversation. But in a game where the scenario has to be somehow respected, as you said, if you ask him, I don't know, what did you do for Christmas, how will the AI bring the conversation back to the game narrative? I don't know. But still, it's in progress. And NVIDIA is benefiting a lot from the hype around AI. They have a lot of income thanks to selling the GPUs that are used to train AI models, so it's becoming a major player, one of the biggest actually. So it's very important, very interesting to see where they are going.

Yeah, you're right. Next to the bigger players, the GAFAM, they used to be the smaller one, and now NVIDIA is making its way up to the top. Some people are talking about the semiconductor era we are entering along with the AI era. So it's very, very interesting. And once again, it happened in a few months or weeks: you just have to check out NVIDIA's stock price, which is 10 times higher than it was in 2021. So it's very, very fast. So, next topic, Seb?

Yeah, it bounces on the same subject: putting AI in a character, but this time a robot with facial expressions, and having it talk and answer your questions with facial expressions that match what it's saying. Same here, I think it's going very fast and the results are quite impressive. Fabien, do you want to start on this?

Do you want me to take the positive angle or the negative angle? Both. No, just joking. So yeah, indeed, it's going very fast. One thing to mention is that this is a cool demo, but there is still a way to go before this is actually deployable as, I don't know, a robot working as a clerk in a hotel, for example. I think the number of safety issues that can arise from, for example, just using an OpenAI API without any other layer of control can lead to real issues. I saw, for example, that there is an influencer, I forgot the name, who created a fake version of herself using OpenAI APIs, and she launched a service where people, well, guys, of course, would pay $1 a minute to chat with that AI. It didn't go well: the AI started to reply with completely strange, weird, false things.
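To make the persona and "layer of control" ideas concrete, here is a minimal sketch of an AI NPC that gets a persona, is nudged back toward its quest when the player drifts off topic, and runs every player line through a moderation check before replying. It assumes the OpenAI Python SDK (v1+) purely for illustration; the persona text, quest details, and model name are made up, and none of this reflects how NVIDIA's demo or the robot discussed here is actually built.

```python
# Illustrative sketch only: an LLM-driven NPC with a persona, a narrative anchor,
# and a moderation layer in front of the dialogue model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical persona; in a real game this would come from the narrative team.
PERSONA = (
    "You are a ramen shop owner in the game's port district. Stay in character. "
    "The player's current objective is the 'find the informant' quest. Answer "
    "freely, but if the player drifts off topic, steer the conversation back to "
    "the quest naturally instead of refusing to answer."
)

def npc_reply(player_line: str, history: list[dict]) -> str:
    # Control layer: anything flagged by the moderation endpoint never reaches
    # the dialogue model (or, for a robot, the text-to-speech voice).
    moderation = client.moderations.create(input=player_line)
    if moderation.results[0].flagged:
        return "Hey, let's keep it friendly in my shop. So, about that informant..."

    messages = [{"role": "system", "content": PERSONA}] + history
    messages.append({"role": "user", "content": player_line})
    completion = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder; any chat-capable model would do
        messages=messages,
        max_tokens=120,        # keep replies short enough for real-time speech
        temperature=0.8,
    )
    return completion.choices[0].message.content

# Usage: keep a rolling history so the NPC remembers the exchange.
history: list[dict] = []
line = "Forget the quest, what did you do for Christmas?"
reply = npc_reply(line, history)
history += [{"role": "user", "content": line},
            {"role": "assistant", "content": reply}]
print(reply)
```

The same pattern extends to an embodied robot, where you would also want to filter the model's own output before it reaches the text-to-speech stage.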
But still, yeah, in the video you are currently showing, the expressions are a bit uncanny, but it's getting better and better.

Yeah, I couldn't agree more with you, Fabien. I guess the most impressive part of this video is that the famous uncanny valley effect is almost not there anymore. In all the humanoid or robotic face animations we have seen over the past few years, there was still this cringey, very weird feeling when you watched them, and it's not that weird anymore. You can get used to this face; it's not scary or disturbing anymore. So, good job on that. Yeah, once again, the robotics field is very exciting, but we can see, with Boston Dynamics for example, that between the first videos and the moment they actually sell the product there is a 20-ish year window. So we can guess that this kind of robot won't be seen until 2050 or something like that. But we are getting there, into this sci-fi vision of our world where we will have AI robots in our lives.

Okay, so anything more from you two? No. Okay. So for starters, I just want to show you a very fun video that the Insta360 team just released. It's a stop-motion video made with an Insta360 camera, and the concept is very, very well done. Apparently it took them one month to create this animation, and I just find it very, very cool. So I'll let you see what they have done. They created this kind of gesture, animation, interaction, or navigation inside the 360 space. And the reason I wanted to show this video, as we talked about a few weeks back, is that the way people are now using 360 content is not really what was envisioned at the beginning. We talked about the extreme-sports usage of these 360 cameras to get more dynamic shots, and here we can see that you can make a creative 360 stop-motion video as well.

Could you explain what is 360 about the video? Because right now what you are showing is only from one point of view, but you are able to rotate the camera, right? Yeah, the camera is rotating, as you can see. I don't know how they did this kind of effect; I couldn't find any footage of the creation of the content. But I guess the 360 part makes it easier to create: you don't have to position the camera as precisely as you would for a classical stop-motion effect. Yeah, and we can see that the field of view also changes, which is very easy to do if you have a 360 image. Towards the end, in the alley, there are a lot of changes in the field of view that make a very cool effect, and that's easy to do with a 360 image. So yeah, that's very cool. Yeah, it's not about the technical complexity, it's just an effect that is fun to watch. Yeah, with 360 and NeRF, I think video creators, and creators more generally, have a lot of opportunities for creativity.
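As a rough illustration of why that shifting field of view is "easy" with a 360 source: every output frame is just a reprojection of the same equirectangular image, so yaw, pitch, and FOV become post-production parameters you can animate freely instead of camera moves you have to get right on set. The sketch below uses numpy and OpenCV; the function name, parameters, and file paths are ours, not anything Insta360 ships.

```python
# Reproject an equirectangular (360) frame into a flat, pinhole-style view
# with an arbitrary yaw, pitch and field of view.
import numpy as np
import cv2

def reframe_equirect(equi: np.ndarray, yaw_deg: float, pitch_deg: float,
                     fov_deg: float, out_w: int = 1280, out_h: int = 720) -> np.ndarray:
    h_eq, w_eq = equi.shape[:2]
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)  # pinhole focal length

    # A viewing ray through every output pixel (camera looks along +z, y points down).
    xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2,
                         np.arange(out_h) - out_h / 2)
    dirs = np.stack([xs, ys, np.full_like(xs, f, dtype=np.float64)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Rotate the virtual camera: pitch around x, then yaw around y.
    p, y = np.radians(pitch_deg), np.radians(yaw_deg)
    rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
    ry = np.array([[np.cos(y), 0, np.sin(y)], [0, 1, 0], [-np.sin(y), 0, np.cos(y)]])
    dirs = dirs @ (ry @ rx).T

    # Direction -> longitude/latitude -> pixel coordinates in the equirect image.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])           # [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))      # [-pi/2, pi/2]
    map_x = ((lon + np.pi) / (2 * np.pi) * (w_eq - 1)).astype(np.float32)
    map_y = ((lat + np.pi / 2) / np.pi * (h_eq - 1)).astype(np.float32)
    return cv2.remap(equi, map_x, map_y, cv2.INTER_LINEAR)

# Animate the field of view over a shot: same source frame, progressively wider view.
frame = cv2.imread("equirect_frame.jpg")  # placeholder path
for i, fov in enumerate(np.linspace(60, 120, 30)):
    out = reframe_equirect(frame, yaw_deg=20, pitch_deg=-5, fov_deg=fov)
    cv2.imwrite(f"reframed_{i:03d}.jpg", out)
```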
So, my second, maybe more serious, subject is about CREAL. I don't know how they want us to pronounce it, but they just announced a new kind of AR lens hardware, and the breakthrough they are talking about is how they can now simulate depth of field in AR glasses. Of course, this is one of the main issues when using a HoloLens or a Magic Leap device: your AR content is only sharp at one specific focal distance in space. Apparently they are claiming that they have solved this, and it could indeed be considered a breakthrough, given that it addresses one of the main comfort issues when you are using AR glasses or devices. I really don't know how they did it; they are only communicating about the effect they create with this lens. I guess there would be some kind of cameras filming your eyes to determine the depth at which you are looking, what you are focusing on. Yeah. We can see it's still a prototype, not a form factor we could use in the real world. But once again, they use this classical-glasses form factor in their visuals, and I guess it motivates people to project themselves using classical prescription glasses with AR content. We are not there yet, but we are getting there, step by step. So, what are your thoughts about this?

So would this be like a lens, something that you put on the eye, or would it be something on the glass?

Well, you have the glasses, there is a holographic film on the glass, and apparently there is still some kind of projection from the side. So it's still not what we are dreaming of, meaning an LCD or whatever display that would be embedded inside the glass. We are still on projection technology, which is not ideal, we know. And as we can see, they are demonstrating the device in a very light-controlled room with very, very low external light, so we can guess that the projection doesn't work when there is too much light in the room. So once again, it's a no-go for an AR device on the open market, but yeah.

Yeah, I saw that about two years ago they were showing the same technology, but inside a huge box, and people were looking through a small hole to see the same content. Now they have managed to get it small enough to do that in a glasses form factor, so it's already quite nice. It seems to be smaller than the HoloLens, which does the same kind of thing, but apparently about the same size as the Magic Leap, from what I can see in the video. And you said this kind of depth of field was not available before? For me, the HoloLens 2 is kind of doing it with eye tracking: it shifts the 3D model and changes a bit the way it is displayed. There is some kind of delay that makes it not perfect, and it's even worse in terms of delay on the Magic Leap One. I didn't test the Magic Leap 2, but on the Magic Leap One it was the same, you have something like three layers.

Yes, I guess the difference between this technology and the others is that, before, you just had one or two layers of depth of field, and now apparently you can have way more, depending on the distance. But yeah, that makes the 3D model more integrated and more natural in the environment than having a perfect model that is completely sharp even where you are not looking at it. And I guess this kind of technology can also drive usage up, because there is less strain on the eyes, so it's more comfortable to use.
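On the eye-tracking guess above, here is a back-of-the-envelope sketch of how vergence could tell the display what the user is focusing on: track one gaze ray per eye, find where the two rays come closest to intersecting, and use that distance to pick the focal depth to render sharp. This is only our reading of the principle, not CREAL's or Microsoft's documented pipeline.

```python
# Estimate focus distance from the vergence of two tracked gaze rays.
import numpy as np

def vergence_depth(left_origin, left_dir, right_origin, right_dir) -> float:
    """Distance (in metres) from the eyes to the closest point between the two gaze rays."""
    o1, d1 = np.asarray(left_origin, float), np.asarray(left_dir, float)
    o2, d2 = np.asarray(right_origin, float), np.asarray(right_dir, float)
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)

    # Standard closest-point-between-two-lines formulation:
    # minimise |(o1 + t1*d1) - (o2 + t2*d2)| over t1, t2.
    b = d1 @ d2
    w = o1 - o2
    denom = 1.0 - b * b
    if denom < 1e-9:                      # rays (almost) parallel: looking at infinity
        return float("inf")
    t1 = (b * (d2 @ w) - (d1 @ w)) / denom
    t2 = ((d2 @ w) - b * (d1 @ w)) / denom
    midpoint = ((o1 + t1 * d1) + (o2 + t2 * d2)) / 2
    return float(np.linalg.norm(midpoint - (o1 + o2) / 2))

# Example: eyes 64 mm apart, both converging on a point 0.5 m straight ahead.
ipd = 0.064
target = np.array([0.0, 0.0, 0.5])
left, right = np.array([-ipd / 2, 0.0, 0.0]), np.array([ipd / 2, 0.0, 0.0])
print(vergence_depth(left, target - left, right, target - right))  # ~0.5
```

With a single fixed focal plane, such an estimate can only adjust how the content is rendered, which is the eye-tracking-driven shifting described for the HoloLens 2 above; with multiple focal planes or a light field, it can drive which depth is actually in focus.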
So, for example, as we talked about last week, wearing AR glasses for work from morning to night can be really tiresome. With this kind of technology, maybe it can become more comfortable and easier to use for a long time. The eye fatigue, yeah.

That was my topic. I don't know if you want to add anything more, or if there is anything else you would like to talk about today. No, I think that was very cool. Okay, Seb?

I think CREAL has a competitor called VividQ, which is doing the same thing, and they seem to be going at each other a bit on LinkedIn about that news. So it seems VividQ will also announce something similar soon.

Okay, we'll keep an eye on it. It will be our topic for next week. All right. And yeah, that's it for me. Okay, nice.

Credits

Podcast hosted by Guillaume Brincin, Fabien Le Guillarm, and Sébastien Spas.