Welcome to episode 58 of Lost in Immersion, your weekly 45-minute stream about innovation. As VR and AR veterans, we'll discuss the latest news from the immersive industry. Fabien, Seb, welcome. Fabien, if you want to start as usual, what is your topic for today?

I have two topics today. One is a follow-up from last week, where there were a lot of announcements around AI, and the other one is about a possible new device from Meta. Let's start with the AI topic. Last week, we discussed how OpenAI made a lot of announcements, especially the GPT-4o model, just before Google I/O, Google's conference. So what actually happened during Google I/O? Actually, a lot. Just to put that in context, Google only does this kind of conference once a year, while OpenAI makes announcements as products are released, so they do a lot more. It's not surprising to see so many announcements at once from Google.

Basically, to summarize, they announced pretty much the same things as OpenAI. In particular, if you can see this video here, it's very similar to the demo OpenAI did during their release: you have a phone assistant that analyzes in real time the video it's seeing, and you can discuss and interact with him, her, it... I don't know, what should we say for an AI? The capabilities seem to be pretty similar. Of course, it's impossible to say how it performs in the real world until we all get our hands on it. I will skip ahead. So, this is the printer. You see here, if the video actually works, that the user switches to smart glasses, I don't know which model, but the AI model also works through the glasses. Let me try that again. So, it's called Project Astra. Just before that, there seems to be some kind of memory and spatial reasoning, because the user asks "where are my glasses?" and the AI replies with the location of the glasses.

And that's not the only announcement. There's Gemini Nano, a model that will be embedded on the phone itself, so there won't be any need for a cloud connection. There's video generation similar to OpenAI's Sora, called Veo, and a DALL-E competitor called Imagen 3. Gemini, the name of Google's AI, will have access to Gmail, Google Docs and your pictures. It will also be in Google Search itself, helping the search engine. So, yeah, basically a lot of announcements. I will stop there, and we can discuss this first part.
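As a concrete aside on the assistant demos discussed above: sending a single camera frame plus a question to Gemini from an Android app takes only a few lines with Google's generative AI client SDK for Kotlin. The sketch below is a rough illustration, not Google's demo code; it uses the cloud-backed SDK rather than the on-device Gemini Nano path, and the model name, API-key constant, and prompt are placeholders.

```kotlin
import android.graphics.Bitmap
import com.google.ai.client.generativeai.GenerativeModel
import com.google.ai.client.generativeai.type.content

// Cloud-backed Gemini model; Gemini Nano's on-device path uses a different, system-level API.
val model = GenerativeModel(
    modelName = "gemini-1.5-flash",          // placeholder model name
    apiKey = BuildConfig.GEMINI_API_KEY      // hypothetical build-config constant
)

// Ask one question about one camera frame, the smallest unit of an Astra-style assistant loop.
suspend fun describeFrame(frame: Bitmap): String? {
    val response = model.generateContent(
        content {
            image(frame)
            text("What am I looking at, and where did I last leave my glasses?")
        }
    )
    return response.text
}
```

A real assistant would of course stream frames and audio continuously; this one-shot call is just the smallest building block of that loop.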
Maybe Seb, as usual, what do you think?

Sure. So, yeah, it was expected that they'd showcase something similar to what GPT-4o had shown just before. It's interesting to see the glasses option, which makes more sense here. Rather than always holding your phone in front of you to film and ask the AI to do something, if you wear glasses, it's much easier to interact with. And I think they also talked about Project Elia, which is the code name for the AR glasses. They did not talk much about it, but it seems to be part of their toolset, so we'll see when they really announce it. But, yeah, it's interesting that they're going in the same direction, and the capabilities seem very similar. Now we need to get our hands on it to really compare performance and behavior. But, yeah, interesting.

Yeah. Guillaume?

Yeah, just about this AI assistant, and a little note about GPT-4o: I don't know if you noticed, but the voice of the AI assistant is way better on GPT-4o than on Gemini. The voice still sounds like an AI on the Google side, and not so much on the GPT-4o side. Just as a funny note, I don't know if you noticed, but the voice presented during the GPT-4o event sounded very much like Scarlett Johansson, and they had to pull that voice because it was too close to the real one. It was indeed an Easter egg for the movie Her, and, yeah, there were complaints, and that voice is not available anymore. So that's the funny side of it.

But, yeah, as you mentioned, GPT-4o seems more advanced, but I really like the use case with the smart glasses because, as we mentioned last week, the smartphone is now outdated for this kind of usage. Is this the comeback of the Google Glass, number three, at some point? I don't know, but it's a shame, because it's exactly what they need right now, and we know they cancelled that project about two years ago. Bad timing, but they can still take it back out of their drawers and find a new way of selling it.

That's pretty much it on the Google side, but on the Gemini note, you already have access to Gemini inside YouTube, Google Travel, and so on. In the new version of Chrome, if you type the arobase, the @ sign, in the search bar, you can ask Gemini directly; it sends you straight to the Gemini page, and from there you have access to all these ecosystems. I didn't try it yet, I have it on my schedule for today, so I'll give you my impressions of what it can do with your Google ecosystem. Does it create playlists on YouTube for you? Does it build routes if you are planning to travel somewhere? It will be very interesting to see how they manage to bring all of this into the Gemini ecosystem. But once again, I find it very scary to know that an AI can have access to my email.

Okay. So, I have something even scarier for you. I don't have it here on the shared screen, but today, or yesterday, depending on where you are on Earth, Microsoft announced an AI-powered computer that regularly takes screenshots in the background, it's literally that, powered by OpenAI GPT-4 or GPT-4o, I don't know. You can then search and get knowledge from all of these screenshots, as if the AI gathers knowledge of everything you did on your computer. It's supposed to be all local on the machine, but, you know, it's a bit scary.

Okay. Do you have anything more on this topic, Fabien?

Yeah, just a real quick note: for us software developers, the naming conventions for GPT are very confusing. They are not naming it with plain versions the way Gemini is at 1.5 now and maybe 2 later; they name it with regard to the actual capabilities of the model. So GPT-4 and GPT-4o are different models, different versions, but with basically the same capabilities. If they release GPT-5, that means they really made a huge leap in what the AI is able to do; it's not just a version update. So, yeah, I wanted to mention that quickly.

So, Seb, it's your turn.

Yes. Today on my side, I also wanted to talk about the Google I/O event, where they announced that they're bringing the ability to display augmented reality content into Google Maps.
So, they're releasing it right now in two locations, Singapore and Paris, and they are releasing the tool for anyone to create content. This is the video they shared for Singapore, where, at different locations, you use Google Maps to find your position and you can use the augmented reality button to display augmented reality content on top of buildings in your environment or in a park. There are several locations around Singapore that are already augmented or being developed. So, this is for Singapore, and there is a video for Paris as well. This is all simulated, of course, but that's what you are supposed to be able to see, so I will check it out when I go to Paris. At a location, when you look at specific buildings, like the Eiffel Tower here, you can discover how it looked in 1900, which is quite interesting. You can also share your location and your content, and people in Street View can see the augmented reality content too.

The way to create this is to use Adobe Aero or Unity directly to import your glTF content, and you can add interactions like sound and simple buttons, pop-ups and information panels that get displayed. So, quite simple animations, but you can prepare them and share them with all the users. I did not check it out yet, so I need to dig in, but it seems you can generate a QR code and send it to someone. So, quite an interesting feature, and it's great that it's going to be available inside the default Google Maps application. It's going to open up a lot of new content to be developed. What do you think of that, Fabien?

Yeah, thanks. It's very nice to see Google make that move. I mean, they already had the platform, you know, the location, the camera, the view, the geolocation, so I think it was, quote-unquote, easy for them to move into that area. I'm wondering how this will be used first; it seems like it's a lot of culture and art. I don't think we've mentioned it before on the podcast, but the Google Arts and Culture platform is actually amazing. You have a lot of content, a lot of games for people of any age to play and to learn more about arts and culture, and it seems like they are leveraging this. So, is this platform's goal to be a cultural and hospitality platform, or will they also accept brand experiences in it? I don't know. It would be interesting if they do. But yeah, I'm pretty interested by this.
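As a side note on how that kind of world-anchored content works under the hood: these Maps experiences rest on geospatial anchors, where you pin content to a latitude, longitude and altitude and the platform keeps it registered to the real building. Below is a minimal sketch with ARCore's Geospatial API in Kotlin; the function names and the Eiffel Tower coordinates are illustrative, and tooling like Geospatial Creator wraps this kind of call so creators don't have to write it by hand.

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Config
import com.google.ar.core.Earth
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Turn on the Geospatial API for an existing ARCore session.
// (A fresh Config is created here for brevity; a real app would reuse its existing config.)
fun enableGeospatial(session: Session) {
    val config = Config(session)
    config.geospatialMode = Config.GeospatialMode.ENABLED
    session.configure(config)
}

// Pin an anchor to a real-world latitude/longitude/altitude (WGS84).
// The coordinates below are illustrative; a renderer would draw the glTF asset at this anchor's pose.
fun placeEiffelTowerAnchor(session: Session): Anchor? {
    val earth: Earth = session.earth ?: return null
    if (earth.trackingState != TrackingState.TRACKING) return null
    // Identity quaternion (qx, qy, qz, qw) = no extra rotation.
    return earth.createAnchor(48.8584, 2.2945, 130.0, 0f, 0f, 0f, 1f)
}
```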
All right, the other topic I wanted to talk about on my side is this paper from the University of Chicago, from Yudai Tanaka, sorry if I don't pronounce his name correctly. They are showing a new device that gives the user feedback directly in the brain, without equipping their hands with anything, using transcranial magnetic stimulation, the impulse of a magnetic signal into the brain. It seems like quite an amazing piece of work. Right now the device is fairly big and needs to be made much smaller to be usable, but if it works, that's a huge improvement in how we can provide feedback to the user. So, Fabien, I don't know if you saw that and if you have any feedback on it. Maybe Guillaume, over to you.

Yeah, sorry, we have some sound issues; we can't hear one another. But first, just a word about Google Maps: their announcement is not that big, in the sense that those technologies were already announced at last year's Google I/O. I guess the new part is that Street View is now augmented, and maybe also the ability to get it more easily inside Google Maps, instead of all the development work you had to do last year. Okay, so that was my take on the Google Maps AR part.

On the brain stimulation, of course, it's very intriguing. It's a bit scary as well if you have to have a magnetic wave sent into your brain. But despite that detail, it is very interesting to see that we might be able to stimulate haptic feedback through our heads, meaning we won't need exoskeletons and very heavy equipment anymore to feel this force-feedback effect. So, yeah, that's it.

Yeah. I'm wondering, are we seeing the first steps, with this kind of device or Neuralink, toward the real simulation hypothesis, where we just plug in the brain and we are in a completely hyper-realistic virtual reality? It's a bit scary, yeah. And I agree with you, it's still a very impressive technology. Seb?

Yes, that's it for me. I agree with you, it's impressive. I can't wait to have more feedback on it and how people feel when using it. Maybe I wouldn't be the first to test it, because I don't know how it can impact the brain, but let's see how it works first. It's amazing progress if it works the way they are showcasing it. And that's it for me.

Okay, so I'll go on with my topic, which is a bit controversial, I think. Let's dive in. So, I would like to talk about... why do I have my desktop here? I don't know if you can see my screen; it seems to be black for me. So I won't have the screen share for this, sorry about that. But, yeah, I wanted to talk about an article that has been shared a lot lately, about Stanford University being, quote-unquote, able to build an AR holographic display the size of normal glasses. In this article, you have all the buzzword keywords that are very effective right now: holographic, augmented reality, artificial intelligence, stereoscopic. It's a whole pile of buzzwords. And when you read it in detail, you can see that they're announcing small, normal-looking glasses able to do AR, but you don't get any information about the resolution, the field of view or anything else. They are just comparing themselves to what seems to be the norm now, meaning video pass-through. They are just saying that instead of using video pass-through, they are now giving us the possibility to see through the glass. They are completely ignoring the HoloLens 2, the Magic Leap and all that technology as well. So it's not a very honest article, I think. Once again, you have pictures of the small glasses, but behind them, you know there is a whole device needed to drive the display in the glasses. And they are using waveguide technology, which is already used in the HoloLens 2 and the Magic Leap. They are just adding the fact that it's nanotech. Is that another buzzword or a real technology? We don't really know. So be very cautious with this kind of article, because, once again, the media ecosystem is very emotional. Be very, very cautious with what you read, especially the headlines. Always read the article, to be honest. Yeah. So they are not even...
I'm sorry, I didn't look at the article, even though I saw it going around, and I had some skepticism as well. They don't even showcase this technology, right? It's just... yeah. Okay. I get the feeling it's a bit similar to what happened with contact lenses. About a year ago there were a lot of articles, and a lot of companies saying that they can do it, or that they are developing, or getting towards, the technology that will allow AR contact lenses. And, yeah, it feels like: oh, we built this cool technology, and then there is a huge gap between this cool technology and an actual product that we'll be able to wear. That's the feeling I have, but it's just a feeling, because I didn't look at the article. Yeah. Seb?

Yes, like you said, Fabien, most of the time it's the manufacturing part that starts to block things. They manage to make one true device, one or two prototypes, but then they have to really industrialize the process to make real glasses that someone can buy without costing too much, and that's where it becomes a painful process that takes a long time. Plus, they're only talking about the display; then you have all the sensors and all the other things to mount around it and calibrate. So it's still maybe ten years before we see this being integrated into devices. Now, we can be surprised, and a company can take the lead quickly on that. We'll see, I guess.

Yeah. And I think you mentioned this to me, Seb, a couple of months back. I forgot the name of the device, but there was one where you put your iPhone in and it works with a kind of mirror, and the main issue you've seen is the latency between reality and the virtual content, which is not a problem on the Quest or the Apple Vision Pro, because they can delay the virtual content since they can delay the video. But here, there is also the rendering problem, right?

Yeah, exactly. With that kind of lens you manage to get something quite impressive, but it's low-end in terms of what you can display and the quality of the colors and things like that. And to keep up as you move, it needs to be a very fast device, a fast process to get your positioning and your location. For me, only the Magic Leap 2 and the HoloLens are managing that correctly; other devices that are doing real see-through right now are not working that fast.

Okay, guys. So, I guess this is it for today. I hope we don't have much...

Sorry, yeah, sorry about that. I have a small thing about Meta.

That's okay. Yeah.

So, I saw this today, actually a couple of hours ago, on Twitter: the Meta Quest 3S, which is supposed to be the cheaper version of the Meta Quest. It seems like it's a blend between some parts of the Quest 2 and some parts and features of the Quest 3. We have this kind of three-lens layout at the front, with tracking cameras and pass-through cameras, and they will use the Quest 3 controllers. Until they actually release it, it's very difficult to know if this is accurate, but to me it seems like just a cheaper version of the Quest 3 and not much more. I don't know. What do you think, Seb?

Well, we'll see if it's the final release, if it's really accurate or not. But for me, it's strange that they go back to Fresnel lenses. I would have thought that at least this part would be updated for the Quest 3S, for the lower-end version of the device they wanted to do, because that's what really improved the experience for me compared to the Fresnel lenses.
When you start to use pancake lenses, you don't want to go back to Fresnel ones. And with their three positions, if they really do that, if there are only three positions you can set, then having Fresnel lenses is also an issue, because you won't be able to adjust them perfectly to the center of your eyes. You will lose a bit of quality if you are slightly outside the three provided positions. So I don't know if it will help a lot in increasing the number of people regularly using VR devices. And the same for the pass-through cameras: if the quality is below what we have with the Quest 3, then for me it's not that usable.

Okay. Guillaume?

Yeah, I guess this is the second or third time we've seen this design, so I guess we are close to what it will be. I would be very cautious about the specs; as Seb mentioned, they are quite surprising. It's more of a step back, or maybe, yeah, a very cheap version of the Meta Quest 3. And I don't really know what kind of audience would buy this, because the Quest 2 is already very cheap, and the only thing that would be better here is the color pass-through, with a worse resolution than on the Meta Quest 3. Those who really want this video see-through, pass-through experience have probably already bought the Quest 3. So the only way this kind of device makes sense would be if it were the same price as a Quest 2 right now, and I don't think that will be the case. So, yeah, I don't really know what they are trying to do with this. We know that Meta sometimes gets lost in its different ranges of devices. So, I don't know, I'm really skeptical about this one.

Okay, guys. So, I guess this is it for today. I hope we won't have the sound issue we're having right now next time, because we can't hear one another, so it's a very, yeah, special way of doing the show. See you guys next week for another episode. And, yeah, see you. Bye. Thanks. Bye.

Podcast hosted by Guillaume Brincin, Fabien Le Guillarm, and Sébastien Spas.