Welcome to episode 64 of Lost in Immersion, your weekly 45-minute stream about innovation. As VR and AR veterans, we discuss the latest news of the immersive industry. Let's go, guys. Hello, so what's your topic for today, Fabien? Okay, so today I want to talk about Claude, the AI model from Anthropic, and its latest version, 3.5 Sonnet. I'm not sure how to pronounce it, but Sonnet. Anyway, what you can see here right now is something I generated using Claude. One of the major upgrades is a feature called Artifacts: similar to ChatGPT, you can generate code, but you can also execute that code, see a preview of it in real time, ask for changes, and it will update the code and run it again. So what I tried is to make a summary of the new Claude features using Claude itself. The first thing I did was send the URL of the news page, but as you can see, Claude cannot access content from the web directly. So instead I printed the web page to a PDF and asked Claude to generate some HTML slides from it, and in just a couple of seconds it generated a five-slide summary of the PDF. Then I uploaded our logo and asked whether it could use the color palette from the logo as the style for the slides, and that worked pretty well. As you can see here, I didn't see any changes at first, so I had to dig a little and ask for more, and I also asked it to put the title at the top. And here we have something that works, generated entirely with Claude. So you can see: it's faster than Claude 3, advanced coding, better humor, which we'll have to test, that will be fun, and better vision capabilities. I think you can see from the analysis of the logo that the colors are taken from that image pretty accurately, which is pretty cool. I also saw some demos of graph analysis: if you have a chart, Claude seems to be really, really good at reading it. So all of this is pretty impressive, and just to give you a sense of how fast it is, I have here a .json file that contains the titles of all our podcast episodes. I can just ask it to create a simple website using that .json. I asked it to use only the latest 10 episodes, and let's see what it gives us. It's replying, generating some HTML code, and in just a few seconds we will have a preview. Okay, here you go: you have the link to watch on YouTube, the links that I put in the .json, so pretty cool, something along the lines of the sketch below. Similar to what I did previously, I uploaded the logo and asked it to use the color palette. Let's see what it does. I found it interesting that it's asking whether it should use the same color palette as before. One thing I've noted when comparing ChatGPT with Claude is that the answers and the wording style Claude uses feel much more natural than ChatGPT's. ChatGPT uses a lot of convoluted words and very complex sentences; Claude is much more direct and feels more like a natural conversation. Oh, well, that didn't work as expected, that's the demo effect. Let's see if we can correct it. Anyway, we don't need to spend too much time on this demo, but I really like this new feature. I think it's really something that sets Claude apart from ChatGPT and the others. Of course the demo here is pretty simple, but you can go much further, as I mentioned before, into things like graph and chart analysis.
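To give a concrete idea of what that artifact looked like, here is a minimal sketch of the kind of page such a prompt produces. The JSON field names (title, youtubeUrl) and the placeholder link are assumptions for illustration, not the actual file used in the demo.

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Lost in Immersion - Latest Episodes</title>
</head>
<body>
  <h1>Lost in Immersion - Latest Episodes</h1>
  <ul id="episodes"></ul>
  <script>
    // Inline data pasted from the .json file; the field names here
    // (title, youtubeUrl) and the link are illustrative placeholders,
    // not the real schema of the file used in the demo.
    const episodes = [
      { title: "Episode 64", youtubeUrl: "https://www.youtube.com/watch?v=..." },
      // ...the rest of the episodes from the JSON file
    ];

    // Keep only the latest 10 episodes, newest first, and render them as links.
    const list = document.getElementById("episodes");
    for (const ep of episodes.slice(-10).reverse()) {
      const li = document.createElement("li");
      const a = document.createElement("a");
      a.href = ep.youtubeUrl;
      a.textContent = ep.title + " - watch on YouTube";
      li.appendChild(a);
      list.appendChild(li);
    }
  </script>
</body>
</html>
```

On a live site the inline array could be replaced by a fetch() of the JSON file; inside the Artifacts sandbox that network call is blocked, which is why the demo pasted the data in, as discussed next.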
Claude can go pretty far with image analysis and things like that, to help you understand data in a much easier way. So, yeah. Oh, here we go. I don't really like the colors, but anyway, we have a very simple website in about two minutes. So, yeah. What do you think, Seb? Yeah, pretty impressive. I wonder if you looked at the code, and whether it's the way you would code it, in terms of structure and how it's written. Is it the way you would do it, or would you not do it this way at all? So, here it's pretty simple, it's just a plain HTML file. What I would ask, and maybe we can ask it, is to load the JSON dynamically instead of using static data. But as far as I understand, the Artifacts sandbox cannot access the internet, which I think is a good thing for security. So it won't work here, but if I copy-paste the code, it will work on the website. Apart from that, you know, the ask here is pretty simple. In terms of image recognition, if you draw something on paper and give it as the template you want for the structure of the website, is that something it is capable of doing? You were saying it can analyze graphs, but can it also recognize shapes and structure to generate a web page that looks similar to your drawing? Yeah, I think so. I didn't test it, but from what I saw in the examples and from other people testing it, it should be able to do it. Nice. Okay, Guillaume? Yeah, I have two things. The first one you didn't mention is that it's able to generate 3D scenes. I saw someone trying to create a Doom-like 3D game with Claude, and it worked. It was just colored boxes, but the player was able to move around, there was a box standing in for the weapon, it could shoot, and it had enemies as well, roughly along the lines of the sketch after this exchange. Oh, you have the example, I guess. It's very interesting to see that it can generate 3D inside this Artifacts context, and even if it's still very rough and very simple, it gives you a starting point for possibly immersive interaction if you want to go that way. I guess this is the only AI engine that can do this; I can't recall ChatGPT being able to generate 3D content and display it. So it's very interesting to see, and especially what it will do for us in the future. As we already mentioned, WebVR, WebAR and WebXR are things that should be very interesting going forward, so if AI is able to generate this kind of content, of course we'll take it, and it will be very interesting to follow. And as with every new field of application of AI, it's still at a very early stage, so we can hope that in a few weeks or months it will be something groundbreaking. But yeah, very interesting to see. And the second thing I'd like to discuss with you: I don't know if you followed the Claude roadmap, because they released three different models just a few weeks back, and they have this progression curve for their models. And this one, 3.5 Sonnet, or however you say it, it's as if they trained a model and at some point just discovered it was better than the others. If you look at the curve of the Claude roadmap, you can see that 3.5 is just way higher than it should have been in the progression. So at this point I'm asking myself whether there is an element of luck in their training data, and whether it was a surprise to them that it was that good. I don't know what you think about this, guys.
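To make the 3D artifact idea concrete, here is a minimal sketch of the kind of scene an artifact can contain: a ground plane, a few colored boxes, and a space-bar jump with toy gravity, similar to the tweak Fabien asks for a little later. The CDN import, sizes and jump values are placeholder assumptions, not what the actual demo generated.

```html
<script type="module">
  // Minimal Doom-ish placeholder scene: colored boxes on a ground plane,
  // toy gravity, and Space to jump. Values here are illustrative assumptions.
  import * as THREE from 'https://unpkg.com/three@0.160.0/build/three.module.js';

  const scene = new THREE.Scene();
  const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 100);
  camera.position.set(0, 1.6, 5); // eye height above the ground

  const renderer = new THREE.WebGLRenderer({ antialias: true });
  renderer.setSize(innerWidth, innerHeight);
  document.body.appendChild(renderer.domElement);

  // Ground plane.
  const ground = new THREE.Mesh(
    new THREE.PlaneGeometry(50, 50),
    new THREE.MeshBasicMaterial({ color: 0x335533 })
  );
  ground.rotation.x = -Math.PI / 2;
  scene.add(ground);

  // A few colored boxes standing in for walls or enemies.
  for (let i = 0; i < 10; i++) {
    const box = new THREE.Mesh(
      new THREE.BoxGeometry(1, 1, 1),
      new THREE.MeshBasicMaterial({ color: Math.random() * 0xffffff })
    );
    box.position.set((Math.random() - 0.5) * 20, 0.5, (Math.random() - 0.5) * 20);
    scene.add(box);
  }

  // Very simple "physics": vertical velocity plus gravity, Space to jump.
  const EYE_HEIGHT = 1.6, GRAVITY = -9.8;
  let velocityY = 0;
  addEventListener('keydown', (e) => {
    if (e.code === 'Space' && camera.position.y <= EYE_HEIGHT) velocityY = 4;
  });

  let last = performance.now();
  function animate(now) {
    const dt = (now - last) / 1000;
    last = now;
    velocityY += GRAVITY * dt;
    camera.position.y = Math.max(EYE_HEIGHT, camera.position.y + velocityY * dt);
    if (camera.position.y === EYE_HEIGHT) velocityY = Math.max(velocityY, 0);
    renderer.render(scene, camera);
    requestAnimationFrame(animate);
  }
  requestAnimationFrame(animate);
</script>
```

Even something this small is a usable starting point for the kind of WebXR experiments mentioned above.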
Yeah, thank you for mentioning that, because I saw it, and indeed it's very interesting. And actually, it's a bit scary. Up to now, I don't think we, meaning the industry, the researchers and the people at these AI labs, really know what happens inside these models. So yeah, it's a bit scary to see something emerge much higher than expected. Yeah, and we have a similar issue with Stable Diffusion 3. I don't know if you saw that, but the latest model has been banned from Civitai, because the results are not good and it can be dangerous at some point for copyright and user protection, so they preferred to ban the whole family of models from their platform. So at some point we're asking ourselves whether they aren't just playing mad scientists, doing things without mastering them or knowing where it will end up. Yeah. Well, there are a lot of worries around this. Maybe you already know this, but there are a lot of worries about AI going rogue, from people like Yann LeCun at Meta, who says we will be able to control AI, to people like Sam Altman, who says, maybe to make the value of his product higher, we don't know, that AI will be very, very powerful. Yeah, it's a weird industry. And if you add on top of that all the environmental issues it's causing, because of the power consumption it demands, it's a weird time to live through. Yeah. So as we were talking, you see, I've asked it to add a ground, put some physics on it, and bind the space key to make the player jump. And it works. Nice. So anyway, in short, Claude is very, very promising with this kind of feature, so we will surely follow its developments. Seb, your turn. Nice. Yes. So today I wanted to talk about Inseye, which released a device that you can put inside your Quest 2 or Quest 3 to do eye tracking. It plugs in over USB-C, but it has a pass-through USB-C port, so you can still recharge your device at the same time. It uses photosensors and infrared light to look at your eyes and see where they are positioned. And it allows them to do foveated rendering. Apparently it's not as precise as what other headsets provide, so you won't be able to look precisely at a button and trigger it with your eyes. But the latency is one millisecond, the best latency out there right now. And for foveated rendering you don't need to be as accurate as pointing at a button, so for that it seems really good. On top of that, it consumes five times less power than a traditional eye-tracking sensor. And it costs $160, so I think that's a good price for this kind of added capability for the headset. Although all the games would need to be developed specifically with the Inseye SDK to add this functionality to the game, or to the experience you want to develop. And here there is a picture showing how it works. Basically, the sensors around the eyes track the white area of the eye to work out where your eye is pointing. That's why, I guess, it's not as precise as a camera looking directly at your eyes. All right, I don't know if you have any thoughts about that, Guillaume? Yeah, it's really interesting to see that the LED ring around your eyes is not complete.
It's just the bottom half, roughly. And compared to what we are used to with the higher-end eye trackers, it's very interesting to see it's just about 10 LEDs per eye. It seems like a very simple, very efficient way of doing eye tracking. But I guess if the precision is not that good, the main usage is foveated rendering, because we can't do much more than that, and this is why it's cheap as well, I guess. So is it worth the price just for that rendering optimization? I don't know. They are showcasing their product with the Quest 2; do you think it's compatible with the Quest 3 as well? It is, yeah. This is an old video. They did not release one for the Quest 3 yet; they just announced the product, but they don't have a marketing video yet. But yeah, it's planned for the Quest 3 and the Quest 2. Okay, yeah. Well, we'll see if people are really eager to have this eye tracking or not, because when they released the Quest 3, everyone was screaming because the eye tracking was not there anymore. So maybe this solution can bring everyone back to this technology; we'll see. It will also depend on the SDK, how easy it is to implement. And I was wondering if it's going to improve the pass-through experiences as well, the MR experiences. Is it possible to render only where you are looking in high resolution, for your augmented reality objects? Yeah, to get that result they would need a partnership with Meta. I don't know if they already have one or not, because it would have to be integrated directly in the firmware for maximum compatibility. Cool, yeah. I don't have much to add. One thing: do you know how it connects, is it going through the facial interface, or is it plugged into the USB? It's plugged into the USB, yeah. There's a cable going around the side to the USB-C port that is on the side of the Quest. Okay. Like this. Oh, okay. So I don't know how comfortable it is; maybe it will sit just next to your eyes here and pinch the foam a bit. We'll see. But yeah, interesting to see; the five-times-lower power consumption seems an interesting way of doing it. And if it could be embedded in a next generation of headsets, maybe that would lower the cost and still give an even better frame rate. Because the biggest issue I had when testing eye tracking in the Quest Pro, for example, was the latency: when I was looking at something, there was a bit of lag, and that's really not comfortable. So if they really manage to get a one-millisecond delay, maybe that can improve the experience a lot. And so the other news is this headset announced by YVR, the Play For Dream MR, a new Chinese headset positioned as a competitor to the Vision Pro. It contains the Snapdragon XR2+ Gen 2, the new generation of Qualcomm chipset. It's supposed to be released in October, it will cost around $2k, and it will not be available in Europe. They announced 4K per eye, so a total 8K resolution. They seem to have seven sensors and 22 LEDs for spatial tracking. And from the first hands-on that I've seen, the guy testing it said that the pass-through quality is much higher than the Vision Pro, in his opinion. So yeah, quite an impressive announcement. Right now he was saying that he did not have any hand tracking and he had to use controllers that were not the final ones, so they're still working on that, and the controllers will be sold separately.
Quite a nice announcement. The weight seems to be the same as the Vision Pro; however, they balanced it by putting the battery on the back, so the reviewer says it's way more comfortable than the Vision Pro, which has everything on the front. And he seems to really like the way the strap is made. So yeah, Guillaume, if you have any comments? Yeah, it is maybe the Apple Vision Pro we all dreamed of. But we have to be careful with the numbers and the reviews and things like that. I saw this, and at first, when I just saw the renders of the headset, I thought it was some kind of scam or a fake Apple Vision Pro copy. But it seems to be something real. And I'm very curious to know how it will work, because the great strength of the Apple Vision Pro is the ecosystem, we already talked about this, and the spatial computing. I'm curious to know what kind of platform this headset is built on. Android. Android? Okay. Yes. And it seems they have some screen sharing here; they are showcasing a use case with a Windows computer. Okay. So yeah, we'll see what kind of integration it is. And if the price can go a bit lower, like $1,500, it would be great. I guess the $2,000 mark is still quite expensive. So we'll see how it goes. Fabien? Yeah, pretty interesting. We'll see if it's actually released and how good it is when it ships. And I'm really curious: he says the pass-through is better than the Vision Pro, and that's a very high bar to pass. From the footage of the headset, it seems like it's actually pretty good, but it would be really, really interesting to see if they can actually surpass the Vision Pro. Maybe, you know, by removing the blur: on the Vision Pro, when you move your head, especially in mid to low light, the video is very blurry, so maybe that's one way to improve on it. We'll see. I guess the other obvious thing that could improve the pass-through is that the cameras seem better placed on this headset, because you don't have the screen for the eyes on the front, so maybe they put the cameras directly in front of your eyes, and obviously that would improve the distortion and so on. One easy, quick fix. So yeah. And Seb, you mentioned the way it fits on the head; I think it looks really natural to put on, so I really like that as well. And the fact that it's Android, so hopefully more open; hopefully they don't have a lot of layers on top of Android, so it would be more open and accessible for us. So yeah. Cool. Right, so that's it for me. Okay. So the last news for this episode is the announcement that Meta made. What are your expectations about this? Fabien? A lot, actually. You know, since we started the podcast, I think we've talked a lot about AR glasses, and I think you especially insisted on that, Guillaume. Since I've tried the Ray-Ban ones, I was surprised by how easy they are to use. Of course, this one will obviously be heavier, and maybe... One thing that I think will be critical for adoption is how wide the field of view will be, because if you have a very, very small one... Remember how we said the approach and strategy was that when you put it on, it's "wow", they put everything into having the best quality possible. Maybe Meta is also, as you said, going for the same strategy, or not: when you put them on, it's like, wow. Let's be hopeful.
It's what they should do if they want to get back into the market and be the leader again. So we'll see. But it's a see-through project, right? Yep. So they will compete with the Ray-Bans too. Project Nazare was one. So we have to wait, but I don't think they will be going the video see-through, pass-through way. So maybe that's the big question: is it see-through or pass-through? We'll see. Yeah, that would be the main question. Normally it should be see-through. There are not a lot of options out there right now. The HoloLens 2 seems like it will be discontinued this year, so the Magic Leap 2 will be the only one left for this kind of experience, and it's still a big headset and you don't get a wide field of view with it. So having another option, if it's smaller, better implemented and with faster GPU and CPU capability, that could be great. Yeah, as you mentioned, I'm very curious to see if they are going the see-through way, because it's clearly not the best way to go right now. As you said, to our knowledge there isn't a new technology on the market that could solve the field of view and the color matching and all of this. We know that the tracking is better, and the global recognition or understanding of the surroundings is better as well. But on the display side, I can't see how they could pull off that wow effect when you put the glasses on, especially now that people have tried the Apple Vision Pro and understand what pass-through can do. So yeah, maybe they have something very secret, very innovative; we can still dream. But we'll see if they go the pass-through way, because that would not really respect the vision of creating a simple pair of glasses like the Ray-Ban ones: if you go video pass-through, your view is completely blocked. So yeah, very interesting to see how they manage to merge all these messages and visions into something that hopefully is a great success. But there are lots of tricks and traps on the way there. And it's basically tomorrow: it's in the fall, just a few months ahead, so they have to be very confident about that project. And they also seem to be working on the Quest Pro 2. There is some news out there that they may release it soon, and maybe use the... I don't remember the name of the project, Fabien, the one you tested, with the mechanical varifocal... I don't remember the name, but it was catchy. Yeah. So there is some news about maybe adding that inside the Quest Pro 2 as well. So we'll see what they announce. Indeed, they are going in a lot of different directions. Okay, so yeah, Fabien? Yeah, since we are talking about Meta: I don't have it yet, but the v67 update, if I'm correct on the number, seems to add the ability to place windows, to actually do spatial computing in the Quest. So I don't know. Stay tuned. Hopefully we'll have the update next week or in the following weeks; otherwise we can just do a review of what's on the internet. But yeah, pretty curious about that as well. You just have to ask Claude what he thinks about it. Okay, guys, so I guess this is it for today, and see you next week for another episode of Lost in Immersion. Thank you, guys. Thanks. Bye. Have a good week.