Welcome to episode 59 of Lost in Immersion, your weekly 45-minute stream about innovation. As VR and AR veterans, we discuss the latest news of the immersive industry. Hello, guys. I hope you had a good week. What news would you like to share with us? Fabien?

Okay, so I have two pieces of news today. The first one is actually a couple of weeks old: the automatic room scanning feature on the Meta Quest 3. A couple of weeks back we showcased how the Vision Pro looks at the space it's in — the mesh it builds and the automatic detection of surfaces — and it seems the Meta Quest now has the same feature. I checked on my headset, which is up to date, and I don't have it, so I'm not really sure how it's rolled out; maybe it's rolled out randomly to selected users. As you can see here, the user just has to walk around. The mesh part is really similar to what you see on the Vision Pro, but on top of it there is detection: you can see doors, windows, a table, a couch. It seems to be automatic in this new update. It's really cool to have this feature, but what's the impact of it? What it says below is that a table can automatically be used as a desk in Workrooms, and here it says "couch" — all just from this automatic detection mode. So that's it for this update. I'm really looking forward to trying it myself. Seb, what do you think? I don't think you have it on yours, right?

Now that you mention it, I think I tested something like it, but it needs to be implemented inside your game. About a month ago they released an escape-game sample template where the first step is to scan a room, and it then positions different things in your environment for the escape game. I don't remember getting information about what was scanned — what was a couch, what was a TV. I think in that implementation they only scanned the walls and the environment to get shapes — boxes — so that objects dropped into the space for the escape game would avoid them. But it was working quite well, actually. It's not a default thing that appears directly in the home view where you select your apps; it needs to be implemented inside the application, from what I understood.

Yeah, here they say it's in the settings of the Quest, and I'm sure there is a step that needs to be done in each game. What do you think, Guillaume?

Not much to say — I don't have the Quest 3, so I'll trust you on this one. But in terms of functionality, that's something that was already available on the HoloLens for a long time, and it was implemented in the first game released on the first HoloLens, which was quite amazing at the time. The way the scan is done, the way the mesh is rendered, and the segmentation of the different things worked much the same way on the HoloLens. So it's great that it's coming to the Quest 3, because it really helps you make mixed reality games that fit the user's environment and are actually usable. Otherwise you need a specific room that you configure the same way every time, and you are not compatible with all the different spaces users have to play in.
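For listeners who want a concrete picture of what this scanning step hands to a developer, here is a minimal, hypothetical sketch. The actual Quest scene understanding is exposed through Meta's Unity/OpenXR SDKs rather than Python, so the data structure and label strings below are illustrative assumptions, not Meta's real API; the point is simply that an app receives labeled anchors (table, couch, door, and so on) and can map them to gameplay roles, as in the Workrooms desk example mentioned above.

```python
from dataclasses import dataclass

# Hypothetical shape of the data a scene scan could hand to an app:
# labeled, positioned volumes rather than just a raw mesh. The label
# strings are illustrative, not Meta's actual values.
@dataclass
class SceneAnchor:
    label: str        # e.g. "TABLE", "COUCH", "DOOR", "WINDOW", "WALL"
    position: tuple   # (x, y, z) centre in metres, app space
    size: tuple       # (width, height, depth) of the bounding box, metres

def assign_gameplay_roles(anchors: list[SceneAnchor]) -> dict:
    """Map detected furniture to roles an app might care about:
    the largest table becomes the desk, a couch becomes seating,
    walls and doors become blocked areas."""
    roles = {}
    tables = [a for a in anchors if a.label == "TABLE"]
    if tables:
        roles["desk"] = max(tables, key=lambda a: a.size[0] * a.size[2])
    couches = [a for a in anchors if a.label == "COUCH"]
    if couches:
        roles["seating"] = couches[0]
    roles["blocked"] = [a for a in anchors if a.label in ("WALL", "DOOR")]
    return roles

if __name__ == "__main__":
    scan = [
        SceneAnchor("TABLE", (0.0, 0.75, -1.2), (1.4, 0.05, 0.8)),
        SceneAnchor("COUCH", (1.8, 0.45, 0.0), (2.0, 0.9, 0.9)),
        SceneAnchor("WALL",  (0.0, 1.4, -2.5), (4.0, 2.8, 0.1)),
    ]
    print(assign_gameplay_roles(scan))
```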
Yeah, something I'm wondering as well is how big a space can be scanned. In a house, in a room, it seems to be fine, but can we scan a warehouse or some larger area like that? Something to try. In the flat I'm in right now, I scanned a 50 to 60 square metre space, and it was captured correctly. But then the issue is that the boundaries displayed by default still limit the space you can move in inside the game. There is now an option to deactivate that, but every time you launch the game you have to go into the menu and turn the boundary off, so it's kind of painful and not automatic. If this new functionality removes that need, that would be great, so we can really move around a big space in mixed reality. Okay, cool. Anything to add on that topic? No, just that I need to test in a bigger space to see how much it can scan. Yeah, and I will test again in a couple of days if I have that update.

The other topic I have for today, very quickly: I got my hands on this device, which is a brainwave sensor. Let me share my screen. You can see it working in real time, and it's actually very, very easy to use — there are SDKs and libraries online, and it connects to the device over Bluetooth. The original goal of this kind of device is, for example, focus and meditation: there is a smartphone app where you do a meditation session and get real-time feedback to know whether the meditation is going well. But I think there are other interesting uses, like art installations or things like that, where we can use the brainwaves as an input. For example, here you see I'm just pressing my hand on the desk, and you can see how it changes the wave — or if I blink very quickly. I'm just starting to explore this, but I think it can be used for pretty interesting stuff. It's pretty basic, but you also get the head position — and I just got disconnected. Anyway, it's very light, so it's very easy to put on, and I'm starting to quite like exploring it to see how we can use it. It's also supposed to include a heart rate sensor, so we could use that as well. It's a bit of a gadget for now, but I'm curious to discuss what you think of it. Let's start with you, Seb, as usual.

Yeah, I wonder if there is any model or specific information you can get out of it. We saw there are four different signals, shown as lines, that you get data from. I wonder how accurate they are and what they mean. Does it distinguish different parts of the brain? Can it tell whether you are reaching for something specific in your head — sentences, ideas, creative ideas — or doing an action like you were saying, pressing a button or blinking your eyes? Is there any AI model that can read that and identify which kind of action you are doing? I'm sure there is. Here it's just the raw data — alpha waves and things like that — and I actually need to do some research on it to be able to use it. It's very raw data, but it's very interesting.

Do you have access to only four brainwave channels, or more than that? Because if I remember correctly, the BCI we used just as an input for up, down, left, and right needed something like twelve different measurements to achieve that kind of thing. As far as I know, only the four.
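Since the device only exposes raw signals, here is a minimal sketch of the kind of processing you would typically do to turn them into something usable, for example an alpha-band power estimate as a rough focus/relaxation proxy. It assumes the headband (or its companion SDK) publishes its EEG as an LSL (Lab Streaming Layer) stream, which some consumer headbands do; that assumption, the sampling rate, and the channel choice are placeholders to adapt to the actual device.

```python
import numpy as np
from pylsl import StreamInlet, resolve_byprop  # pip install pylsl numpy

# Assumption: the headband's SDK publishes raw EEG as an LSL stream of
# type "EEG". Swap this discovery step for the vendor's own API if not.
streams = resolve_byprop("type", "EEG", timeout=10)
if not streams:
    raise RuntimeError("No EEG stream found - is the device streaming?")
inlet = StreamInlet(streams[0])

fs = 256          # assumed sampling rate in Hz; check the device's spec
window = []       # rolling ~1 second window of samples from one channel

def band_power(samples, fs, low, high):
    """Average spectral power in [low, high] Hz via a plain FFT."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].mean()

while True:
    sample, _timestamp = inlet.pull_sample()
    window.append(sample[0])              # first EEG channel only
    if len(window) >= fs:                 # about one second of data
        data = np.array(window)
        alpha = band_power(data, fs, 8, 12)    # alpha band, 8-12 Hz
        beta = band_power(data, fs, 13, 30)    # beta band, 13-30 Hz
        print(f"alpha {alpha:8.1f}  beta {beta:8.1f}  ratio {alpha / beta:.2f}")
        window = window[fs // 4:]         # slide the window by ~250 ms
```

A higher alpha/beta ratio is commonly read as "calmer", which is roughly what the meditation feedback apps build on; anything smarter than that quickly runs into the per-user calibration problem discussed next.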
Okay. So this is fun, because it reminds me of — I don't know if you remember — the Myo project, a wristband for analyzing muscle contraction so you could create a whole new interactive device. When we received it, we only had access to the raw data as well, and when we asked for more, they said the upper layer of interaction would come once they had gathered our data, and that in the future there would be a better interface or better knowledge of the data they were receiving. Unfortunately we never got that, because the company simply closed. I think Google or Meta are working on something like this now. And as you mentioned, building an AI layer that could predict or recognise what you're doing is a very long road to go down, so we'll see — maybe the community will be effective on this one. You didn't mention the price of this device, by the way.

It's around — I'm doing the conversion from yen to USD — I think about 300 USD. Okay, so it's not just a gadget; it's a bit more than that. And did you try it together with a VR headset? Is there room to wear both?

That's a good question. I mean, we can try it, right? Right here with the Vision Pro — live demo. I guess I'd need to put it on afterwards, and I think it would be a bit difficult: I'd have to fit it inside, and you have pressure right around the ears, where some of the sensors are, so it needs to be pressed just behind the ears. I need to test it with the Quest too; maybe that will be easier. But the form factor is very small for what I'm looking at, so I guess it would be an easy integration inside the strap if at some point Meta or Apple wanted to implement this. We know Apple is reportedly trying to get this kind of brainwave data from the — I forget the name — the AirPods. So we'll see if they can get this as well.

Yep. Just imagine the amount of work to also train the model for each person, because everyone has different brainwaves. Yeah, the calibration process for a BCI is always a pain. Very painful. It's going to be another hurdle. Okay, so do you have another topic, Fabien? No, that's it for me, thanks. Okay, Seb, it's your turn.

All right. So today I want to talk a little bit more about what Microsoft did last week when they announced the Copilot+ PC models they are releasing — the new Surface PCs, which will have an NPU chip built in. They use this chip to handle the AI part of the Copilot system, which will be compatible with all the Microsoft applications embedded in the PC. They also announced that those PCs will be equipped with the Qualcomm Snapdragon X Elite chipset. I think they are trying hard to compete — and that's what they kept mentioning during the conference — with the Mac M3 computers.
They showcased different use cases, and I thought one was quite interesting: they asked Copilot to talk about what to do in a game — it was Minecraft — and Copilot was reacting to what was displayed on screen and guiding the user on what to do in real time, with the same kind of vibe in terms of emotion and speed of reaction: "There's something coming — oh, run away! It's coming, run faster!" And this is only one of the use cases. So having this embedded inside the PC, with a specific chipset for the AI part, I think could be a game changer. I'm happy to hear your input on that — what do you think? Guillaume, if you want to start.

Yeah, sure. First of all, a few weeks back we were expecting these devices to be no more, and now we're discovering that the Surface is simply becoming AI-powered, so it's less of a surprise — for Microsoft to completely discard this line of products would have been very, very strange, so it's a bit reassuring. The idea is to take a screenshot of what you're doing every few seconds and then analyze the context of your behavior and your work so the AI can help you — a sort of master Clippy that can assist you through your activities. Of course, they say this data is kept locally and won't be sent to them for any kind of training, so you are given the choice of trusting that or not. But what is very interesting is the Minecraft part, because it confirms in some way the prediction that — you know, Microsoft has released Microsoft Mesh, the virtual reality application in Teams — and I really believe the next step in Mesh is for you to have an assistant you can interact and work with. The fact that they can do that in Minecraft really suggests it will be available through Copilot in the near future, I guess. I think the key is really the ecosystem. We can see that with Google and Gemini, which is now able to go through all the different applications owned by Google, and at the same time that gathers lots and lots of data into their shared cloud — a huge amount of data, and therefore more information to work with across all the applications Google owns. We saw that with Apple as well. So the key to success with AI is really having a broad, extended ecosystem. What do you think about this?

Yeah, so I have a couple of things that come to mind. First, with Gemini, they announced Gemini — I forgot the name — Nano, maybe, which will live and stay on the phone and supposedly doesn't need the internet connection. Here I'm really curious to know how it works — you know, the screenshot feature you talked about. They say it stays on the computer. So do they have a smaller version of GPT embedded on the computer, or is there a preprocessing step to anonymize the data before it goes to the cloud? I'm not sure. It would be an interesting feature if they managed to have an embedded, GPT-like model.
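To make the "small model embedded on the computer" idea concrete, here is a minimal sketch of running a compact open-weight model entirely locally with the Hugging Face Transformers library. This is purely illustrative — it is not what Microsoft ships on Copilot+ PCs, nor how Gemini Nano works; the checkpoint name is just one example of a small model that runs on a laptop CPU.

```python
# pip install transformers torch
from transformers import pipeline

# Example small open-weight model (~0.5B parameters); any similar
# compact causal LM would do. The choice is an assumption for the demo.
generator = pipeline("text-generation", model="Qwen/Qwen2-0.5B-Instruct")

# After the first download the weights are cached locally, so this call
# needs no network connection at all - the "on-device" part of the idea.
prompt = "In one sentence, why does on-device inference help privacy?"
result = generator(prompt, max_new_tokens=60, do_sample=False)
print(result[0]["generated_text"])
```

The trade-off raised in the discussion is exactly this: a model that small answers quickly and privately, but it is far less capable than the cloud models, which is why a local-preprocessing-then-cloud split is also a plausible design.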
And, I guess, Microsoft did something, Google did something — the next is Apple. So what will be the equivalent of that for iOS and macOS? I think we can get a hint from the OpenAI event a couple of weeks back: they showcased the macOS app, and something similar to that could assist you in what you are doing on your Mac. And actually I've seen the app available for me today — I downloaded it. The new model is not available yet, but the app itself is actually there. And I know there were some rumors about Apple and OpenAI as well, so would it come to us? Maybe. It will be very interesting to see — you know, there are very important privacy concerns for people working in defense or government or whatever, and also our own privacy.

All right. Yeah, the advantage of having it on the PC directly is to have a GPT-like model running directly on the computer and being fast — not depending on the internet connection and the quality of that connection to answer you as fast as possible. But, like you said, we don't know yet what they will say. We'll also see what kind of options there are to limit what kind of data is sent back to them. I guess they will be very careful about that, because people will be watching this kind of thing.

The other thing I want to talk about is one of the first games of this kind now available for the Vision Pro, which is Demeo. It was already available on the Quest 2 and Quest 3. It's a mixed reality application — a role-playing game you can play with other people, a real tabletop RPG where you throw your — I don't have the word, guys — dice on the table, and depending on the number you get you are able to do different actions. It's a nice thing to see, because they are also showing that you can watch and play on a tablet — an iPad — at the same time as a user on a Vision Pro. So they already thought about the fact that you won't have a lot of Vision Pro devices sitting next to each other to play with: you only need one rich friend. But it's nice to see them trying to put games on the App Store for the Vision Pro, so we'll see if that moves forward. I'd be keen to know how it looks in the device, and I'd be curious, if you get the chance — since you have the Vision Pro — for you to maybe buy it and test it, to see how it looks, whether it's usable, whether it's nice. Because of course the video here is a trailer, so it looks perfect. But I'd be keen to know.

Yeah. So, having tested the Vision Pro for quite a few months now: looking at something and pinching is great, up until you need to do something fast. So I guess, for now, until the Vision Pro has controllers of some kind, it will only suit slow-paced games, and this one seems to be a good fit for that kind of user interface. If you have, as I mentioned, very fast-paced movements to make, that could be very difficult without controllers — or you do it on the iPad. But yeah, I will try it and see how it looks; it seems to look really cool.

While you're testing that: I know that one of maybe the first triple-A experiences, made by Marvel — the What If...? application — will be released in two days. So I'd like you to get your hands on that as well. If I understand correctly, it can understand the whole environment and adapt the AR view to where you are and what you are looking at. So it will be very interesting to see what these, let's say, higher-level apps can do
when there is a real budget behind them — because we know Disney, Apple, and Marvel all have this partnership, so it should be one of the big hits. And as you mentioned, Seb, I guess this is the kind of application they want to push for people to adopt this device. So we'll see what they can create with this. All right, thanks. So, up to you now.

Yeah. So for me, once again — is my screen share working? Yeah. I wanted to show this 3D object generation: you generate the object and download it, and it's very fast. The only thing is the texture: you can see the picture, with all its colors, projected onto the surface. The other thing is that there are several adjustments you can make, like the scale, and it's very easy. It's a first step into creating 3D assets this way, which can otherwise be very time-consuming, and you can get a result very quickly. There is also the 3D generation research that was released last week, which is cool: CAT3D, which goes in the same direction — you provide images and get a 3D result out. And you can use those 3D assets inside your VR app as well; see the short asset-loading sketch at the end of this segment. We are seeing a lot of improvement over the last few weeks. So, what's your take?

That's quite amazing. Of course, it sometimes works better with objects that are, I would say, well known — things that are likely in the database of objects the AI was trained on. For everyday-life objects it works really well and achieves good end results. And just as you mentioned, instead of creating assets through full 3D reconstruction, using this technology to do it is really, I think, a hint of where things are going. So yeah, pretty cool. Just to say that there is no GitHub project for CAT3D yet, so they are just showcasing their technology — a bit like most of the big players, they are just showcasing their paper and not giving us the code or anything to try on our own. So we'll keep an eye on it and maybe we'll get the ability to test it.

Yeah, like you said, it's very interesting to see how these emerging AI techniques can be combined to improve the results. And I'm wondering, with CAT3D, whether there is any prompt-based way to modify the picture to get a different 3D look for your output. There is no information about this. But just a side note about innovation and progress: we now know that with Sora — you know, the AI movie generator — what is already known from 2D image generation can now be done with your video, meaning you just have to tag, or mark, the parts of the video you would like to change, and you can regenerate only the part you want changed. That is also very, very powerful, because we all know one of the downsides of video generation is that it's a very long process, and having to play Russian roulette by regenerating whole clips over and over would be very time-consuming and costly as well. So the ability to change some details and regenerate only that content is a big improvement, and I'm very surprised they are releasing it already. They are learning from what they are doing in the 2D world and pushing it into the video one. So it's very interesting to see that.
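On the point above about reusing these generated objects inside a VR app, here is a minimal sketch of a sanity pass you might run on a downloaded asset before importing it into an engine: checking its bounding box and rescaling it to sensible real-world units. It assumes the generation tool lets you export a GLB file; the file names are placeholders.

```python
# pip install trimesh
import trimesh

# Placeholder path: whatever mesh the generation tool let you download.
mesh = trimesh.load("generated_object.glb", force="mesh")

# Generated assets often come out at an arbitrary scale, so inspect the
# axis-aligned bounding box and rescale the longest side to ~0.5 m.
print("extents before (m):", mesh.extents)
target_longest_side = 0.5
mesh.apply_scale(target_longest_side / mesh.extents.max())
print("extents after  (m):", mesh.extents)

# Re-export for the engine (Unity, Unreal, WebXR, ...) to pick up.
mesh.export("generated_object_scaled.glb")
```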
As you mentioned, Fabien, they understand what the AI is doing, and they are taking the good parts of every different project and putting them together to get very interesting results, very fast. It's very interesting to see how it all feeds into each other. So, any last comments or news? No — as Fabien said, more tests to do, as always. Okay, so that's it for today. We'll see you guys next week for another episode of Lost in Immersion. Thanks.

Podcast hosted by Guillaume Brincin, Fabien Le Guillarm, and Sébastien Spas.