Welcome to episode 69 of Lost in Immersion, your weekly 45-minute stream about innovation. As VR and AR veterans, we discuss the latest news of the immersive industry. Hello, guys. Hey. So, what do you have to say during this quiet summer?

Okay, so today, I think we've already talked a bit about how brain-computer interfaces can have really nice uses when combined with virtual reality headsets. One company called Synchron, I'm not really sure how to pronounce it, has for quite a few years now been developing a way to capture commands from the brain. They are using a different approach from Neuralink: Neuralink uses actual electrodes implanted into the brain, whereas here, and we can see it, let me move a bit in the video... Yeah, so here they are passing through a vein up into the brain, and the device is not inside the brain itself but inside one of the biggest blood vessels at the top of the head. From there, they can get information from the brain back to a transmitter device on the chest, and then into a computer. They have this kind of interface that you can see here on the computer, and the user is capable of selecting and clicking, basically controlling a mouse with his brain.

Then, what they are showcasing is how they use a similar interface to simulate the pinch, which is the way you select things in the Apple Vision Pro. In this case, the user has control over his eyes, so he can look around and control the gaze direction in the Vision Pro, but he cannot use his hands. He is using the brain-computer interface to do the clicks. It's nice to see that he can engage with text messaging, because everything in the Vision Pro is done with the eyes and the pinch, so he can basically do everything. They showcase text messaging, a game, and a bit later watching movies or the news.

One thing I'm wondering is how they actually controlled the pinch in the Vision Pro. Maybe they have a partnership with Apple that gives them something like admin access to the device; I'm not sure how that works exactly. But anyway, I think it's a really nice use of a brain-computer interface and of how it can empower some people to do more with these devices. I wanted to mention something else, but I'm forgetting it right now, so I will hand it over to you guys to get your impressions. Let's start with you, Seb.

Yes, I have a question about how it behaves when you type on the keyboard. Do you have to look at the key you want to press and then think "I want to pinch now", letter by letter?

Yes. And as you can see at the bottom of the screen, you also have auto-completion, so he doesn't have to write the whole word; he can select suggested words, as you do on your phone.

And to do a selection, how does it work? Do you just tap longer at a certain point to launch the functionality, so it's time sensitive?

In the Vision Pro, the keyboard works by looking and pinching, so everything happens like that here too.

Yeah, but you mentioned selection. Selection works by holding steady for a longer time and then...

Oh, sorry, maybe I misspoke. I didn't see any text selection. Okay. Yeah.
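None of us knows how Synchron actually feeds the click into visionOS, so purely as an illustration of the pipeline described above (decoded brain signal, chest transmitter, computer, click), here is a minimal, hypothetical Python sketch that turns a continuous motor-intent score into discrete click events. Every name in it is made up for the example; it does not reflect Synchron's or Apple's actual implementation.

```python
import time
from dataclasses import dataclass

@dataclass
class ClickDetector:
    """Turn a continuous 'motor intent' score in [0, 1] into discrete click events.

    Hypothetical illustration of the last stage of a BCI pipeline: an implant
    decodes intent, a transmitter streams a score to the computer, and the
    computer debounces that score into mouse-click / pinch-like events.
    """
    threshold: float = 0.8      # score above which we treat the user as "pinching"
    refractory_s: float = 0.5   # minimum time between two clicks, to avoid repeats
    _last_click: float = 0.0

    def update(self, intent_score: float, now: float | None = None) -> bool:
        now = time.monotonic() if now is None else now
        if intent_score >= self.threshold and (now - self._last_click) >= self.refractory_s:
            self._last_click = now
            return True          # emit exactly one click event
        return False

# Usage with fake streamed scores (a real receiver would provide these).
detector = ClickDetector()
for score in [0.1, 0.3, 0.95, 0.97, 0.2, 0.9]:
    if detector.update(score):
        print("click")           # here the system would inject a select/pinch event
    time.sleep(0.1)
```

The real system presumably does far more on the decoding side; the point is only that once you have a reliable binary signal, mapping it onto the Vision Pro's eyes-plus-pinch interaction model is conceptually simple.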
But yeah, it's nice to see it progressing this way. For now, it's just a trigger for me. I would expect it to become more than that, maybe letting you launch the Netflix application directly by naming it or just thinking about it.

There are voice commands on the Vision Pro too, right? Yes, you have Siri. So combining the three gives nicer interaction than having to press everything. But if you need surgery to be able to do it, and it only does that, that's a lot to go through. I guess it's progressing, though, and we can foresee a lot of progress in this domain in the future.

Yeah. And now I remember what I wanted to say. As you might have seen at the beginning, the device is quite big, around 10 millimeters, inside the blood vessel. So they have a limited range of areas they can capture from, which is one big limitation compared to Neuralink, for example, which can go much deeper. On the other hand, the surgery here is much, much lighter; they say on their website that people can usually walk in for this kind of operation. They also mention that they are researching smaller devices so they can access other areas of the brain. So yes, as you said, Seb, for now it's limited to a click, but hopefully this kind of technology will improve a lot. Guillaume?

Yes, I guess you answered my question, because I was wondering which part of this video was real and which was staged. But apparently people are actually getting this wire into their brain, so it must be operational. I guess their goal is to recreate what we do today with BCI helmets, the ones with lots of sensors all around the head, by having a bunch of wires in different veins in different parts of the brain to capture all that information. My other question is: what do you do with the wire? Does it stay in your brain, come out at some point, and plug into a box on your chest, as you mentioned? If you have several cables, that's a lot of cables going into your box. It's obviously easier to implant than Neuralink, but I'm not sure about the use case in the end. I'm also wondering what the future is for people having this in a vein inside their head, because the bloodstream doesn't really like foreign objects; they would have to add some protection or cover the wire with tissue. So, I don't know, I'm not sure about this one.

Yeah, they still seem to have a lot of unknowns about how to scale it up and what the end goal will be. They do mention safety; I think they really focused on something that can be done quickly and without invasive surgery. Maybe that's their product positioning, I would say. And you can see here in the video the size of the external sensor that is placed on his chest.

Are those people with disabilities? Yes, he has ALS, so he cannot move his hands; he cannot type or use a mouse. And as usual, I think we already mentioned it, but there are also some ethical considerations.
Like, if they implant this product and it improves the user's life, what happens if the company closes? What happens to their devices? All the kinds of obligations such a company could have. I actually don't know; I should do a bit of research into how regulations around BCI are starting to take shape. I'm sure they exist. Okay, anything else on this one? No? All right.

So, today I wanted to talk about this AI device, a product that is now on sale, which is basically... well, for me it's weird, so I will describe it this way. It's like talking to a friend that listens to whatever you're doing, reacts to it, and sends you messages on your phone. So you interact with someone that doesn't exist, that listens to you all the time, and that reacts to what you are doing. And from their video, it reinforces your confidence in what you are doing and just comments on your daily life. Here, the guy is being trashed by the other players in the room because he's not playing well at the video game they're playing, and the AI sends him this message just to have fun with him. And here, the lady is watching a show and saying, wow, it's nice, I don't know why people say it's not a good TV show, and the AI replies: this show is completely underrated. So it just confirms whatever you are doing and saying. For me, it's a weird, addictive AI device that people could get hooked on, and it doesn't help that much. But I don't know what your thoughts are. Guillaume, maybe?

Yeah, I guess it's still the wave of personal assistants we've seen with the not-so-successful Rabbit and the other one whose name I've already forgotten, the AI Pin. So they are trying to create new devices for something that is not that well implemented yet. I'm not sure about the fact that you receive text messages; normally it should be something that talks back to you. Having to read messages instead breaks the experience, to my mind. You would just want the kind of classic interaction you would have with a friend. But once again, the idea is there: we all know the future of AI is each of us having a personal assistant or personal coach that helps us in our daily lives. Oh, and it is quite expensive as well.

But I'm also afraid it will make individuals more isolated, because we know people are finding it harder and harder to make friends now, and this kind of object won't help, since people will have virtual friends with AI instead. There is also this older trend of AI profiles; I guess we talked about it, Fabien, with Meta AI Studio, where you can create your own AI bot. We can easily imagine that such an AI bot could be integrated into this kind of object; maybe you could plug several AIs into the device and have a bunch of "friends" texting you about what you are doing in your daily life. I'm not sure AI should just be there for entertainment or to create virtual social links. I see AI as something that should lift us up, give us more information and more knowledge so we can be better, not just serve us casual chatter or validate every point we make in our lives. But we're getting into sociology here.
It's much more of a gadget here; it's lacking the features that could make it interesting. But as we know, conversational AI is just at the beginning. The real-time capabilities are only being released now, so we just have to wait a few more weeks or months to get the personal assistant as we imagine it. On that point, I saw that the GPT-4o conversational feature is rolling out to some users. I don't have it yet, but maybe we'll have it next week.

I completely agree with what you both said. AI should help us connect more with each other, not replace that. I think happiness improves at first with this kind of companion, but it plateaus, and the study didn't even look at the long-term effects of this kind of interaction. I want an AI that helps me work faster so I have more free time to spend with my friends, not a fake friend that confirms everything I do, even if it's nonsense. Like in this sample: she says she could do this kind of art herself, then she goes on, still pressing the button to make sure the device is listening to her, and says, maybe I'm too confident in myself, I shouldn't go into this and should respect the artist's work. And the AI replies: no, you are just self-aware; you are good and you could do that, trust yourself. It's a weird way of presenting a psychological companion that confirms everything you do and everything you say. That's very strange to me. Anyway, moving on.

The other thing I saw this week is that there has been a leak from the NVIDIA AI team revealing that they are using a lot of YouTube videos to train their AI models. And apparently, from the leaked Slack exchanges between employees, the attitude is very much: we will see later if there is an issue; for now, use it, we need to move on; and we'll deal with it if one day there's a problem with the licenses and with YouTube not agreeing to this use of its videos. So that's one of the biggest companies, and we're starting to see that they are using everything they can. They even mention using Netflix movies or content from other platforms as sources for their models. So we're starting to see that this content doesn't come from nowhere: it relies on licensed or public videos available on YouTube to train the models. I believe we will run into an issue at some point, but I don't know what your thoughts are. Guillaume?

Yeah, the other company, Runway, with their Gen-3 model, has exactly the same issue. It came out through a whistleblower that they were using, like NVIDIA, YouTube videos and licensed videos for their video generation model. There will be some consequences in the coming months; I guess some legal actions have already been taken against them, so NVIDIA will probably face the same. We've seen the same thing on the audio side with Udio and Suno, which are being sued by the majors right now because they used their licensed catalogs for training. And it's always the same issue: for generated video, audio, or images, we don't know which part is really created and which part is just copied from the data they have in their training database. Until we have the answer to that question, I guess we won't really know whether these models create from scratch or just lean on a little bit of what they have in their database.
So I guess the legal side will take much longer than what the technology is capable of, and I'm not really sure we'll get the answer. But we can see the real problem here: by always wanting better AI models, companies have to cross some lines to get bigger training datasets, because the public data is not enough. They are just taking everything they can to make the models better. This is basically what OpenAI did back in the day, when they took everything they could from Twitter and the other social networks to get as much training data as possible. And now the bigger companies are trying to put up limits, because they want to make some cash out of it, of course: if you want to train on our data, you have to pay. In the near future that will probably slow down training, and therefore the evolution of AI, because companies will have to pay to get their hands on data they used to just take. I don't know if stealing is the right word, but yes, taking without permission. Fabien, over to you.

Yeah. On that same subject, I found an article from April, so I don't know the status right now, but there was an interview with OpenAI's CTO and she kind of avoided the question; basically she was not sure whether Sora, the video model from OpenAI, was trained on YouTube or not. So maybe they will have a similar problem with Sora as well. And as you said, with this kind of model you need data to train on. Where does it come from? Do you generate it? They cannot just go outside and record hundreds of millions of videos, so the easy way is indeed to take them from something that already exists. It makes me wonder where Meta takes its sources.

Well, I think in the fine print, when we created our accounts on Instagram and Facebook, we probably allowed them to use all our content.

Oh, by the way, I saw something similar on Slack: there is some consent wording about some kind of data. I don't remember exactly, I can check for next week, but it's not the messages themselves; they will use some kind of data to train some AIs. I will look into it and come back to that next week. And about GPT-4o: they are releasing the conversational mode to selected users in July and August, so right now, and from my latest news all paying users should get it by October. Okay.

My next topic is the SAM 2 model that Meta released last week, I think, with a lot of showcases in the video. It's the Segment Anything Model, second version, and the showcases make clear that it's really, really powerful. I actually ran some tests on a video I shot myself; I'll show it to you after. It opens up a lot of possibilities. So here is a video I took at Pairi Daiza, an animal park close to my place in Belgium. I just had to click on the three objects here, the two rocks and the leopard, and they were directly selected and tracked across the video. The options here are basic, but you can see the potential of using something like ComfyUI to replace everything in the scene while keeping the leopard from the mask. So yeah, very, very powerful. I don't know your thoughts on that, Guillaume?
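For anyone who wants to reproduce the click-to-track workflow Seb describes outside of the web demo, the sam2 repository that Meta open-sourced exposes a video predictor. The sketch below follows the general shape of that published API, but checkpoint names, config paths, and exact argument names vary between releases, so treat it as an illustration rather than copy-paste-ready code.

```python
# Sketch of SAM 2's click-to-track workflow (shapes follow the facebookresearch/sam2
# README; exact names and paths may differ between releases).
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

checkpoint = "checkpoints/sam2_hiera_large.pt"   # model weights, downloaded separately
model_cfg = "sam2_hiera_l.yaml"                  # config shipped with the repo
predictor = build_sam2_video_predictor(model_cfg, checkpoint)

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    # The clip is provided as a directory of extracted JPEG frames.
    state = predictor.init_state(video_path="leopard_frames/")

    # One positive click (label 1) on the leopard in frame 0; more clicks or a box refine it.
    predictor.add_new_points_or_box(
        state,
        frame_idx=0,
        obj_id=1,
        points=np.array([[640, 360]], dtype=np.float32),  # pixel (x, y) of the click
        labels=np.array([1], dtype=np.int32),
    )

    # Propagate the prompt through the whole clip to get a mask per frame.
    masks_per_frame = {}
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        masks_per_frame[frame_idx] = (mask_logits[0] > 0.0).cpu().numpy()
```

Repeating the add_new_points_or_box call with obj_id=2 and obj_id=3 would add the two rocks, which is essentially what the three clicks in the demo do.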
Yeah, I did some tests as well, because I'm very impressed by this kind of image processing, something I used to do not manually but with very heavy, demanding computing algorithms, and now with AI it's a lot easier, I would say. I'm very curious to see what people will do with it. Is it powerful enough to be used on the professional side, or is it just giving ordinary people access to the kind of tools that until now were reserved for professionals? We also have to get past the wow effect: yes, it works great, you can track lots of objects at the same time, but you can see on the rock here at the bottom that there is some flickering, and that matters if this kind of thing is going to be used at a professional level. So it will have to improve again, maybe with a SAM 3, but it has opened the eyes of the general public to what this is capable of now.

Yeah. What it also opens up is training models more easily. You just click on one object and it tracks it through the video; you say, okay, this is such and such, and it can teach your AI model what it is, so you can regenerate it more easily.

Yeah. And as it's free, you have this web interface but you can also download the project, so I guess the community will embrace it. I'm curious to see what people will do with it, because this is on the video side, the classical case, but there are examples of doing this on the Meta Quest 3 as well. So what could you do in mixed reality with this kind of segmentation? We've seen some of it already; I guess they use it for object recognition. But what can we really do in mixed reality with this kind of algorithm? I'm curious to see what people are going to do with it. Fabien?

Yeah, actually, on what you just said about the Quest 3, something I had in mind as well: I really see applications if this can work in real time on a headset, or on the potential glasses that will be announced later this year. It could give more, quote unquote, intelligence to the device. You remember the example we've talked about many times: where are my keys? Maybe that part is handled separately, using a vision model, but recognizing tables, recognizing doors, or creating interesting games with this kind of real-time understanding could be really great. And, as you say, maybe it can be used in movie production as well if the quality is high enough.

Even for occlusion in augmented reality experiences, like you said, if it's running live. It would make experiences look more anchored in your space, because you could have shadows on the objects next to a virtual object, or grab a real object and see something come out of it because the system knows it's a cup of tea, for example. That could open up some experiences that could be nice, yeah.
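To make the occlusion idea concrete, here is a small, hypothetical compositing sketch: given a per-frame segmentation mask of a real foreground object (for instance one produced by SAM 2) and a rendered virtual layer, the mask decides where the real object should hide the virtual content. A real mixed-reality runtime would do this with depth information on the GPU; this is only the idea, in NumPy.

```python
import numpy as np

def composite_with_occlusion(camera_frame: np.ndarray,
                             virtual_layer: np.ndarray,
                             virtual_alpha: np.ndarray,
                             foreground_mask: np.ndarray) -> np.ndarray:
    """Blend a rendered virtual layer over a camera frame, letting a segmented
    real object occlude the virtual content.

    camera_frame    : (H, W, 3) uint8 camera image
    virtual_layer   : (H, W, 3) uint8 rendered virtual content
    virtual_alpha   : (H, W) float in [0, 1], coverage of the virtual content
    foreground_mask : (H, W) bool, True where the real object is (e.g. a SAM 2 mask)
    """
    # Wherever the real object is in front, the virtual content is hidden.
    alpha = virtual_alpha * (~foreground_mask)
    alpha = alpha[..., None]  # broadcast over the color channels
    out = camera_frame * (1.0 - alpha) + virtual_layer * alpha
    return out.astype(np.uint8)

# Tiny usage example with dummy data.
h, w = 480, 640
frame = np.full((h, w, 3), 128, dtype=np.uint8)
virtual = np.zeros((h, w, 3), dtype=np.uint8); virtual[:, :, 2] = 255   # a blue overlay
alpha = np.full((h, w), 0.8, dtype=np.float32)
mask = np.zeros((h, w), dtype=bool); mask[200:300, 250:400] = True      # pretend this is the cup
result = composite_with_occlusion(frame, virtual, alpha, mask)
```

Shadows mentioned in the discussion would work in a similar spirit: instead of hiding virtual pixels behind the mask, you darken the camera pixels on the surfaces that should receive a virtual object's shadow.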
All right, and that's it for me. Guillaume, do you want to switch to your subject?

Yes, I do. So, it would be about Meta as well, this time on the financial side. Maybe you have seen that they released their financial report on Wednesday last week, and we found out that they lost 4.5 billion dollars on their innovation work, especially the metaverse part of their business. They're still very confident about what they're doing, and they announced that with the help of AI all this metaverse work will make sense and will bring some revenue next year. So it's basically the same speech as usual. And as their overall financial results are not that bad, I guess it doesn't really matter, but if they had that 4.5 billion on the plus side, they would have really great results. So we'll see what they do and whether they keep working on this. One fun part is that during the presentation they acknowledged that Facebook is now a social network for old people, but they said the Facebook and Meta brand is getting younger through Marketplace and Meta Horizon. So it's quite funny to see that only some parts of the Meta ecosystem are bringing in youth, while the other part is for old people like us. So, that's it about Meta.

I have this other piece of news as well, about an AR creation tool, one that is honestly way too expensive for a lot of projects. The interesting part is that, with this partnership, they are providing building blocks and templates for professionals to get analytics on what people are doing, especially with augmented reality advertising. So it will, at last, bring us information and KPIs about how augmented reality is performing, especially on the marketing and communication side. We will finally have numbers; I just hope they will communicate about them. But yeah, it's very interesting to see that we can get some calculations, and specifically return on investment. Hopefully it will bring more people to the table and increase the use of augmented reality, especially in marketing and communication.

And the last one is Xreal, which announced new glasses. I'm very curious to know exactly where Xreal stands. I don't know their exact sales numbers, but if they keep creating new devices, I guess there's something there and they should be making some money; otherwise they would just keep the same product for several years. So, about all this news, guys, what are your thoughts?

Well, for the Meta and metaverse part, the Reality Labs losses, I think it's interesting to put that in perspective with the companies doing AI, for example, with the huge investments and money going into them: not many of them are making money yet either, or at least not enough to compensate for the funding. So indeed, it's a bit worrying that Meta is not making more money out of the huge investments they are making. Maybe they had very high hopes for the Quest 3; I don't know, actually, it's been a while since I've looked at the Quest 3 sales figures. It could be interesting to check soon; it will soon be a year since the release, so it could be nice to have a look at that. And on the other topic, the AR glasses, we don't really know much about what's new with these glasses; they are just more performant. Yeah, we don't have many specifications; it's just the announcement of the glasses. Okay.

I think last winter I went to an event where Xreal was present, along with Lenovo and other hardware providers, and Xreal was the one with the longest queue. People were really interested in this. So I guess the form factor is maybe a key factor for people who are not really into VR: a lot of them don't want to put on a headset yet, and putting on AR glasses is a much smaller step for them. Less embarrassing, maybe, I don't know. So yeah, that's it for me, I think. Seb?
My experience with Xreal, which was called Nreal before, is that at their booth they run the experience most of the time inside closed rooms, with a brand ambassador who guides you through the augmented reality experience. That makes the test longer and lets you appreciate the device more, and the experience they deliver is really built to demonstrate the product in the best possible way. I tried the same thing when I had the Nreal glasses at my place and it was not working that well. So I think one of their strategies is to showcase it in the best way at their booth, and that might be why the queue is long: it's also because you get something like a 15-minute test slot, so at an event like that it takes much longer for the people queuing and waiting for their turn to try the device. But yeah, I'm really keen to know what improved in this version compared to the previous one. Does it track the environment, or does it just layer something in front of you at a certain distance? In other words, is it a 3DoF or a 6DoF device? And how does the interaction work? That was one of the main issues before: you had to use your phone and point it at the 3D object in your environment, and the tracking was not great, so everything was shifting and the ray of interaction coming out of your phone was not perfectly aligned, which made the whole thing kind of weird to experience. It was quite different when they ran the experience at their booth at CES; there it worked much better, but the place was covered with a lot of printed patterns and other things that helped the tracking. So, yeah.

And for Meta, they are spending a lot of money, but I think they are right on AI and the metaverse: pushing AI first, making it better and better, will help the metaverse a lot in generating content, like we've said many times on this podcast, and in bringing in more people, who will be able to see nicer things and generate content themselves that can be exchanged between people. And depending on your interests, there will be different places where you can go and work with other people on a subject that interests you. I think they are right in that decision, but first they need to make the AI models better so that it makes sense to use them in this kind of environment. And I think that's it.

Yeah, one thing that just came up as you were saying that: Meta is open-sourcing all of these models, all of their models, sorry, for example the SAM 2 that you just showcased. So they are not making money from them, I think, or maybe they have plans for professionals, I'm not sure. But compared to the $20 you need to pay for ChatGPT or Claude, I wonder what their business model is behind that. I'm not sure.

Data gathering. Like always, stick to the plan. Exactly.

Okay, guys, so I guess this is it. See you next week for another episode of Lost in Immersion. Thanks. See you, guys. I won't be there, but good luck to you guys. That will be holidays again. Yeah.


Credits

Podcast hosted by Guillaume Brincin, Fabien Le Guillarm, and Sébastien Spas.