Welcome to episode 20 of Lost in Immersion, your weekly 45-minute stream about innovation. As VR and AR veterans, we discuss the latest news of the immersive industry. So let's go, guys, one at a time. Okay, thanks.

So today, my topic is a new collaboration between a headset maker and a company called OpenBCI. BCI stands for Brain-Computer Interface, and the product is called Galea. I'm not sure how to pronounce it. As you can see, it's a VR headset that has been augmented with a lot of sensors to capture quite a lot of things. If we dive into it a bit: there are sensors for brain waves, which you can see here. They also have motor sensors, so they can look at the way the muscles of the face are contracting. They have heart rate sensors as well, so there is a full array of sensors. And on top of that, they are using a Varjo headset, so they also have eye tracking available from that headset.

It's a research project. They are clearly saying it's not a device any consumer should purchase, because there are no commercially available apps for it currently. It's a device dedicated to researchers and developers who want to create apps or run research projects using these sensors.

I was really curious to look at some of the testing that users have been able to do with it. Some testers mentioned that the device is quite uncomfortable, because the sensors need to be pressed firmly against the head, and the front part as well, to get a good reading of the muscles, for example. But it seems like a lot of them were quite quickly able to use one of the demos, which uses the face muscles to drive a car or play a Temple Run kind of experience.

So that's it for the description of this interesting device. On the OpenBCI web page, they push the potential capabilities of that sensor array even further: they are talking about telepathy, using AI to translate what someone is thinking into actual words. Of course, for now that's framed as something for the future, but maybe it's coming.

So I'm curious to know, because I don't know if either of you has already tested a brain-computer interface like this. And I guess it's interesting to see the two directions BCI can go: one is external sensors, and the other is the Neuralink approach, really having invasive surgery to implant something in the brain, where research is ongoing. I'll start with you, Seb, maybe.

Yeah, I'm amazed by that kind of technology. I once tested an experiment where you had to wear this kind of headset and control something in the experience with it. The part that is tricky for me is always the calibration: making sure you will get the same result for each individual person, and the same way of controlling the same thing in the experience. I guess there is a lot of work to do on that part. And they don't mention any SDK or tools to do that kind of calibration.

Well, OpenBCI has a full software suite. I don't know if it covers calibration, but I would guess so.

Yeah, I guess the training part will be the most difficult, because every person has brain waves that are different from another person's. Is that what they measure with this kind of tool, to understand what you are thinking or to control anything?

Yeah, that would be the part that needs to be done correctly, so that the calibration process is easy to do and you can quickly use the headset for what you are supposed to use it for.
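For readers who want a concrete picture of what per-user calibration can look like, here is a minimal sketch using BrainFlow, the open-source SDK that OpenBCI supports (recent BrainFlow releases also list Galea board IDs). The board and API calls are real BrainFlow functions; the baseline-and-threshold logic is our own simplified assumption for illustration, not Galea's actual calibration procedure, which OpenBCI has not detailed.

```python
# Minimal per-user calibration sketch with BrainFlow (OpenBCI's supported SDK).
# The threshold logic is a simplified assumption, not Galea's actual procedure.
import time

from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds
from brainflow.data_filter import DataFilter

BOARD_ID = BoardIds.SYNTHETIC_BOARD  # stand-in board for running without hardware


def record_band_powers(board, eeg_channels, sampling_rate, seconds=10):
    """Stream for a few seconds and return average power in the five standard
    EEG bands (delta, theta, alpha, beta, gamma)."""
    board.start_stream()
    time.sleep(seconds)
    data = board.get_board_data()  # 2D array: channels x samples
    board.stop_stream()
    avg, _std = DataFilter.get_avg_band_powers(data, eeg_channels, sampling_rate, True)
    return avg


def main():
    params = BrainFlowInputParams()
    board = BoardShim(BOARD_ID, params)
    eeg_channels = BoardShim.get_eeg_channels(BOARD_ID)
    sampling_rate = BoardShim.get_sampling_rate(BOARD_ID)

    board.prepare_session()
    try:
        # 1) Relaxed baseline: the user sits still with eyes closed.
        print("Relax, eyes closed...")
        relaxed = record_band_powers(board, eeg_channels, sampling_rate)
        # 2) Active state: the user concentrates (e.g. mental arithmetic).
        print("Now concentrate...")
        active = record_band_powers(board, eeg_channels, sampling_rate)
        # 3) Per-user threshold: midpoint of beta-band power (index 3).
        threshold = (relaxed[3] + active[3]) / 2.0
        print(f"Per-user beta threshold: {threshold:.4f}")
        # At runtime, beta power above this threshold would count as "concentrating".
    finally:
        board.release_session()


if __name__ == "__main__":
    main()
```

This two-state baseline is exactly why calibration matters: absolute band-power values differ from person to person, so the threshold has to be derived per user rather than hard-coded.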
What about you, Guillaume?

Well, I'm kind of surprised, maybe. There seems to be a comeback of BCI technology this past month, or maybe this past year. Maybe this is simply the next step of VR integrating into our daily use. I saw a post about NextMind, a French company that was making a BCI headband. They were bought by Snap last year, and they seem to be doing public demonstrations now. I saw some influencers getting these headbands, trying them on, and announcing that BCI is ready for us to try, and so on. Just be careful with that. But there seem to be more and more companies bringing this to the front of the stage again, because BCI is not a new technology, of course. It's been around for maybe 20 years, with some successes and some failures. Because, as you said, Seb, it's hard to calibrate, it's sometimes not that comfortable to wear, and when you don't put it on the right way, it doesn't work as it should.

So I guess this is a good direction for capturing more information about what people are doing with their VR headsets, and letting them use their brain waves to interact. But as always: if it's uncomfortable, if it's hard to calibrate, and if the benefit of using these BCI headsets is not proven, I guess we still have to wait for something that works. At some point, I'm questioning the advantage of using this. What are they willing to bring to the table with this BCI? Is this some new kind of interaction? Because we know that when you are interacting with a BCI, it's not a very quick interaction: you have to concentrate and think about what you want to do. Clicking on a button this way would take way longer than just looking at it with eye tracking. So, do you know if they are providing some use cases for this, or is it just a technical demonstration?

Yeah. So the usage it seems to focus on is accessibility, for people with disabilities who cannot type or have difficulty speaking. That seems to be the focus of OpenBCI, and I think Neuralink is also targeting this kind of medical, healthcare, and accessibility use case first.

And an interesting fact, not really a fun one, that I've read about. As you were saying, brain-computer interfaces are not very new, and there is a story from maybe five or six years back where a company developed a BCI to detect the signals of an incoming epileptic seizure. It actually worked, so that person was able to take medicine ahead of time. But unfortunately, the company went bankrupt, so that person asked for the device to be removed and went back to the life they had before. So these devices are not like a headset you use for gaming, where if it fails you simply cannot play your game anymore: they have real effects on the day-to-day life of their users. But that's another, completely different topic, I would say.

Yeah, I understand the frustration if the device is working but the company isn't. But it raises a very good question about the market they are targeting, because I don't think there are that many people who have issues with standard interaction.
I don't know if we should look at the numbers, but it would explain why all these companies that have been making BCIs for years are not making any profit or becoming big players in the VR community. Maybe they are targeting a market that is too small to be viable, and they haven't found the true use case that would let them develop this BCI usage. I don't know if you have more intel about this, but at first sight, I think this is not a very large panel of users.

Yeah, and I wonder if it will get so much better that any user will find a benefit in using it. Like, at some point, would it be faster to think what you want to type instead of typing it on a keyboard? I don't think we have the answer right now. I think it needs step-by-step progress, and it needs to start as something that helps you: you type only the first word and, depending on what you are thinking, it finds the word you want directly, more accurately than what currently exists. So you type faster, and from there, maybe it gets better. Also a question: you mentioned that they are targeting what seems to be a small market. Do you have any idea how this kind of head-worn device could be used more broadly?

Well, I think there are two goals. The ultimate one is to have a very quick path between what you are thinking and what happens outside: think of automatic translation, typing super fast, or navigating something super fast. And the other one is healthcare and medical usage, like preventing some types of stroke or neurological diseases, I don't know exactly how to say it, that can happen. That's what I have in mind.

Okay, and another question, about Neuralink. Did you see the same news? They are trying to do it both ways, right? To get the data and information from what you are thinking, but also to send information back to your brain?

Yeah, yes. And they are targeting 2027 for a first release?

I think the ultimate goal, of course, they say, is again healthcare. But one of the ultimate goals is, you know, the merge between human and technology. They just got the authorization to proceed with human experimentation: this year, the FDA gave them the go, so they should get some results really soon. We hope they will get some results.

Okay, something to keep an eye on. All right. So, if we don't have anything more to add to this news, maybe we can go to the next one, Seb?

Yes. I have two different pieces of news about new glasses for the automotive world. First, BMW announced new glasses that allow a rider to get navigation information while riding. Like a Google Glass, but with really nice frames. Whoa, my internet connection is off, sorry about that. So the amount of information displayed is really low; it's just laid out in front of you. I found the design of the glasses interesting, really nice. But right after the video was posted, there were a lot of comments from people saying it's unsafe, that we should not do that, that riders should not wear this kind of glasses because it lays things out in front of their eyes and can hide what's on the road ahead. So yeah, the feedback on that is very bad.
So that's interesting: seeing how many people replied to it saying that it's an unsafe way of displaying information to the user. When I first saw the video, I thought the information was really small in the view and light in terms of quantity. So I wonder what you think about those glasses and that feedback.

So, I don't ride, so I don't have the riding experience to judge what this could provide and how beneficial it could be; let's trust the users on that. But something very interesting here is what you said at the beginning: the design. Finally, there are assisted-reality glasses with a cool design that one would actually want to wear. One thing I'm wondering: riders all wear helmets, of course, and helmets have, I don't know how you say it in English, something in front of your eyes. A visor. Visor, yeah. So wouldn't it be easier, maybe more comfortable, to have the information on the visor instead of on the glasses? I don't know.

Yeah, I can answer that, because I'm a rider. There have indeed been attempts at solutions with information projected directly onto the visor. The problem is that by adding electronics directly inside the helmet, first, there are safety issues, because you are compromising the structure of the helmet itself. Usually when you do this, those are external elements, like intercoms for communication between riders, which can snap off easily if you fall. The companies doing this integrated the projection hardware at the back, I guess, but there were concerns about safety. The other thing is that projection is not that effective when you are riding in broad daylight, so the results were not so good. And not everyone uses dark smoked visors; some people prefer them clear, and you can't project information onto a clear visor, of course.

About these glasses, I would have the same safety concern, because when you are wearing glasses inside your helmet, your glasses can break and you can get shards in your eyes if you fall. You may notice that visors are designed not to break and to bring some rigidity to the helmet itself.

About the information itself, I don't think it's that useful, because it's more like a head-up display, with maybe a navigation item. What could be interesting is a real augmented navigation path, with the arrows displayed directly on the road. Where the information sits now, and how it's displayed, is more of a distraction than something useful, so I understand the concerns and the criticism from other riders. When you're riding a motorcycle, your view, what you are looking at, is very important to make the bike go where you want it to go. So I'm not that enthusiastic about this. The only part I'll grant is, as Fabien said, that the form factor is really interesting: you can finally have glasses that look like glasses, not ugly or futuristic ones. On that, they are taking the right road; but on the use case itself, I'm not excited. I prefer to have a clear view of the road and stay focused on what is going on, without overlaid information that is not integrated into that view.
Once more, if we had this augmented reality navigation view, it would be awesome. In any car, the head-up display is not something I really enjoy. I don't know about you guys, if you have this head-up display of your speed and maybe some direction information, but I don't like them much.

Okay, and in terms of technique, I'm wondering how it works. Because if you focus into the distance, would you see the information blurry, since you are not looking at something right in front of you? Or do you think you would have to focus on something close to read the information?

I think you will have a different focus between what you are looking at on the road and the overlaid information, because you are looking far beyond where the image sits. It should induce some fatigue as well, because you keep switching between something close to you and something far in front of you.

I can confirm that. Otherwise, you would need some kind of eye tracking to make sure you always display the information at the distance you are looking at. That's a subject we covered a few weeks back, an innovation that could find out where your eye is focusing. But I guess they are not integrating that kind of technology here; it's a pretty simple design. Yeah, I guess, in a small form factor like that.
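To put numbers on that refocusing effort: accommodation is usually measured in diopters, the reciprocal of the focal distance in meters. Assuming, purely for illustration, that the glasses present their virtual image at about 2 m (a common choice for this kind of display, not a published BMW spec):

$$
D = \frac{1}{d}, \qquad D_{\text{display}} = \frac{1}{2\ \mathrm{m}} = 0.5\ \mathrm{D}, \qquad D_{\text{road}} = \frac{1}{50\ \mathrm{m}} = 0.02\ \mathrm{D}
$$

so every glance between the road and the overlay asks the eye to change accommodation by roughly half a diopter, and to re-converge as well, which is consistent with the fatigue described here.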
The second piece of news is about Audi, which revealed a concept car that can be controlled with a Magic Leap 2. They removed all the physical interfaces; everything is now displayed in the Magic Leap 2, and you can interact with the car with your hands. It sounds interesting on paper, but when you really look at the information they display in the headset, it seems a bit gimmicky, and the overlay seems to shift a lot. So I wonder if that's really a use case. I understand it's a prototype, but I don't see a future for that. What's your thought on it? I don't know if you saw this link or if you are discovering it right now.

Well, I saw it, and I share your point of view. It's more of a gimmick, a gadget. Audi does this often, making very futuristic concept cars and concept designs, and I guess this is just a way of showing the general public what AR is capable of. They are not targeting a real use case, I guess, not right now, because as we know, AR is not there yet. They are just willing to show what could become reality when you don't need a physical dashboard anymore, when we have the AR glasses we will very probably be wearing every day. So it's more for the buzz. And if I remember correctly, it does give some very good insight into what interfaces in cars could be; they clearly reflected a lot on this. But as a real use case, I guess we are not there yet, especially since they are using the Magic Leap 2 in broad daylight, and we know those glasses can't handle that; it's one of their limitations. It's a fun video, but I hope it won't have the same effect as the BMW video from 2010, when people were saying, yeah, okay, it's a reality, next year we'll have AR glasses in our cars and it will be awesome. So, maybe just take a little step back on this one.

Yeah, I totally agree. I think that's the purpose of concept cars: the brand shows that it can be innovative, that it's looking to the future, that it can push toward something that may come someday. But for now, I think it's only gimmicks.

What is interesting to me is what they don't show. Like with the glasses from BMW, it's about displaying information on the road, on the geometry that is really in front of you. They could use, for example, the sensors of the car and send that data to the headset, so the AR information is not generated in the headset alone but mapped onto what the car's sensors capture. The sensors exist, the information exists, so I wonder why they didn't wait and show something more like that.

Yeah, we know Volvo is doing this through their partnership with Varjo: they link the car's sensors to the headset, and it shows real augmentation of the road. It's just for their own teams to run tests, but the technology can do it, for sure.

Yeah, but in 10 years, the cars will all be self-driving, right? No? So you can question the goal here, if indeed we won't have to drive anymore. But Audi is working on VR headsets for passengers as well, so they can experience VR applications while being transported, especially with ways to reduce motion sickness inside the headset. So they are taking that road too. I guess when people are in their car and not driving, they could have some immersive experiences as well.

Exactly. There is also a joint venture between BMW and Meta; I think we can talk about it now. They are working on, like you said, making games for inside the car, being able to play while riding in it.

And the last news is about a new technique for NeRFs with better results. It's only a paper right now, no application is available to test, but the results, compared to classic NeRF rendering for generating a 3D model of an environment, are quite amazing. The videos compare the technique against all the other approaches, and, if the video is not lagging too much, you can see that every time they compare with another system, there are a lot of artifacts in the other technique, while this algorithm seems really amazing at making the rendering look realistic.

And is it reconstructing a 3D model, or is it more like, quote-unquote, just a NeRF type of thing?

It seems to be a NeRF type of thing, where you can move the camera in all directions.

Yeah, I had a question about that this week: do you know when we will have this kind of rendering, but with a triangulation-slash-meshing capability, so you get an actual 3D object out of it? Do you have any information on what they are doing, maybe using AI, to provide better meshing technology, so we could really use this kind of NeRF-based 3D scan?

Not yet, no. I talked about it in one of our podcasts, about having augmented reality using NeRFs. Yeah, it was inside an app. But we know that the limitation of NeRF right now is this meshing ability: it's a really fun technology, but we can't really use it the way we would like, for example in AR and VR.

Yeah, that's the next step: finding a way to generate a 3D model that you can interact with. But still, being able to reconstruct a scene and position your camera wherever you want, with that kind of quality, that starts to be impressive, yeah.
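For context on what that missing meshing step involves: the standard route for getting a triangle mesh out of a learned density field is to sample the field on a regular grid and run marching cubes over it. This is a generic technique, not whatever the paper discussed above does. In the toy sketch below, the analytic sphere stands in for a trained NeRF density network, and the 0.5 iso-level is an arbitrary choice.

```python
# Toy sketch: extracting a mesh from a density field with marching cubes.
# A real pipeline would query a trained NeRF's density network instead of
# the analytic sphere used here.
import numpy as np
from skimage import measure


def density(x, y, z):
    """Stand-in for a NeRF density query: a soft sphere of radius 0.5."""
    r = np.sqrt(x**2 + y**2 + z**2)
    return 1.0 / (1.0 + np.exp((r - 0.5) * 20.0))  # ~1 inside, ~0 outside


# Sample the field on a regular 128^3 grid over [-1, 1]^3.
n = 128
axis = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
volume = density(x, y, z)

# Marching cubes turns the iso-surface (here at 0.5) into triangles.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)

# Rescale vertices from grid indices back to world coordinates.
verts = verts / (n - 1) * 2.0 - 1.0
print(f"mesh: {len(verts)} vertices, {len(faces)} triangles")
```

The catch the hosts point to is quality: a mesh extracted this way loses the view-dependent effects that make NeRF renders look good, which is why interactive AR/VR use still lags behind the impressive videos.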
That's it for me. I just wanted to add that feedback. Yeah, we'll just share the link. Yep.

So, for my topic, I would like to discuss with you the new release of the Unity AI tools. They just released them, as they had been advertising: they are providing Unity Muse and Unity Sentis, which are basically a ChatGPT-like and a Midjourney-like experience integrated inside Unity. And the reason I want to talk about this topic is that the developers and artists who use Unity are, from what I see, not very fond of this technology. The reason is that Unity admitted that their AI engine learns from what developers and creators are doing in their scenes. So it's once again the same issue with AI that we are currently seeing with bigger companies, like Microsoft and the ChatGPT integration in Office 365. The problem is that once you are using an AI owned by a private company, everything you feed into that AI is not your property anymore, because it is integrated into private databases. That's a huge problem for companies, and for creators more broadly, because they are losing ownership of their creations. And right now we can see a very big step back from AI, especially when what you are providing or analyzing is a very sensitive piece of code or high-value intellectual property.

So the thing is, a lot of developers are now talking about moving from Unity to Unreal, because right now Epic Games is not on this page, and they are showing more respect toward the developer and creator communities than Unity is. We know there has been some controversy with the CEO of Unity not talking so well about his clients or users. So, taken as a whole, we can see that AI is being confronted with the reality of ownership and creativity, and users do not like having their creations taken from them at all. It's a very interesting situation right now, and I wanted to talk about it with you. I don't know what you think about this.

Yeah, I totally agree. I think we mentioned a few weeks back that we believe the way AI could really work is if it's private: you control the data, you can customize it with your own data, and it's not something that is used publicly. Because these are private companies making money from what you are doing, and if your data stays private, normally they won't be using it. It goes against the sharing side of AI, but yeah.

And on that same topic, something I've noticed recently: I sometimes use ChatGPT, and I deactivated the history, and there is only one button. So when you want to deactivate the usage of your data, you are also deactivating the history. There is a problem here, because those are two different features behind a single button. Even more, the new ChatGPT feature, Code Interpreter, which was just released a few days ago, cannot be accessed if the history is deactivated. So yeah, there are a lot of dark patterns, as we say, in AI products, and that's unfortunate.

Yeah, I understand the need to gather information on how users ask questions to the AI and what they get back, to make it better for the next iteration, and to check over time whether a new version of the AI provides better answers to users' requests: making it better through time by using the users' content to end up with a better AI.
The issue is always: is it then only used to iterate on the AI and make it better, or can people access your data and your code and do something else with it? If that's the case, that's the main issue for me.

Yeah, and that question is not answered yet. This is why it's very problematic: apparently there is some proof that the learning database can be accessed across all usage, and as it is managed by private companies, we don't really know what they are doing with it. Despite the fact that there may be external laws about this, we can't have any control over what they do with the data afterwards. And I saw that some countries are taking measures: these AI helpers or tools are simply forbidden, not allowed, in some countries.

Yeah, I saw that Steam is also... I don't think they are forbidding everything that was created with the use of AI, but they are thinking about regulating it. Yeah, it's written there: it's Valve that is not allowing AI-generated games, I guess that's what they're saying. I saw the same information, that they are filtering on whether you used AI at some point. And yeah, here: Valve appears to be removing games made with AI art from Steam, likely to avoid potential lawsuits regarding stolen art.

So this is the same topic we covered a few months back, about ownership and how art is used by AI. We know there was a judgment in the US that if art was made by an AI, you can't copyright it; there won't be any patent on it, so it's completely free, or whatever. It's a very gray zone: what do you do with this, and who can claim rights to it, since it's based on real art and real creations at some point? How do you provide financial compensation for that? Well, it seems like companies are afraid of these consequences, and they are just removing or forbidding AI content on their platforms. It's a really interesting reversal of the situation from a few months ago, in November 2022, when people were just willing to use AI everywhere. Now we can see that people are finally understanding what it implies; they are backing off this track and want more respect shown for the code and the creations.

So is it only for art creation? No, it's both for code and art. Oh, you mean the Valve restriction? The restriction, yeah. Yeah, for now, I guess it's just for art. But I wonder how they can detect art that has been made by an AI.

Yeah, there are some algorithms. I didn't try them myself, so I don't know how well they work, but there are companies and researchers building algorithms to discover whether a piece of art was made by AI or not. So there seems to be some solution for that.

And what about the story? If you used ChatGPT to generate the story of your game, the text and the voiceover? Yeah, it's a very strange path, and I don't know how they will manage that at some point. Because there is also a lot of work on NPCs, avatars that talk with you while you are playing, in a completely free way, with sentences that are not predetermined. So yeah, it's going to be a fun job to detect that kind of usage inside a game. That's a new job created by artificial intelligence: trying to find out whether there is AI-generated content. Maybe you could put in a watermark, like you have on some sound services where you can buy sounds and there is a watermark at the beginning. I don't know.
One other piece of information I would like to share with you: Unity was created in 2005, and 2023 is the first year they are earning money; their results are finally positive. It's a bit strange to see that an almost 20-year-old company is still there and didn't make any money until now. Once again, they saw a huge improvement in their market value once they announced their partnership with Apple for the Apple Vision Pro. So everything is looking good for Unity, but the community itself is not happy with what is happening, so we'll see if the numbers are impacted at some point by this angry community that is maybe leaving Unity for Unreal.

Is that the only reason, or is it also because they are not reaching the level of quality of 3D environments that you can now get on a high-end computer with Unreal?

Well, the reasons people are talking about are, first, AI, and then the respect Unity shows its community in general. Apparently, they feel they are treated as just a client, a wallet for Unity to make money from. And yes, the new CEO is a real character, so we'll see whether the relationship between the users and the CEO, and the direction Unity is taking, is welcomed or not by the community. We can see that community is the key right now for a product to be successful, so I guess they won't forget about this and will maybe show some love to their users at some point.

So, do you have anything more to add for today? I'm good. Okay, that's good. So that's a wrap for today. Thank you for your topics and discussion, and we'll be seeing you.

Credits

Podcast hosted by Guillaume Brincin, Fabien Le Guillarm, and Sébastien Spas.
Lost In Immersion © {{ year }}