Welcome to episode 39 of Lost in Immersion, your weekly 45-minute stream about innovation. As VR and AR veterans, we discuss the latest news of the immersive industry. Hi guys. First of all, I would like to apologize for the latest episode, which had some difficulties due to network instability. Hopefully we won't have any issues this time. So, Fabien.

Hello. Today I want to talk about a new device that was released a few weeks ago, and which we already discussed when it was announced: the Vive Ultimate Tracker. So first, what is this device? It's a tracker that you can wear on your feet or your hands, and it tracks its own position entirely by itself, without any lighthouse or external sensor. That's a complete change from the previous Vive tracker, which needed external base stations to be tracked. It's fairly expensive, at $200 each. The use cases shown on the website are mostly these: what is really popular right now is full-body tracking in social VR metaverses, quote unquote, like VRChat, which is very popular. There is also VR gaming: with these trackers you can build much more complete full-body gaming applications. Fitness is an obvious use case too, since full-body tracking makes the experience much more interesting. And VTubing as well: that category is expanding a lot, with many new VTubers every day, so I think they would be very interested in this kind of full-body tracking capability. So I'm curious to know what you think. As an end note, the previous Vive tracker also had a lot of uses that were maybe not anticipated at the beginning.
For example, tracking a prop in VR, like a gun in a VR game. So that's maybe something we will see again here.

Yeah, one use case I'd like to try is using it without the headset: tracking the user's movement in front of a projector and displaying something depending on the user's interaction. But also tracking an object, like you said, props, or a box in a training environment where the user needs to follow some steps in a training session and we need to track what has been done. That could be useful too. But I'm wondering how the positioning works when you don't have a VR headset, whether that's feasible or whether it absolutely needs the headset to work in sync. Also, I saw that it doesn't work fully on its own: you will need a computer and a Bluetooth dongle. We had some issues with that kind of technology on our booth at CES, for a Jeep setup where we were using Vive trackers. So I wonder whether the tracking will be reliable here; that's a question I will look forward to an answer to.

Yeah, sure. It seems like it requires a dongle to connect to the Vive XR Elite and the Focus 3. So, yeah, I don't know. Maybe you know, Guillaume?

No, I don't. But they are obviously targeting VRChat, VTubers and all those communities, judging by the pictures they are posting. More than that, they are selling the trackers in a three-pack, because they expect you to use their controllers and a headset as well; with only three trackers alone, you won't be able to do full-body tracking. So maybe we could add three more trackers to get full-body tracking without any headset. We know the community is very strong at modding this kind of tracker; it already did it with the first and second iterations of the Vive trackers. So hopefully somebody will find a way of using this for full-body tracking without any VR headset involved.
I have some concerns about this technology, though. We know that it uses inside-out tracking, like the VR headsets, and when you use those in an empty room or a warehouse, there are sometimes not enough features around for the device to track itself. So I don't know how it will behave in a plain white room without much around the place you are using it. My other concern is obviously the price: it's about $600 if you want the three-tracker package. I'm also sharing a comparison of all the trackers currently used for VRChat especially, and by VTubers, and you can see that there are lots of different options. The VRChat community is very, very powerful, so if this package is really aimed at professionals and doesn't offer that full-body tracking capability, as mentioned, I don't know if it will be used that much. Wait and see. But right now, I don't think they are checking all the boxes for success with the VRChat community.

Yeah, there are two additional things I want to mention. The first is something we saw and talked about in the podcast a couple of months back: the Quest 3 is supposed to blend sensor data and AI to reconstruct the body position from just the headset and the controllers. So that might be something the community is looking at, maybe even those who are not using VR headsets for body tracking. The other one is the progress on AI, especially by Move.ai: they just use an iPhone as a capture device and can animate an avatar from that capture in real time. So, pricing-wise, if you compare with the price of an iPhone, that needs to be taken into account as well.
But I think it's really interesting to see how hardware progress competes with software and AI progress, and which one will get the better adoption, the better user experience and the better quality. Because realistically, wearing trackers on your body, as you've shown, is very easy, and you get something like 12 hours of battery life, so it's a very simple user experience as well. There are a lot of variables to think about for people who want this kind of body tracking.

Yeah, like you said, it's also a matter of the tools they provide with the device, to allow different kinds of experiences to be set up quickly. For VTubers particularly, how do you set up five, three, or however many trackers, including one on the elbow, which I think is important for the movement? If it's easy to set up, maybe the community will switch to it whatever the price, if the quality is there, to make their videos even better for their community. At least the VTubers who are successful and have enough money to invest. Okay, that's it for me, I think. I don't know if you have anything else to add on that topic.

No, we just have to wait a bit, because it's very recent news, so we'll see how the market and the people respond to it. So I guess, Seb, you can go on with your topic, please.

Sure. On my side, I wanted to share some GPT-4 Vision applications that have been developed, different kinds of usage that are quite impressive. The first one, I don't know if you see my screen, it's black on my side. Yeah? Okay. So the first one, and we will share the link so you will be able to hear the audio, because it's mainly audio, is a narration generated by AI that directly describes what is happening in a soccer game, like a real person commenting the match. And it managed to recognize the players.
So it named the right player and exactly described what he's doing and who he's trying to intercept. There is some intonation, and the voice is quite convincing. So, yeah, it's quite an impressive use, coming quite quickly after the announcement of the GPT-4 Turbo version and linking it with the vision preview. I don't know what your thoughts are on that, if you saw the news and if you think, like me, that it's quite impressive.

Yeah, I don't know if you were going to talk about this, but there is a GitHub project someone posted where you can see your webcam feed analyzed in real time; we'll be posting the link to it as well. It processes one frame every two seconds from the webcam, or five, I can't remember. It's really impressive to see how the community is embracing these new technologies, especially ChatGPT's vision capability. It's very fun to see.

Yeah, and basically this was something the community said about the OpenAI announcement: OpenAI wasn't showcasing much of its own technology. They were just like, here are the tools, now do something with them, and we'll see what gets done. And the community answered very well: a few hours after the announcement and the release, people were already building those projects and making them available for everyone to try and improve. I guess this one was one of the first steps, and then people just used all the other tools available, merging it with AI voice generation. Of course, once again, it raises some ethical questions, because in the example you showed, I guess they were using the voice of someone who is no longer commenting, or who is deceased. We all know that some people feel nostalgia for certain commentators or TV speakers, and this would be a way of making them live again. And it's the same question for virtual actors as well.
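As a technical aside, the "one frame every couple of seconds" behaviour of that webcam project boils down to simple time-based frame throttling before each frame is sent to the vision model. Here is a minimal, library-agnostic sketch of that idea; the `(timestamp, frame)` input shape and the two-second interval are our own assumptions, not details taken from the actual project:

```python
def throttle_frames(frames, interval_s):
    """Yield only frames that arrive at least `interval_s` seconds apart.

    `frames` is any iterable of (timestamp, frame) pairs, e.g. produced
    by a webcam capture loop. Frames arriving too soon after the last
    emitted one are simply dropped, which keeps vision-API calls cheap.
    """
    last_sent = None
    for ts, frame in frames:
        if last_sent is None or ts - last_sent >= interval_s:
            last_sent = ts
            yield frame


# Example: timestamps in seconds, frames stand in for captured images.
frames = [(0.0, "f0"), (0.5, "f1"), (2.1, "f2"), (3.0, "f3"), (4.2, "f4")]
sampled = list(throttle_frames(frames, 2.0))
print(sampled)  # only f0, f2 and f4 are spaced >= 2 s apart
```

In a real capture loop the timestamps would come from `time.monotonic()` and each yielded frame would be encoded and sent to the vision endpoint.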
For example, Bruce Willis sold his own image for use in AI. So people are a bit scared of the industry not using new talent anymore, and just reusing old actors and old voices because people like them. It will be interesting to see how this plays out, but at some point maybe we won't have any new voices or new actors; we will only use the old ones, because they are famous, and that's all.

Yeah, I have many things to say about it. First, a small note about the name: when you see models named Turbo, it means they are optimized to give answers really fast. Recently, a lot of models have been released as Turbo: Stable Diffusion did one for images, and Whisper, the OpenAI speech-to-text tool, has a Turbo version as well. The combination of the Vision and Turbo models is pretty, pretty impressive. It's what we call a multimodal AI: not only text but images as well, and soon maybe videos. It already does sound: if you have the ChatGPT app on your phone, you can talk to ChatGPT. So yeah, it's really pretty amazing. As always, I still give the warning that these models do not give the exact correct answer 100% of the time, so we need to be really careful when we use them. But I saw in the video you were showing, Seb, a usage that I actually tried myself: I had a document with a table, and I needed the data from that table, and copy-pasting would not work. So I just asked GPT Vision, can you give me this table in a format that I can copy-paste, and boom, I had it, without errors. So yeah, it will accelerate the adoption of this kind of usage again.

Yeah, pretty impressive. And something we didn't talk about: the drama that happened at OpenAI a couple of weeks ago.

That's right. There was a lot of speculation about why that kind of event happened.
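The table-extraction trick mentioned a moment ago amounts to a single vision request: encode the screenshot, attach it to a chat message, and ask for a copy-pasteable format. Here is a hedged sketch that only builds the request payload (it mirrors OpenAI's chat-completions image-input format, but the model name, prompt wording, and PNG media type are assumptions, and actually sending it would require an API client and key):

```python
import base64

def vision_table_request(image_bytes, model="gpt-4-vision-preview"):
    """Build a chat-completions style payload asking a vision model to
    transcribe a table from an image into pipe-separated text."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Transcribe the table in this image as "
                         "pipe-separated values, one row per line."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }


payload = vision_table_request(b"<png bytes here>")
print(payload["messages"][0]["content"][0]["text"])
```

The payload would then be passed to the chat-completions endpoint, and the model's text reply pasted straight into a spreadsheet.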
Of course, the short answer is we don't know. The longer answer is maybe this Q*, the rumored new breakthrough, which might not be such a breakthrough, or OpenAI's idea of also moving towards creating chips dedicated to AI. So, yeah, we don't know, but it was a pretty funny story to follow.

Yeah, one of the things I found scary was the claim that it could break encryption quite easily, compared to the time it required before. So we'll see whether some encryption becomes irrelevant with this kind of technology. I can understand that it would be scary to announce that to everyone. Yeah, so that's it for me, if you want to move on to your subject.

Yeah, sure. So, here we are: Finland has announced its own metaverse initiative. They estimate that this market will be worth around 35 billion euros by 2035, so they are very willing to be a main player here. One of their goals especially is not to rely on a foreign company, particularly the biggest ones; we know they are directly targeting Meta and Apple, for example. They want to lead in a way that lets them write down the rules of how the metaverse works, and they want to involve SMEs, companies and industry to do so. It's very interesting to see that a country with Finland's image of innovation is willing to do this. We talked about the Seoul metaverse, it was last week, I guess, and it shows that it's not only Asia working on the metaverse anymore; Europe is waking up to this as well. What is also interesting is that they want it to be something that includes people from any country, and they have outlined five different topics for it: their metaverse should be a technology enabler, a business network, a metaverse society, a metaverse for health, and an industrial metaverse.
So it's basically all the topics we've identified over the past years where VR is very effective, so no surprises here; it's reassuring that they picked the same topics that are used in the industry. I'm very curious to see what they're going to do with this, because some countries, like South Korea, already have platforms available right now, so Finland is starting the race with some delay. We know that Europe can be very slow to start, and we'll see whether they can reach their goal of being one of the best metaverse players.

Yeah, I'm just wondering what the use cases will be if it's only for people who live in Finland, because I see medical health, for example. Or will they open it to the world, so anyone can go to the Finland metaverse and join it? Otherwise they limit the number of users to the residents of their country.

Yeah, from what they are saying, I guess you could connect to it like the Seoul metaverse, but you would probably be limited in some actions, especially the healthcare services, which are for Finnish people. So we'll see, but obviously it would be targeted at their population first.

Yeah, it's very interesting. We talked about, and you mentioned it, Guillaume, Japan, I think, which has its own metaverse initiative; we talked about Seoul last week; now Finland. It's interesting to see how the big corporations have their initiatives and how the governments hopefully collaborate. Because the idea of the metaverse is interoperable universes.
So, in theory, one should be able to, I don't know, create their avatar in the Japan metaverse, and then when they go to Finland, they should be able to enter the Finland metaverse with it, for example if they have a health issue while traveling there, or something like that. And for now, correct me if I'm wrong, I didn't see any collaboration.

Interoperability, I think, remains an issue here, because everyone is building their own platform and there is no way of sharing, like you said, your avatars. Or you have to do all the manipulation yourself so that your own avatar is imported into the other metaverse. That's a lot of dev work, and not that obvious for users. So I wonder how they foresee that. On Meta, you have an avatar that is already made for you on the device, and then it's up to the developer to import it. Maybe something similar could work here, in a way that is secure and unique. So, yeah, maybe. All right. Okay, great. Didn't you have another topic, Seb?

Yeah, there's a new video that has been shared where we can see a device that uses infrared light to capture the position of a patient's veins and then display them, projected back onto the skin. It looks great for this kind of use case. Since it's using infrared light, though, maybe tattoos will interfere with it; that needs to be tested.

Yeah, I don't have a lot of things to add. I also really, really enjoy projected AR; the effect is always super nice to see. So I'm curious to see how this could be packaged by a company into something that can be sold, and how it would be adopted.

Yeah, I agree. In this video, we don't see how small the device is from the user's point of view. Maybe it's a huge laser on top, which costs a lot of money. So, yeah. Okay, great. I guess that's all for today. We'll see you next week for the 40th episode of this series, and the 50th episode of Lost in Immersion.
That's some kind of a big milestone; we are approaching the 50th. So we'll see what we can prepare for next week or the one after. We know that our Gaussian splatting episode is still pending, so we'll see if we can find time to do it. Thank you guys, as always.


Credits

Podcast hosted by Guillaume Brincin, Fabien Le Guillarm, and Sébastien Spas.