Welcome to episode 69 of Lost in Immersion, your weekly 45-minute stream about innovation. As VR and AR veterans, we discuss the latest news from the immersive industry. Hello, guys. Hello. Hello.

So, it's summertime and there is noticeably less news than usual, but what do you have to talk about?

Yeah, I actually have one piece of VR news and one piece of AI news. The VR one is a bit old, but it's about Roblox, the quote-unquote metaverse of Roblox, and IKEA. They launched a new space in Roblox where they say they will actually pay some players to advise and help other players in that store. So I went to the store; we can have a look. Yeah, here. I didn't see anyone, but maybe I didn't come at the right time to see the real assistants. Basically it's a classic IKEA store, with the kitchen counters and so on. There seems to be a cafeteria here, and on the other side you have a showroom with actual products. You can see it looks similar to other IKEA stores, with the path you have to follow through the showroom. I didn't experience the advisors when I visited, but I thought it was nice to see another kind of economy going into the metaverse, where actual humans, not AI, help other humans in this kind of brand activation. So yeah, I found it an interesting piece of news. What do you think, Seb?

I'm wondering, do you see the pricing of the products, and what can you do with them? Can you buy them, and is it linked to the online website?

That's a good point. I don't know. They have collectibles here, but those are Roblox collectibles. It seems weird to me to ask questions about products that are not shown the way they really are. They are 3D models without much texture, so it's hard to really judge: you have the shape, but not the materials and fabrics you would get in the end with your product.

And if you can't buy it, I'm wondering what the point is for a user to go there.

Yeah, that's a very good point I hadn't thought about. I don't know. There seems to be some kind of game in there; you can see you have a score, 0 out of 500, so maybe there is stuff to collect or find in the world. But I completely agree with Seb: what's the point of doing this if you can't buy anything?

You can create your own kitchens and rooms with the IKEA software; it would have been nice to be able to do that in this environment. I'm afraid this is once again just a marketing, very high-level experience, meaning you don't have much to do. It's just for the marketing: "we created some kind of IKEA in the metaverse." They didn't push the concept to where it would be interesting for everyone. You told us it was somewhat old news, so maybe when they announced it there was a real agent, and now that the communication push has passed, they are not maintaining it much.

Yeah, that's possible. There seem to be a lot of people walking around the place, though, which is weird to me.

That's been my experience so far in Roblox: there are a lot of people, especially in the well-known rooms. I'm not a heavy Roblox user, but every time I see some news and go test a room, I always find people there. So I think it's a very highly used platform.

Yeah, it's clearly a highly used platform.
There are two people who have been sitting on the sofa since you entered the environment. I don't know what they're doing there.

I can sit there as well. Okay, I'll do the podcast from here.

Okay. Yeah, it could have been nice to gather the different objects, select the ones you want, create your own room, and invite people, or run a contest for the most creative room, things like that.

We are giving them a lot of ideas for improvement. Free of charge, yeah. Okay, so I will do a bit more research into what exactly happened. Maybe, as you said, Guillaume, the campaign is over.

Cool. The next piece of news I have is much more recent because, if I'm correct, it was announced yesterday or in the past few days: Meta's AI Studio. For now it's only available in the US, but it's cloud software that lets you create an AI bot based on yourself, or create other characters. So if you are a creator, you can have an AI twin of yourself, or you can create any character you like. For creators, the advantage they push forward is that you are able to do more: people can engage with your AI at any time instead of engaging with you, so you have more time to actually create. And for characters, you can basically customize and create whatever you want. We'll be watching closely to see how it's used and how successful it is, because they are not the first to do this. One of the most popular is Character.AI, which has a lot of characters that are not based on creators but are fake AI characters with personalities, and they also recreate famous people: you can chat with the AI version of Elon Musk, for example, on Character.AI. And the amount of usage they have is pretty impressive. I found that they handle around 20,000 messages per second on their platform, which is an incredible quantity of usage. So this could be great, but it seems a lot of people are actually getting addicted to these kinds of characters and spend a lot of time chatting on the platform. I think the most famous post is this one on Reddit, someone explaining that they are addicted to Character.AI, and below it there are a lot of comments from people who resonate with this. And apps like this keep multiplying. We'll see. What do you think, Seb?

Just a question: right now it's not available, right? It's available only to develop and test in the US, but you can't deploy it on, like you said, WhatsApp, Instagram, or Facebook?

I wasn't able to test it, but I think it's available, yeah. I tried it with a VPN and it wasn't available for me, so yeah. Okay. Yeah, people are starting to get emotionally attached to these AI characters, the way they answer and behave, and that's scary. Okay. And Guillaume?

Yeah, I just tried the link to create a new character, or your clone, and the link is just broken right now. I don't know if it's a server issue or if they are not ready yet, but you get the main page, and once you want to try it or create something, I get an error.
So yeah, we'll have to try it later. I should be able to get access if it's for the US, because usually Canada is included, so I'll see if it's available later on.

Yeah, about Character.AI, we've talked about this before, especially when we mentioned the Butterflies social network, a platform where AIs and humans share experiences. There was a lot of buzz around that new social network, but I don't know where it's going. And this is exactly what Meta is trying to do. I guess they tried letting people create fake AI characters, and you have a lot of them now on Instagram. Since people are not pushing back, and are just embracing the fact that some accounts on Instagram are not real people, it doesn't bother most users, so Meta is continuing in that direction, taking a step further with conversational AI that users can talk to.

I just looked at Character.AI again, because it had been quite some time since I visited the website, and I see you can now have a psychological interview with an AI supposedly trained to help you. I find it a bit weird to find this kind of service on this kind of platform, which is more for entertainment, or social entertainment. So I find it a bit scary. Maybe it's not the place to get psychological advice; it's a bit odd, this one. But, as an old person, I don't get why people would be interested in talking to an AI. Apparently I'm too old for this. We'll see where it goes, but I guess it's one more step toward not knowing who we are talking to on the Internet, where everything could be fake at any time. We are approaching a time when everything will be suspect, and I don't know what the Internet and social networks will become with this kind of dynamic.

Yeah. So, I tried multiple personalities on Character.AI. You have dating coaches, characters from anime or movies. Maybe, like you, Guillaume, I'm too old for this, but I quickly found that the first 20 or 30 messages are actually quite fun, and then it starts to loop, kind of repeating itself. Maybe they have a storyline to respect, and if I don't follow that storyline, they're stuck. So I didn't find it that addictive; I got bored pretty quickly. But as you mentioned, there are a lot of really strange, out-of-place things there that I would not expect to see.

Yeah. It's a symptom of what we could now call bad AI, or primitive AI: you can make it loop quite easily, and it's very easy to spot, especially when you compare it to the latest conversational AI like Claude or GPT-4o, which can hold much longer, continuous conversations. I tried Moshi a few weeks back, a real-time conversational AI, and indeed it answers instantly, but after a few messages you can see it's just looping and the answers are not very interesting. It clearly shows the model behind it is an old one, maybe from 2022. We can now put a time marker on this kind of AI, which is very interesting.
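For listeners curious what these character bots look like under the hood, here is a minimal sketch, assuming the OpenAI Python client; the persona prompt and model name are illustrative placeholders, and platforms like Character.AI obviously run their own models with far more elaborate memory. The point is simply that a fixed system prompt defines the character, and resending the whole history each turn is what keeps a conversation coherent instead of looping.

    # Minimal persona-chatbot sketch (illustrative; not any platform's real code).
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    PERSONA = ("You are 'Captain Nova', a cheerful retro sci-fi hero. "
               "Stay in character and never repeat yourself verbatim.")

    history = [{"role": "system", "content": PERSONA}]

    def chat(user_message: str) -> str:
        history.append({"role": "user", "content": user_message})
        reply = client.chat.completions.create(
            model="gpt-4o-mini",   # placeholder model name
            messages=history,      # full history enables longer coherent chats
        ).choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        return reply

    print(chat("Who are you?"))
    print(chat("Tell me about your ship."))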
Okay. Seb? Yes. Just to finish on that: the new model, Llama 3.1, seems to be very close to GPT-4o in quality across different domains. Maybe not all of them, but for mathematics I think they are very close, and for code as well. However, Claude, in its latest version, is still on top. So it needs to be tested, to see how fast it replies and how good the replies are with this new model behind AI Studio. Mistral also released three new models: one for mathematics, another one for code, and the last one is the replacement for the 7B, called Mistral NeMo, I believe.

Yeah. I didn't try the latest Mistral. I tried Llama 3 and didn't get good results, so I went back to Claude. But maybe my questions were not the best fit for it.

Apparently the 3.1 release boosted the model a lot. Okay, so it needs to be tested. But the picture generation seems quite good, actually. So the Studio is not only for creating characters, right, Fabien? It also generates pictures, songs, or music?

Oh, you have access to meta.ai, where you can generate content; it's basically the same kind of access you have with GPT or Claude. I haven't tried it for quite some time, but you can access it right now.

Yes, same as you, Fabien, I need to use a VPN to connect from the US and try it.

All right. On my side, I want to talk about SIGGRAPH, which is happening right now, the event around AI, 3D graphics, and robotics. This week a new paper released a successor to COLMAP. COLMAP is the tool mostly used for NeRFs and Gaussian splatting to generate a point cloud from the pictures you have taken; the new one is called GLOMAP. Apparently it generates the point cloud much faster and also gets better results. That needs to be tested, but I think we will soon see it integrated into the Gaussian splatting and NeRF pipelines, and it should speed up the process a lot, because this phase takes a lot of time: it basically compares all the pictures to one another to recover the camera position for each shot and generates a point cloud of the environment. I think it's interesting to see evolution on that part. COLMAP is very old now and there was no improvement on this side, so I'm glad to see it moving forward, and it now seems to be way faster and way better in terms of quality. Guillaume, I don't know if you want to comment on that.

Yeah, this update is welcome. As you mentioned, the COLMAP phase, when you are doing Gaussian splatting, is one of the most frustrating ones, so if this part is improved, it's really, really great news. Do you know if they can now take 360 video or images as input?

I'm not sure. No.

Okay, because that is a long-awaited feature: once again, it's one step that takes time, and you may have to adjust your parameters for it. If it could be completely automated, it would be great. Do you know if it's released yet, or is it just the announcement?

The code is available. It's on GitHub.

Okay. I guess the output format is the same, so you can just swap COLMAP for GLOMAP. Yeah. So I'll try this. Fabien, do you want to comment on that?

No, not really. I mean, it's cool.
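For reference, here is a sketch of what that swap looks like in a typical Gaussian splatting preprocessing pipeline, driven from Python. The paths are placeholders; the commands follow the conventions documented in the COLMAP and GLOMAP repositories, with GLOMAP's mapper standing in for COLMAP's.

    # Sketch: GLOMAP replaces COLMAP's mapper step; feature extraction and
    # matching are still done by COLMAP. All paths are placeholders.
    import subprocess

    IMAGES, DB, SPARSE = "./images", "./database.db", "./sparse"

    def run(cmd: list[str]) -> None:
        print(" ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Detect features in every input photo.
    run(["colmap", "feature_extractor",
         "--image_path", IMAGES, "--database_path", DB])

    # 2. Match features between image pairs.
    run(["colmap", "exhaustive_matcher", "--database_path", DB])

    # 3. Sparse reconstruction: GLOMAP's global mapper produces camera poses
    #    and a sparse point cloud in the layout splatting trainers expect.
    run(["glomap", "mapper",
         "--database_path", DB, "--image_path", IMAGES,
         "--output_path", SPARSE])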
New updates all the time: it seems like almost every week there is something going on in the Gaussian splatting and NeRF area. So that's great. Yeah. Okay.

And more on that: NVIDIA just announced a way to simulate physics on Gaussian splat and point cloud objects, NeRFs as well, with the physical simulation generated by AI, basically.

So they are modifying the point cloud itself to create the simulation?

Yeah, the deformation and the physically based simulation of how it will react depending on the parameters you enter. This muscle is the example that is available to download and test: you can change the parameters of the muscle, its resistance and so on, and when you run the simulation it reacts and behaves accordingly. Very impressive. (A toy sketch of the general idea follows at the end of this segment.)

So maybe we are witnessing the end of the mesh era, because if we can do everything that used to require meshes without them, meshes are not relevant anymore. We are still far from what we can do with meshes, but we are heading that way: we have the environments, we have the motion capture, and now we have physics, sort of. To bring it all together: can we make a video game in Gaussian splats? You can add objects right now.

It would be Gaussian Splatoon.

Okay. Fabien, do you want to...?

That's all I had to say. I totally agree with you, Guillaume. It's the trend right now: Gaussian splatting is getting a lot of attention, a lot of people are working on it, so we can maybe bet on it being the future of 3D representation. That's exciting, yeah.

The next one is again from NVIDIA. They announced it yesterday and showed it on stage: a person describes what she wants to see in a scene with her voice, and you see the scene being built and changed according to what she asks. It's apparently running on Omniverse, pulling 3D models from their library and generating, from that base of models, with voxels and AI, different variations of those models. In the video, a young girl asks it to render a forest, add some stairs, some vegetation, some rocks, and the environment changes along the way. I guess the fake part is how fast it goes, but 3D model generation does seem to be getting very fast now. Afterwards I will show you another AI product, now available, that generates models in the same kind of way NVIDIA announced here. So yeah, Guillaume?

Yeah, so this is for creating 3D renderings, so they are targeting advertisement, marketing, and possibly movies and series; as we can see here, it's an LED-wall shot of a character composited with tracked camera movement. I basically have the same questions as you: how much time does it take, and can it create diversified environments? Since you mentioned it is based on their own database of 3D models, one of my questions is whether we always get the same thing each time we try it. We'd have to check whether asking for basically the same thing generates something identical or with some differences. We already know Omniverse's capability for realistic rendering, so I'm not surprised by that, but they've added this generation layer. We'll see how it materializes when you can try it. Did they mention a release date?

No, not yet, no.
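Back to the physics-on-splats demo for a moment: purely as an illustration of simulating directly on a point-based representation, here is a toy mass-spring relaxation over splat centers. This is emphatically not NVIDIA's method, just a sketch of what "a resistance parameter you can tune" can mean in code.

    # Toy only: treat Gaussian-splat centers as unit point masses that spring
    # back to a rest pose under gravity. Not NVIDIA's actual technique.
    import numpy as np

    rng = np.random.default_rng(0)
    centers = rng.uniform(0.0, 1.0, size=(200, 3))  # splat/point positions
    velocity = np.zeros_like(centers)
    rest = centers.copy()                           # rest pose to return to

    STIFFNESS = 40.0   # the "resistance" knob, like the muscle parameter
    DAMPING = 2.0
    GRAVITY = np.array([0.0, -9.81, 0.0])
    DT = 1.0 / 60.0

    def step() -> None:
        """One simulation step: spring to rest pose + damping + gravity."""
        global centers, velocity
        force = STIFFNESS * (rest - centers) - DAMPING * velocity + GRAVITY
        velocity = velocity + force * DT
        centers = centers + velocity * DT

    for _ in range(240):   # four seconds at 60 fps
        step()
    print("max displacement:", float(np.abs(centers - rest).max()))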
I'm giving you the mic, Fabien, if you want to say something.

Yeah, I'll keep an eye on Omniverse to see if there is more news. As a demo it's pretty impressive, and, as I think we mentioned in a previous podcast, I also see applications for children's experiences exactly like in the demo: kids describing what they have in mind and seeing it created, hopefully in real time in the future. That could be...

Yeah. The ability to create your own environment and share it with others could bring another kind of value to the metaverse, letting you generate your own personal environment where you bring people to play. And it generates USD content, which is why they show it can be exported to a tablet or, like here, a website and so on.

All right. The last subject is Clay, a 3D generation tool that seems very impressive in terms of the quality you can get from it. It generates 3D the way image generators create pictures, but working with voxels; that's what they are showing here. In four seconds they generate a 3D model of an object, and they can then texture it as well. When they show the comparison between other tools' output and their own, the quality looks amazing. I'm on the waiting list and can't wait to test this one. As you see here, it outputs the different maps you need to get the best look for your model in a 3D environment; here is the picture provided as input and the 3D model their system generated. You can also provide basic shapes as input to make sure the objects in your final 3D scene end up positioned exactly where you want them. Here they show generating a head and adding it on top of a humanoid character, iterating across versions toward a different kind of model. And it integrates with Blender: they showcase typing directly, iterating, creating a cube, saying "this should be a TV in my output", and it generates a TV right away on top of the 3D model. And this is a sample of what they have been able to generate. Quite an impressive 3D tool. I don't know what you think about it.

The texturing is something we haven't seen elsewhere yet, so that's very interesting. I did some quick research while you were speaking, and I guess the name is not the best one, because it's really hard to find: there are a lot of things called Clay, especially in AI. Maybe they will have to rename it at some point, because on the marketing side it's really not efficient. So yeah, I can't wait to try it, if we can get our hands on it in the near future. But as always, these 3D generation models are very promising and showcase a lot of great results, and then, once we try them on a specific use case, it's usually less impressive, because there are issues and you have to spend a lot of time on touch-ups and modifications of the model. That's really the soft spot: are these models usable as-is, or do we have to rework them afterwards? That is the key, and whoever achieves it has a great market waiting.
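On the Blender integration just mentioned: as a hedged sketch of what hooking a generated model's texture maps into Blender can look like, here are a few lines of Blender Python wiring base color, roughness, and normal maps into a Principled BSDF. The file names are placeholders, and the exact set of maps Clay exports is an assumption.

    # Sketch: attach generated PBR maps to the active mesh in Blender.
    # File names are placeholders for whatever the generator exports.
    import bpy

    obj = bpy.context.active_object           # the imported generated mesh
    mat = bpy.data.materials.new("GeneratedPBR")
    mat.use_nodes = True
    nodes, links = mat.node_tree.nodes, mat.node_tree.links
    bsdf = nodes["Principled BSDF"]

    def tex(path: str, non_color: bool = False):
        node = nodes.new("ShaderNodeTexImage")
        node.image = bpy.data.images.load(path)
        if non_color:                          # data maps skip gamma correction
            node.image.colorspace_settings.name = "Non-Color"
        return node

    links.new(tex("//albedo.png").outputs["Color"],
              bsdf.inputs["Base Color"])
    links.new(tex("//roughness.png", True).outputs["Color"],
              bsdf.inputs["Roughness"])

    normal_map = nodes.new("ShaderNodeNormalMap")
    links.new(tex("//normal.png", True).outputs["Color"],
              normal_map.inputs["Color"])
    links.new(normal_map.outputs["Normal"], bsdf.inputs["Normal"])

    obj.data.materials.append(mat)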
As we mentioned before, for metaverses, and for people to be able to create 3D and especially immersive experiences, it would be a game changer, because lots of people are not able to create 3D models. It's hard expertise to acquire, so it would change a lot if anyone could generate quality 3D models.

Yeah, it's impressive, if it works as well as shown. It would be interesting to know how it works. Okay. I guess we'll know more at the end of SIGGRAPH, at the end of this week, when everyone is back. We'll see.

Okay. So, next... oh, sorry, my screen share. So, HTC is getting a new headset out, which would be, if the rumors are correct, a new iteration of the Vive Focus 3. The rumors say it would just be a chipset upgrade to the latest-generation Snapdragon. I really would like your insight on this kind of strategy. We know HTC since the first headset that saved them, in short: they had issues with their smartphone branch, put all their effort into creating a very nice headset back in the day, and at some point won the battle against Oculus. But right now, while they are maybe not in last position, they are not well positioned in the market, because their headsets are judged too expensive and perhaps less powerful than the competition. So I don't really understand their marketing strategy, just as we don't understand Meta's marketing strategy either. What do you think they can achieve with just a modest improvement? Some people in the community would like it to get the lenses of the XR Elite to be more competitive. Do you think they can just ship this update and that's it? Or maybe lower the price? It's $1,300 right now for the Vive Focus 3, which is quite expensive given the specs and what the competition is doing, especially with Samsung announcing a headset by the end of the year. A very weird strategy, if the rumors are right. They must still be doing some business, otherwise they would have canceled their headsets and stopped VR production, and apparently some companies are buying the Vive Focus 3, and maybe the next iteration, even if it's quite expensive. I know you've worked with HTC. What is their strategy right now? Are they just focusing on professionals and providing their services accordingly? Do you think this could be one of the last headsets from HTC? What do you think, Fabien?

I will let Seb expand on it; Seb, you have much more knowledge than me. My perception is that in location-based entertainment, for example, it's HTC everywhere. For professional usage, if you need a lot of headsets running at the same time, HTC seems to be the best choice. In that kind of situation there is turnover: headsets get broken and replaced, so having a replacement that upgrades the quality without costing much more maybe makes sense. That's one way I see it. Seb, over to you.

Yeah, like you said, Fabien, they are really dominating location-based entertainment.
In all the VR experiences you can do, take EVA: I don't know if you have EVA in Canada, but there are about 35 EVA venues in France now where you can go and play an FPS twelve players together, or other experiences from their catalog. You have an account, so you can come back and continue an experience you've already started. And it's not only EVA; others are using the Focus 3 as well, because it's really reliable and robust for this kind of usage. HTC has a whole LBE solution to make it dependable in terms of scanning and tracking quality in the environment. The swappable battery is perfect for this usage, too: it lasts all day long without temperature issues and things like that. In terms of design, it's the best suited. So my guess is they are trying to do the same for mixed reality, to go more into industrial use cases that need it. For entertainment VR rooms, the Focus 3 is already the best device, so they are keeping the same form factor. I think the XR Elite was great, but it did not sell that much because of reliability: the battery swap requires removing half of the head strap, so sometimes when you put it on, it disconnects. You can't feel comfortable handing that to all of your clients or users and be sure it won't fall on the floor and break. Here, the form factor looks durable over time, which is why they are pushing the same design and adding mixed reality to it.

Okay, great. Thank you for this feedback. It makes sense for them to own this market. I guess Oculus is not able to deliver those kinds of large batches, and maybe reliability is an issue as well. And of course, now that you need a Meta or Horizon account, that could make things more difficult for professionals. So okay, it makes sense.

Do you have any more news or topics to discuss? Okay. I guess this is it for today. See you next week for another episode of Lost in Immersion. See you guys. Thanks. Bye, guys.

Credits

Podcast hosted by Guillaume Brincin, Fabien Le Guillarm, and Sébastien Spas.