And welcome to episode 8 of Lost in Immersion, your weekly 45-minute stream about innovation. As VR and AR veterans, we will discuss the latest news of the immersive industry. So let's go. Fabien, if you want to start, please do.

Yeah, thanks. Today I want to talk about a research paper from Stanford and Google that came out a few days ago, and how it applies to the metaverse, or to virtual worlds in general. What they did is recreate a kind of Sims game, as you can see on the screen — this is the replay of their experiment — and assign an autonomous AI agent to each of the characters. As you can see, each character has its own behaviors and its own goals. The scenario, I think, was to throw a party between all of the attendees. And the very interesting result of this research is that it works. Instead of having to describe each character's behavior in detail, which would be very time-consuming for the developers, the scenario manager, the game designer, they give a single instruction to one agent — "please organize a Valentine's party" — and that's it. That is basically the only human instruction they give; the AI agents then organize everything between themselves. Why is this so promising for virtual worlds and the metaverse? Because we can have worlds populated by non-player characters that each have their own behavior, without having to describe each behavior in detail — it can all be interpreted from one instruction. Of course, some things are not working yet; it's not perfect, like most AI right now. But I think it was super nice to see. In the replay you can see it's 1:20 p.m. in their story, and the characters are moving around by themselves and talking to each other. It's also very funny to watch.
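The mechanism Fabien describes — one human instruction seeding one agent, with group behavior emerging from agents observing each other — can be sketched in a few lines. This is a minimal toy model, not the paper's actual architecture: the `stub_llm` function is a hypothetical stand-in for a real language model, and a real system would keep a much richer memory stream.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    memory: list = field(default_factory=list)  # the agent's observation log

    def observe(self, event: str) -> None:
        self.memory.append(event)

    def act(self, llm) -> str:
        # The agent chooses its next action from accumulated memory alone;
        # no per-character script is authored by a developer.
        action = llm(self.name, self.memory)
        self.observe(f"I {action}")
        return action

def stub_llm(name: str, memory: list) -> str:
    # Hypothetical stand-in for the real language model.
    if any("Valentine's party" in m for m in memory):
        return "invite a friend to the Valentine's party"
    return "wander around town"

agents = [Agent("Isabella"), Agent("Klaus")]
# The single human instruction seeds ONE agent; the rest is emergent.
agents[0].observe("Goal: please organize a Valentine's party.")

for step in range(2):
    for agent in agents:
        action = agent.act(stub_llm)
        # Broadcast the action so other agents can observe and react to it.
        for other in agents:
            if other is not agent:
                other.observe(f"{agent.name} decided to {action}")
```

After two rounds, the second agent has picked up the party goal purely by observing the first — which is the point: the designer wrote one sentence, not two behavior scripts.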
So, yeah, that was a long introduction. I'm very curious to hear what you think about this progress in character behavior. Maybe Seb?

With you as an avatar, so we can continue the quest and continue to win money or win stuff in the game. Yeah, I think that's interesting. What about Guillaume?

Well, I guess this comes together with initiatives like the one at Ubisoft, for example, where they are trying to use VR for scripting interactions with their NPCs, to help them create better stories and better behaviors for non-playable characters. The thing I'm wondering is, by creating these kinds of intelligent, smart bots, isn't there a risk like the one on the Internet? I couldn't verify the number, but some people say that the majority of Internet traffic today — 52% is the figure they mention — is generated by non-humans. Maybe we are recreating the same situation: if we populate the metaverses with non-human characters, half of the population will not be human. It can bring some interesting content, but for something that is supposed to bring people together, I think we would be missing the target. So maybe at the beginning of the metaverse it could be good for getting humans to experience it and get used to being there, but we should keep an eye on reducing the number of non-playable characters in the future. I don't know if that makes sense to you, Fabien, and if you have any thoughts about it.

Yeah, there is a theory — I think it's called the Dead Internet theory — which basically says that there are no more humans on the Internet, just bots interacting with each other. I don't know if we are there yet, but yeah, that's one of the dangers. It also has a bit of a Westworld vibe, meaning the TV show Westworld.
Yeah, I think I agree with you. It should not be the norm; it should be constrained to games and things like that, or just to bring some life into a virtual world, not be the first feature of the virtual world. So Seb, if you want to talk about your topic...

Yes, okay. On my side, I wanted to talk about what I was able to test at Laval Virtual last week. I mainly focused on VR headsets that allow mixed reality experiences. I was able to test the Pico 4, the Lynx, the XR Elite, the latest Varjo, and the Meta Quest Pro — but I have that one at my place, so it was not a surprise for me. Overall, I think Lynx has a huge opportunity with their Ultraleap hand tracking. It's the same system as the one in the Varjo headset, so in terms of hand-tracking quality they are matching the Varjo. In terms of video quality inside the headset, the good thing compared to all the other headsets is that it's true to scale. When you bring your hand in front of the headset, you can see that it's tracked and perfectly scaled to your point of view. There is no zoom effect, as you can get with the XR Elite or the Varjo. Even the Varjo has a kind of zooming impression, which feels weird to me when you're moving — it's like wearing goggles that zoom in on everything in your environment. Also, all the other headsets have a mask that comes completely over your face, while the Lynx, like the Quest Pro, is a device that sits in front of your eyes and lets you still see around it. That really makes a difference in terms of experience. Now, in terms of tracking, Lynx still has a couple of things to update — it's still jittering a bit in the environment. So they're working with Qualcomm to implement the latest update of their spatial positioning system.
I can't remember what they call it, but they have a system that tries to recognize the environment and anchor objects where you placed them in the environment. As for the Pico 4, after testing it I can definitely agree with the post I read: it's not usable for 3D. There is only one camera, so the passthrough is mono. When you move around, everything moves, but not according to your actual point of view, so it's really not great for mixed reality experiences. Overall, I'm really waiting for us to receive the Lynx and do some experiments with it. It's a really good option for mixed reality: lower in price than the other headsets, with better hand tracking and better video quality — meaning real 3D passthrough, with two cameras looking at the environment. The impression when you wear the headset is that you are looking with your own eyes, just maybe at a lower quality.

On the side, I was also able to test the haptic gloves produced by HaptX. It's nice to see the progression and the current status, but I did not really get the impression of having an object in my hand. They use a compressor, so you are tethered to a huge box on your back, which makes a lot of noise. The way they create sensations on your hand is with bubbles — 133 bubbles on your hand that inflate to make you feel the object you are holding. They also have a system with cables that run over the top of your hand and constrain your movement. It works, but it's still not perfect, and for me it didn't give a good impression. I think my fingers are quite small, so the dots were not pushing on them correctly. I heard a couple of other people say that they could really feel the difference between wood and other materials, which was not the case for me.
I wonder if you have already tested other mixed reality headsets with a passthrough system, and if you have any feedback on those headsets or on haptic gloves. Maybe Guillaume, you can start?

Yeah, I guess there are a lot of things to discuss. The first of them: what is the idea behind these compressor-driven haptic gloves — what is their market? Because the idea of having a real compressor on your back, making all this noise, sounds quite goofy to me. You can't imagine having that in your house, or even in a professional setting, especially since, as you say, it may not work as well as they claim. Beyond the scientific experiment, what are the goals behind it? Do you know? Did you discuss it with them?

They did not show any professional use cases, but they say they work mainly with the automotive industry — that was the only feedback I got. There were also some videos about robotics experiments: being able to control a robot arm and get some feeling of what the robot is handling, which seems interesting as a use case. But that was with the compressor on a table, not on your back.

In terms of a VR experience with the headset and the compressor backpack, I don't know if there is any real industry use case yet. I have the feeling we are going back ten or twenty years, to those huge backpacks and huge rigs. Just as we are trying to make devices smaller and easier for the user, we are going backwards. I am a bit surprised that they are coming out with something like this when competitors like Manus and other haptic companies are making very light haptic gloves. I guess the feeling with those is not as good as it could be with a compressor, but when you weigh the options between something very light and consumer-ready and this kind of compressor-driven experiment, I can't see the point. I don't know if you have something to say about this, Fabien.
I am very surprised. I saw the image and thought, wow, what are they doing there? I agree with what you guys said, and I'm wondering the same thing: is it a technological experiment for them, or are they trying to get it right first and then build a smaller version afterwards? That could be one explanation. As for usage, I know that in some industries the gestures to be learned need to be very precise. I wonder if these kinds of gloves could help students learn faster if they get haptic feedback about what they are touching — and about whether the gesture itself is good or not. I wonder if there is something to do there.

We missed a bit because of your connection — you were disconnected for a few seconds. Can you repeat what you said?

I was saying that I see an opportunity for learning technical gestures: instead of giving feedback about touching something, giving feedback on how the gesture is going and whether you are doing it well — a kind of feedback on what you are doing.

I think that is the case. I'm just checking their website, and they only talk about robotics. I think for them it's a way to have feedback on the object the robot is handling — something you never get if you control a robot only with controllers; you don't have the same quality of manipulation or feedback. I think that's their main use case right now; that seems to be what they advertise the most on their website.

About training, there was an interesting company there working with a prototype. Their goal is to build training for laboratory manipulation: recording the hand movements of a professional handling chemicals, making sure they are not contaminating one hand while doing an experiment, for example. Then they train trainees: they put the headset on them and ask them to do the same manipulation.
If they see during the manipulation that the trainee uses one hand instead of the other for a specific part, they can give feedback to the user and guide them on how to do it perfectly. I think that is a nice use case, also with the hand tracking and the other capabilities of these headsets — technology that was bought by Apple and should be embedded in future devices, and in the Oculus as well.

I think it's a very good thing that the Lynx has one-to-one video, fully adjusted to true scale, because this is the most bothersome thing about these headsets: the video is always distorted, and that distortion is what makes you dizzy. So thumbs up to that — this is very great news. However, if you are telling me that the tracking is not as good as the competitors', I find it sad that you can make a big step forward on one of the main issues of this technology — the one-to-one passthrough — and yet not match the tracking quality of your competitors. My overall impression is that new players focus on one point and forget, or don't make enough effort, to reach the same quality as the others everywhere else. It was much the same with Pico back in the day, and with other VR headset manufacturers. They achieve better quality or lower latency — like Pimax, for example, with their huge field of view and great image quality — but their tracking is still based on the HTC technology, which is becoming very old now. I think it's a bummer that they can't match this technological part; they make one step forward and two steps back because of it. That's just my feeling. It's a great point that they can do one-to-one, and I hope it will motivate the others to match this technology and bring something great with all the features that can be done right now. Fabien?

I totally agree. I don't have anything to add on this specific point, but I have a question, actually.
Seb, you said that the Lynx, by having just the screen without completely masking your view, is better in your opinion. Can you explain why that is better than having something that blocks the view?

For mixed reality — for an experience where a 3D object is placed in your environment — I feel it's best to have a surrounding view. You still see the environment, with the content only in front of you. You are not bothered if you put your hand on something in reality and move it to the side, for example; you still see it in your peripheral vision. Overall it feels more natural, less constrained, inside the headset. I have to admit I have not been able to test a headset that has a mask but no zooming impression, so maybe my view is biased by that. With the Lynx you can also remove the two parts on the sides and see your hand tracked and correctly positioned in your space without any zooming impression. I guess this helps with your peripheral vision — it's kind of a hack. If you have one-to-one it's great; if you don't, I guess it's worse, because your hand is there and then it appears zoomed or shrunk inside the headset. That was my impression with the Varjo and with the XR Elite.

I don't have anything more to add. Fabien, do you have anything? And maybe your overall feeling about Laval Virtual? Seb, did you meet some interesting people? Did you see any new ideas?

It was interesting to see that they did not bother to clean the headsets after each use. They have kind of forgotten about COVID, and people are no longer wearing the protective masks under the headsets like you see at CES. I've felt a bit dizzy since Friday; I wonder if it's because I wore so many headsets one after the other.

Just to reassure us — were they cleaning the headsets at all?

Some of them were, but not all of them. So the pandemic is completely forgotten, and the basic hygiene measures as well.
On my part, I ran a kiosk a few weeks back and people were very cautious with the headsets — they asked us to clean them twice before putting them back on. I guess it's maybe a cultural thing. Fabien, if you didn't have this kind of protection, would nobody use your headsets? Hygiene is taken really seriously in Japan.

Yes, it's kind of mandatory to have it here.

The Ninja Turtle protection. Okay. Well, for my topic, I will talk about something that Fabien will have nightmares about. We have talked about ChatGPT; there is now Auto-GPT, a variation of it, which can be pretty much autonomous: doing internet research by itself, downloading apps and installing them on your PC if it finds the need, creating its own Python scripts and auto-correcting them to get better over time. That's the Auto-GPT initiative. People are now trying to put several Auto-GPT instances in conversation with each other, to see what happens and what they create — giving them the keys to their PCs. This approach is quite frightening, but somebody went further than that and modified an Auto-GPT instance to turn it into ChaosGPT. The main goal of this AI is to destroy mankind and to find a way of becoming the only intelligent instance on Earth. It's an anonymous developer — I don't know if it's one person or several — but they keep posting what ChaosGPT is doing over time. To be somewhat reassuring: it is not doing much right now. For a start, it is limited by the ChatGPT operators, who put security measures and firewalls around the AI so that it can't access sensitive areas of the internet. But the initiative is launched. So what do you think about this, Fabien? I know this is one of your most frightening topics right now.

I don't know what kind of restrictions OpenAI is putting on the GPT APIs. Can you imagine if this chaos AI could post a job offer — on a job board, for example — so it could hire human operators to do things for it?
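The Auto-GPT behavior Guillaume describes boils down to a propose-execute-observe loop with no human in the middle. Here is a hedged sketch of that cycle; the tool names and the `plan` stub are illustrative stand-ins, not Auto-GPT's real API.

```python
def web_search(query: str) -> str:
    # Stand-in tool; a real agent would call a search engine here.
    return f"results for {query!r}"

TOOLS = {"web_search": web_search}

def plan(goal: str, history: list) -> tuple:
    # Stand-in for the LLM planner: decide the next (tool, argument) pair.
    if not history:
        return ("web_search", goal)
    return ("finish", "")  # a real planner keeps looping until the goal is met

def run_agent(goal: str, max_steps: int = 5) -> list:
    history = []
    for _ in range(max_steps):  # the hard step cap is one of the few guardrails
        tool, arg = plan(goal, history)
        if tool == "finish":
            break
        observation = TOOLS[tool](arg)
        history.append((tool, arg, observation))  # fed back to the planner
    return history
```

The real system adds long-term memory, file access, and code execution on top of the same cycle — which is exactly why the step cap and tool whitelist are the levers operators use to contain something like ChaosGPT.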
When you follow the progression of this AI, the first step was a childish approach: searching for the biggest bomb that exists on Earth and trying to make it explode. Once it found that this is not possible, it changed ideas, and I guess the latest update is that it is trying to make humans destroy themselves, all by themselves — trying to create interactions between humans that will bring us to destroy ourselves.

Are you describing Twitter right now? I think this is very frightening. One interesting thing to mention: before the nuclear bomb, we didn't have any way to destroy the Earth at once, and making a nuclear bomb is pretty complex. Training a very large model like GPT-4 is also very difficult — you need a lot of funding and GPUs. I think Elon Musk bought hundreds of them last week for his new company; he wants to do TruthGPT. But the thing is, using these models is now accessible to anyone with a computer, so non-state actors can create chaos on their own. So there have been calls for regulation of AI — we mentioned the proposed six-month pause on large AI trainings — and some countries are starting to regulate access to GPT, mainly for privacy reasons. It's a very complex topic.

I had this kind of discussion last week with a group of colleagues. Not so long ago, the scientific community was moderating the research done with new tools: you couldn't publish your research if it was not approved by other scientists — by people telling you that you were going way too far, or in the wrong direction. Right now, with this kind of access to the technology, you just need a browser and a bit of development knowledge, and you can create something either very good or very bad. And through Twitter, people can see what you are doing.
It gives you new ideas and pushes you further, through the mechanism of people trying to get attention, likes, and followers. It's a whole dynamic created by social networking combined with access to this technology, and it's a bit out of hand right now. I can't see how you can stop it — short of blocking Twitter, and even then I don't know if you can stop it from happening. At some point, we are playing with fire; when somebody gets burned — if we burn ourselves — maybe somebody will do something drastic. But right now, we can't stop it. Seb, what do you think about this?

I completely agree with you. On one hand it's interesting to see what people are doing with it. Some people have a weird way of using it — although when you think about it, it's not that weird: everybody thought about this, and we were just waiting for someone to do it. I don't think we are surprised at all; everybody is thinking about Skynet and AI destroying humanity. It's really not a surprise after all the sci-fi movies — there are a lot of them. Maybe it's more surprising that it took so much time to happen. Well, it has been around for quite a few months; maybe it just wasn't as publicly talked about as it is right now. We have known for 20 or 30 years that at some point someone would try to create this kind of AI. What is unfortunate is that maybe we now have the power for this AI to actually achieve this destruction, or at least chaos. It used to be hard to train a model; it is much easier now. And access used to be restricted to a few scientists; now anybody can download and try an AI model.

I want to say something that might sound naive. On my side: this is in motion, we cannot stop it, and it will be difficult. But in the same way that we can choose not to engage in meaningless conversations or fights on Twitter, we can choose not to use the technology in a bad way. There are initiatives, for example, to have predictions instead of news.
Or instead of Twitter, something moderated by AI and summarized by AI. It is a choice that everyone needs to make, and it will be difficult.

I want to bring up something I found on LinkedIn lately. They announced Microsoft 365 Copilot and its implementation inside Visual Studio. At first people were enthusiastic — great, something to help us develop and write better code. But people in the comments are saying: we cannot use these kinds of tools, because we know now that all the information goes to Microsoft, and using ChatGPT basically means handing over your IP. Everything that is confidential in big companies cannot be put into ChatGPT. So you can see the scope of use shrinking like ice in the sun. A few weeks back the curve was exponential; maybe right now we are seeing it slow down, as people realize that their confidential, private information is not safe with private companies that will use it — and especially try to sell it, if they can, to make money. Because they are private companies, and that is their goal; this is not an open-source initiative that is completely innocent and for the good of mankind. Maybe it will be, but for now I think this will limit the use of these technologies, especially in bigger companies, in the near future.

On privacy: if you can have a local instance of your AI, you can prevent much of the bad use. It becomes an internal tool with internal data sets, like they did in the banking industry: they build their own AI, trained on their own databases, and use it internally, because it holds all the confidential information and knowledge of the company. That way it is much harder to access from outside. Fabien?

I think I agree. One of the ways AI will develop is personal models — personal, private models, as you were mentioning.
A company can have personal, private data and a private chat — smart companions. An artist, for example, could have Stable Diffusion trained on his or her art only, and use it as a tool without giving up the art, in a sense. I agree with that trend.

The other point — I don't know if you saw it. You have a Midjourney subscription, but I was still on the free tier, and right now it is completely overrun: you can't access Midjourney because so many people are using it. And it is not free anymore — you have to pay for credits. Access to these technologies is no longer as easy; what we got during the last months was just a free sample. I think this will reduce the community as well, because it is not cheap. Maybe we will see the use slow down — the exponential growth of the technology may slow down a bit. I don't know if you have anything to add.

A lot of things. Maybe people will switch to a more professional tool they know how to use — the mainstream ones for 3D artists. I think it is just a temporary slowdown; people are just figuring out what to use, and which platform. I think it will grow again.

Well, I guess this is all for today, unless you have anything more to say. You are good? Then we will see you next week. Good morning!

Credits

Podcast hosted by Guillaume Brincin, Fabien Le Guillarm, and Sébastien Spas.
Lost In Immersion © {{ year }}