Welcome to episode 3 of Lost in Immersion, your weekly 45-minute stream about innovation. As VR and AR veterans, we discuss the latest news from the immersive industry. So let's go, Fabien, as usual.

Thanks. Today I want to talk about two displays that are not available yet, and my bet is that they won't be for a while, but they are quite interesting. Currently, most displays for VR or mixed reality are lenses sitting right in front of your eyes. But there are two kinds of not-so-new, yet still in-development, displays that instead project the image directly onto the eye. The first is retinal projection, and a few companies are working on it; for example, Lark Optics was in the news last week because they announced some investment and some innovation around it. The second is even more integrated: a contact lens display that sits directly on the eye. Both face many technical challenges. The first approach is easier and a bit more mature. The contact lens still has a lot of technical challenges around the display itself, the power, how to transmit data to the display, and so on, so my guess is that it won't arrive for quite a while. I was curious to discuss that with you. One note I have is that Lark Optics' business model is quite interesting: they are positioning themselves as a display provider, not a headset maker. They won't build a complete headset; they will just supply the display that other headset manufacturers can use. I thought that was an interesting business placement. So we can discuss that a bit, and I'm curious to know whether you think it's useful and how likely it is that we'll see it in the next couple of years, or in five to ten years. Seb?

Myself, I'm not in a hurry for that technology to become available. Lenses rather scare me as a technology. You see it in Black Mirror and other movies and TV series; there is one episode about a lens that displays data directly in front of the user's eyes. It requires a lot of control from the user, I think, even mind control. So I think we are, like you said, far from having something that works. A screen will come, but it will still be missing the fact that you need a good position of the head in space, relative to the environment, to display something interesting. And adding cameras inside the lenses and so on, that's a whole other level they need to reach before having something really usable, I think. The first versions will probably be like Google Glass, something that stays in one place on your retina all the time, and I'm not sure it will be enjoyable for the user to always have something displayed at the same position. When you run this kind of test in the HoloLens, for example, following the user's eyes and always displaying something in front of them, it gets annoying after a while. What about you, Guillaume?

Yeah, well, I'm a bit surprised about Lark Optics, especially because Epson already tried projecting directly into the eye. It was in 2015, if I remember correctly, and they abandoned the project in less than a year because they found that, one, it was very invasive, and two, because of eye movement it was very hard to keep projecting inside the eye.
But on the safety side, I'm less concerned about that one than about the contact lenses, because when you go to the optometrist they are basically projecting light into your eyes; it's not very comfortable, but I don't think it's a hazard for your eyes. For the contact lenses, on the contrary, the main issue for me is the power supply. They use electromagnetic fields to power the display, and if I remember correctly, the field strength needed is already very high just for displaying numbers or very simple images. So I can't imagine the kind of power you would need to surround the user with for them to have complex displays in their eyes. I don't want my eyes cooked, so I won't be trying this for a long time. What are your final thoughts about this, Fabien?

Yeah, I totally agree with both of you, actually. I think it could be nice once safety, privacy, and the whole checklist are covered, but I think we are very far from having something usable, and even further from something for the general public. I guess the best way of doing this kind of thing without glasses would be to interact directly with the optic nerve; it would be the least painful and least complicated approach. But we are very, very far from that too. I don't know whether Neuralink is looking at this kind of thing as well. Maybe. That's another question.

So in the meantime, over the five to ten years to come, which technology do you think will arrive first? HoloLens tried a holographic display with a transparent screen, still projecting the content towards the eyes, and now, with the new Qualcomm platforms, most of the players are going with video see-through, where you have a screen and cameras that film the scene. Which one do you think may take the lead, or make a comeback in the coming years? Did you see anything about that?

No, I think the current technologies still have quite a few years ahead of them. I think the main milestone for AR will maybe be in June, when Apple shows its long-rumored AR glasses. We'll see what direction they take, and I guess that if it's a success, it will set the road for every AR headset released in the following years.

Right now, for me, I have the Quest at home and I have done a lot of pass-through tests with this technology. But the way they process the video, with two black-and-white cameras merged into a 3D reconstruction and the color camera's image projected onto it, makes everything very blurry, and that really degrades the experience: the 3D objects and the video feel like two worlds that don't match each other. You have to decrease the quality of your 3D content to make it blend in. Compared to what you can do with an optical lens, keeping a complete view of your environment with maybe cheaper 3D objects, I think that works better for me.

Well, one hint I can give you about the Apple headset is that they bought Vrvana, a Montreal company, and I had the chance to try their headset back in 2014. It was a video see-through headset and the performance was really exciting at the time, so I guess they have figured out how to make it work. They used to be a competitor of Varjo, and with the power of Apple, I guess they are very close to something that works.
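As an aside, here is a minimal sketch of the kind of video pass-through pipeline described above: depth is estimated from the two monochrome cameras, then the colour image is reprojected onto that geometry before rendered 3D objects are composited on top. Every input, calibration value, and the final warp are invented stand-ins for illustration; this is not Meta's actual implementation.

```python
# Illustrative only: stereo depth from two monochrome cameras, colour warped on top.
# All frames and calibration numbers below are synthetic stand-ins.
import cv2
import numpy as np

rng = np.random.default_rng(0)
left = (rng.random((480, 640)) * 255).astype(np.uint8)   # monochrome tracking camera, left
right = np.roll(left, -8, axis=1)                         # fake right view, shifted 8 px
color = cv2.cvtColor(left, cv2.COLOR_GRAY2BGR)            # stand-in for the RGB camera frame

# 1. Stereo matching on the monochrome pair gives a disparity (inverse depth) map.
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# 2. Disparity -> metric depth, using an assumed focal length and camera baseline.
focal_px, baseline_m = 450.0, 0.10
depth_m = (focal_px * baseline_m) / np.maximum(disparity, 1e-3)

# 3. Reproject the colour frame onto that depth so video and rendered objects share
#    one 3D space. A single identity homography stands in here for the real per-pixel
#    reprojection; errors at this step are what make pass-through look soft.
H = np.eye(3, dtype=np.float32)
passthrough = cv2.warpPerspective(color, H, (left.shape[1], left.shape[0]))
print(depth_m.shape, passthrough.shape)
```

Any inaccuracy in the depth estimate or the reprojection shows up as exactly the blur and mismatch between the video and the 3D objects described above.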
Given all the companies that Apple bought, I guess they are taking the video see-through track, and the Vrvana headset had the ability to switch from AR to VR in an instant. That was its main advantage at the time, so we'll see, but I guess this is the road they are taking. I talked with Bertrand Nepveu, who was the last CEO of Vrvana, and he doesn't give many details, but he talks a lot about his headset from back in the day, and he says that Apple took it seriously. We'll see, but I guess they are going the video see-through route.

I thought that too. I was wondering what the difference would be between what we see right now and their headset, but they have the Apple power behind them. Great. It's your turn.

On my side, I wanted to continue the discussion we had last week about Meta: having sold a lot of headsets, but not having a lot of users coming back, trying new applications, or buying new applications. Right after that, I think the very next day, they announced a move towards more generative AI development and job cuts on the Metaverse side. I don't know, maybe you can tell me what you think about that.

I have a lot to say about this, so I'll try to be brief. I think there are a lot of forces at play, if we can put it that way. AI is absolutely exploding right now, with all the GPT models, ChatGPT, and GPT-4 today, and all the players are reacting; Google actually responded yesterday as well, announcing their own generative AI. Meta changed their name a few years back to focus on VR and the Metaverse, and now they are in a difficult position: they see all the other players moving super fast on AI, while on their side they still have to deliver something for the Metaverse. I'm sure they still have the engineering power to compete, and I'm sure they are exploring AI as well, but we'll see how it goes.

On my part, on the Meta subject, there are two things. The first is that the market right now is very emotional; it's driven by emotion, to put it simply, and it's not always best to follow emotion, especially when you are doing R&D, because you just become a follower instead of a leader in some technology. It was a bold move by Meta to invest so much in the Metaverse, or in VR in general, since for a lot of people it was not as advanced as it was claimed to be. They were right on the VR hardware part, as we discussed last week, because I think the hardware is ready for mainstream adoption. But on the Metaverse part, I think they were wrong, because we can see that it's a long-term project. They thought that in less than five years everybody would be willing to share a virtual world and experience VR the way they experienced Facebook back in the day. Those were their mistakes, so to speak. But their main issue remains the same, and this is my second point: they really need to reinvent themselves, because Facebook is losing a lot of advertising partners and its population is on the wrong side of the age curve, since the youngsters are not using Facebook anymore. They need to find something to bring back their audience. They thought the Metaverse would bring in the younger ones more quickly, but it's not the case. Right now they have just flipped to become followers and jumped on the AI bus. But I don't think they will completely abandon the Metaverse the way Microsoft did. Microsoft also made this kind of flip, refocusing on Microsoft Mesh, which is some kind of Teams 2.0 or 3.0 with VR and avatar-oriented features.
Maybe Microsoft is on a better track to the Metaverse than Meta is, because they are going step by step, following where the technology actually is, and I guess they will lose less money in R&D by doing so. We'll see, but Meta is not in a good place right now. Seb, do you want to comment on that?

I think what you said is interesting. I think they made the mistake of switching to the Metaverse too soon and spending all that money there; it's all about timing. They also forgot that it's great to give users a way to create worlds and interact with each other, but first you need the interaction itself: you need to see the other user's face and how they react to your movements, things like that, to make it more interactive and bring more life to the environment. And then it costs a lot to make a 3D environment that looks realistic or nice, with interactions and things that are interesting to do. I think AI can solve this kind of issue and make it faster and easier for a new user to come in, just by typing text and explaining what they want, and have something made directly by an AI inside the experience. So, the two together: you can directly describe the world you want to create, and it becomes more interesting, quicker, more dynamic, something you can share with others. I think that's largely what is missing when you look at the apps that are coming out: they take something like two years to be developed, at least the nice ones, and most of the metaverse applications Meta showed were ugly, with use cases and features that maybe don't even work. From all the videos I saw, the experiences were really bad. For us, it looks like VR from ten years ago, like what we did at the beginning of our respective careers; that's what you get with a first application made by someone new to the field. And that's exactly what they did: they provided a simple tool to create an application, but there is no way to do complex interactions, or at least not directly; it's very complex to implement.

Do you think they reinvented the wheel, or lost quite a few years, by not trusting the old-timers of VR?

No, I think they took the first step, but they forgot to bring something interesting to the user. They need content. They came with a small library of media, everyone is building the same kind of 3D environment, and there is no differentiation between them.

I guess the thing they are lacking is a community, simply because you can see that VRChat is doing well: people are creating environments and people are going to them, while on the Horizon side they are making environments but nobody goes there. They are just missing that passion, a community that can bring content and something interesting to the table. You had a second subject, and I guess the transition is perfect.

Yes, the new NVIDIA work on instant neural graphics. Someone posted a short video shot in a street in London and managed to turn it into a VR experience: you can walk down the street. I guess it is still early; when you move forward, you can see that everything is like particles floating in space, not perfectly positioned. But it starts to be impressive to generate that in a couple of minutes, without the pain of having a 3D modeler doing everything for five or ten days. For me, it is a new step in the generation of 3D environments, and it is really realistic. What are your thoughts about that?
It is a really exciting technology. I am a big movie fan, and since this was recorded with only one camera, what I am really looking forward to is a movie where I can choose my own viewpoint: move around, pause, look around, maybe see a scene from different angles. That is the movie fan talking. And of course, bringing that to VR is also a very impressive way to build an environment.

For me, I am a very big fan of 3D scans and all the related technologies. The main issue I see is that if everybody starts scanning their surroundings, we will have a huge problem of data management and data storage. I work with the University of Malta; they are doing a full scan of the island of Malta using different technologies, from cell phones to LiDAR to drones, and they succeeded in capturing the whole point cloud of the island. At this stage, they really don't know what to do with it: they have terabytes and terabytes of data and they can't visualize it properly. So if we want to use this technology, which I think is becoming easier to use and gives very good results, we have a very big storage problem. I guess this is the main issue for a lot of innovative technologies: right now we don't have the computing power or the storage capacity to manage it. And if we look at the ecological side, it's not a good thing for our planet to build a thousand more data centers. We'll have to find a way to make this work if we want it to happen.

Interesting point.

To make a transition to my part: as you said in your previous topics, asset generation and 3D content generation are the main issue right now for building a beautiful metaverse, because we don't have enough 3D artists willing to spend that much time creating it. The best way to get something beautiful in 3D would be to use either 3D scans or generative AI. If you allow me, I'll just add my own topic and we can discuss all of them at once. On my part, it's not about 3D scanning, it's about environment generation, a GPT-like use. Let me start the video. It's from a company whose video is two or three months old, so I'm sorry it's not the newest news. The main use case is that you type what you want to see as an environment, just by describing it, and their application generates the environment for you. It's very convenient for developers like us. I'm not a very good 3D artist, and it was always a pain to create even a simple environment, let alone a more complex one; I always had to find someone to help me. This kind of tool would be very useful. And for people saying that the job of 3D artists is dead, I think it will instead evolve over the next few years. They still have their creative minds. If you put this kind of tool in anybody's hands, they will play with it, but creating something that makes sense requires a creative mind, and that is the expertise of a 3D artist. What do you think about it, Fabien?

I agree with what you said. Even for us developers, this is where the nature of the work is evolving as well. It's a really exciting way to create a virtual world.
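A quick back-of-envelope aside on the storage problem raised above with the Malta scan. The point density and per-point size here are assumed values chosen for illustration, not figures from the actual project:

```python
# Rough estimate of raw point-cloud size for an island-scale scan.
# Assumptions (not from the episode): ~15 bytes per point (XYZ as 32-bit floats
# plus three bytes of RGB) and an average density of 500 points per square metre.
bytes_per_point = 3 * 4 + 3          # 12 bytes of coordinates + 3 bytes of colour
points_per_m2 = 500
area_m2 = 316e6                      # Malta is roughly 316 square kilometres

total_points = points_per_m2 * area_m2
total_bytes = total_points * bytes_per_point
print(f"{total_points:.1e} points, about {total_bytes / 1e12:.1f} TB uncompressed")
# ~1.6e11 points, about 2.4 TB for a single pass; extra LiDAR attributes, repeated
# acquisitions, and derived meshes push this into the tens of terabytes quickly.
```

Even under these conservative assumptions, a single country-scale pass lands in the terabytes, which is why visualization and storage become the bottleneck long before capture does.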
Now with GPT-4, which was released yesterday, it will be easier to, for example, instance an avatar that has its own behavior, which you can also describe in text. So it's not only generating the environment, but also coding the interactions and how the virtual world behaves. Also, on the topic you mentioned about the impact on our planet, AI is a big consumer of power too. But yeah, I'm very excited about this technology. And what about you, Seb?

It's evolving in all directions; this one launched a couple of months ago. What is really exciting for me is that most of the time, when we try to build a game like that, we either buy assets that are made for games, or, if we have specific requests from our clients, we have to ask a graphic artist who specializes in games to make the models. And sometimes the client says, no, no, we have everything, and asks their own graphic artists to do it, but those artists work on films and movies, so the kind of models they are used to working with are not compatible with what we do. With this kind of technology, I hope, from what I've seen in the video, that it produces models and assets that are already compatible, maybe even generated to share the same textures and so on, so the game runs faster without the pain of setting everything up. That's a huge amount of time we can save. And like Fabien said, if we can add interaction, this starts to be a really powerful tool for creating content. And as you said, we then need creative minds able to challenge that kind of AI and make creative content on top of it, and to know how to write prompts for the specific tool being used, because each tool has its specific wording and key phrases that you have to learn. It's a new language you have to learn to get nice results. You can get something quickly, and get creative content that matches all the other content, but you really have to specialize and learn how to phrase things so the result is exactly what you want.

Yeah, we can already... sorry, sorry. English is becoming a programming language.

Yeah, I was just saying that we can already see that with Midjourney, for example: when it's an artist giving the prompt rather than... normal people, sorry... they specify the type of lens, the lighting, and the kind of style they want, and I guess they can achieve something awesome faster than anybody else. So that's what they would be paid for: bringing all those technical terms into the prompt to get the best result. But yeah, I think the more execution-focused part of the job, which may not be the most interesting part anyway, will probably disappear. And that's for the best, I guess, for the artists, because they can focus on what is fun for them. It's the same for developers. I'm drifting down this alley since Fabien brought up the subject, but for very repetitive, simple code, I guess AI would be perfect, and that's not bad news for us, because churning out purely technical code is really not the enjoyable part of our job; it's better when you're doing something creative and innovative. Maybe some developers who like this very structured, very mechanical kind of code will have to find another way to work.
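To make the avatar idea Fabien mentions at the top of this exchange a bit more concrete, here is a minimal sketch of letting a text description drive an NPC's behaviour. The HTTP endpoint and response shape are the public OpenAI chat-completions API as it existed at the time; the prompt, the JSON contract, and the helper function are invented for the example.

```python
# Illustrative only: an avatar whose behaviour is "coded" by a natural-language prompt.
import json
import os
import urllib.request

SYSTEM_PROMPT = (
    "You control a shopkeeper avatar in a small VR plaza. "
    'Reply with JSON only: {"say": <string>, "animation": one of ["wave", "idle", "point"]}.'
)

def ask_avatar(player_utterance: str) -> dict:
    """Send the player's line to the model and get back what the avatar should do."""
    body = json.dumps({
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": player_utterance},
        ],
    }).encode()
    request = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(request) as response:
        reply = json.load(response)["choices"][0]["message"]["content"]
    return json.loads(reply)   # e.g. {"say": "Welcome in!", "animation": "wave"}

# The game loop would then play the returned animation and display the dialogue line,
# so the avatar's behaviour is literally specified by the prompt text above.
```

The same pattern extends from one avatar to the interactions of a whole scene, which is the direction the conversation points at.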
But I don't think this is as apocalyptic as it is being painted right now, with everybody panicking that developers, artists, writers, and even teachers will disappear in the very short term. As always, innovation brings change, but it's not a bad thing for people: we just have to adapt, and we are very good at adapting. So we'll find new ways for all these people to work differently. Mine would be to build bridges between the different AI tools.

Yeah, I'm a bit less optimistic, because indeed, every time there was a new technology, like television, or radio before that, there were changes in society. But with AI the speed is so high that I'm a bit more worried than you, I would say, about the impact it will have and how fast we will have to adapt. But I think that's also personal and up for debate, I guess.

Well, the main issue with AI is the ethical part, which, as you were saying, is being brought to the table very fast, because people are realizing that AI can cause problems there. One article I read about AI that I found very interesting said that maybe the danger of using AI too much is that it limits our capacity to reflect: we start assuming the AI is always right, stop thinking for ourselves, and say, it said so, so it must be true. And the danger is that we shouldn't forget that AI is based on learning, and we can make it learn whatever we want it to learn; it's not a global knowledge. If you want your AI to think in a specific way, you just have to feed it specific content, and it will give it back. So either everybody ends up thinking the same way, or the AI makes you think in a way that was chosen for you. That's the ethical side that is frightening right now. On the technical side, as I said, we are in the emotional phase: on the adoption curve we are at the peak, everybody is crazy about it and talking about it, and you can't go to a meeting without someone bringing ChatGPT to the table. So I guess we are in the emotional phase, and we just have to calm down. Everybody is playing with ChatGPT right now and finding it funny or interesting or whatever, and maybe in a few months they'll lose interest and we can approach things in a more stable, calmer way.

Yeah. I think the peak is still not at its highest point; I think it will go higher. It's moving so fast, and we see bridges to other technologies and new demos everywhere, and there's still a lot that can be done. So I don't think we are at the peak yet; there is still a lot to do with ChatGPT and the new AI systems that are coming. So can we even reach a peak? That's the question. Or will something new arrive every month that pushes the peak, and the expectations, even higher? We'll see.

That's possible, yeah. That we reach a singularity, an exponential peak. And then the president is an AI, the world leader. Some journalists talked about this in Canada: they said, well, we can cut the parliament in half, because half of them are not doing so well anyway, so we can replace them with an AI and it will be the same. It was a joke, of course. I saw another thing: a completely generated conversation between the Prime Minister of Canada, Justin Trudeau, and Joe Rogan, which is not the kind of interview he would normally do. But the whole thing was generated with their voices and their ways of thinking.
And it made complete sense. If you listened to that audio, it sounded completely natural; nothing in it would make you think it was fake. And that's a danger as well. We can see the ethical problem: someone could make a fake Joe Biden press conference announcing something terrible, and people would believe it. That's a very dangerous way of using AI.

Yes, the deepfake. Deepfakes have become very, very real. I guess this is our mistake as well, because we saw it coming over the last ten years: we saw AI becoming more and more capable, and people just let it go without considering that it could accelerate very quickly at some point.

Yes, there are AI alignment companies and research laboratories working on this, but they are underfunded compared to the billions that OpenAI and the others have. I think we could talk about that for a very long time. It's turning into an unhealthy competition: if one does it, then the others have to do it too. We'll see. We'll end this stream here and continue the conversation next week. Thank you all for your great topics and the discussion.

Lost In Immersion

Episode #3



Credits

Podcast hosted by Guillaume Brincin, Fabien Le Guillarm, and Sébastien Spas.
Lost In Immersion © 2023