The State of AI with Rowan Cheung

Mark Zuckerberg on AI Glasses, Superintelligence, Neural Control, and More

The Rundown AI

Meta just unveiled its latest AI glasses at Connect 2025 – including Ray-Bans with a built-in display, controlled by a band that reads muscle signals.

In this exclusive conversation, Rowan Cheung (@rowancheung) sat down with Meta CEO Mark Zuckerberg (@zuck) to unpack:

  • The timeline for glasses replacing smartphones
  • How the Neural Band will personalize to you over time
  • Building "personal superintelligence" while raising kids
  • Why Zuck rebuilt Meta's AI lab within 15 feet of his desk



Join our daily AI newsletter: https://www.therundown.ai/subscribe
Learn AI hands-on with our AI University: https://rundown.ai/ai-university/

Mark Zuckerberg: Somewhere between 1 and 2 billion people in the world have glasses for vision correction today. Within five to seven years, is there any world where those aren't all replaced with AI glasses? The Neural Band is going to be personalized to you over time. It's not actually picking up motion; it's picking up your muscles firing. A lot of people who are designing technology make something that's kind of bulky, add a bunch of cool functionality to it, and then wonder: why don't people want to wear this? It's kind of a mantra internally that we have, this idea that AI is going to be the most important technology in our lives. We want to make sure that AI is in service of people, not just something that sits in data centers and automates stuff.

Rowan Cheung: There are rumors that you were personally going founder mode, calling researchers. What drove the decision for this?

Mark: Llama 4 was not on the trajectory that I thought it needed to be on. Why am I going out and meeting all of the top AI researchers? Well, I want to know who the top AI researchers are, and I want to personally have relationships with them. I moved everyone who sat around me, and now the lab is there. So, I mean, I'm pretty hands-on. Every time I think of what a milestone would be in AI, they all seem to get achieved sooner than we think.

Rowan: Thanks so much for being here.

Mark: Yeah, good to see you. Thanks for doing this.

Rowan: So today we're talking everything Connect 2025. Can you give us the rundown of everything you're announcing and what you're personally most excited for?

Mark: Yeah. The main things we announced at Connect were our fall 2025 line of glasses, and you have them here, so we can just go through them. The first one is the next generation of Ray-Ban Meta. That's the classic AI glasses that we've shipped so far; they're some of the fastest-growing consumer electronics of all time, and we're very happy with that. A lot of the improvements for these: we doubled the battery life, we now have 3K video resolution for capture, and we're introducing new AI features like Conversation Focus, which, if you're in a loud place, basically lets you turn up the volume on the friends you're talking to. So if you're in a restaurant or something like that, I think that's going to be really neat.

Then we've got this guy, the Oakley Meta Vanguard. This is our second collab with Oakley. It's more of a performance glasses, designed for a number of things around sports. You've got the camera that's centered, which is great for alignment, a wider field of view, louder speakers, and they're water resistant. They can also connect with your Garmin watch, so you can be running a marathon and tell it to capture a video every mile, and at the end it'll produce a video for you where it stitches them all together and puts your stats from Garmin on top of it. So I think that's a pretty neat thing.
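That capture-and-stitch flow maps onto a simple video pipeline. Here is a minimal sketch using ffmpeg; the clip names, pace values, and filter settings are illustrative assumptions, not Meta's actual implementation:

    # Sketch: stitch per-mile clips and burn in watch stats with ffmpeg.
    # Requires ffmpeg on PATH; clip names and paces are hypothetical.
    import subprocess
    import tempfile

    clips = ["mile_01.mp4", "mile_02.mp4", "mile_03.mp4"]  # one capture per mile
    paces = ["8:05 /mi", "7:58 /mi", "8:12 /mi"]           # stats from the watch

    labeled = []
    for clip, pace in zip(clips, paces):
        out = clip.replace(".mp4", "_labeled.mp4")
        text = pace.replace(":", r"\:")  # colons are special in filter syntax
        subprocess.run([
            "ffmpeg", "-y", "-i", clip,
            # drawtext burns the pace into the corner of the clip
            "-vf", f"drawtext=text='{text}':x=20:y=20:fontsize=36:fontcolor=white",
            out,
        ], check=True)
        labeled.append(out)

    # Concatenate the labeled clips into one recap video (stream copy works
    # here because every clip was just re-encoded with identical settings).
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.writelines(f"file '{c}'\n" for c in labeled)
        concat_list = f.name

    subprocess.run([
        "ffmpeg", "-y", "-f", "concat", "-safe", "0",
        "-i", concat_list, "-c", "copy", "marathon_recap.mp4",
    ], check=True)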
Mark: But I think the most interesting thing by far that we announced is this one, which we call Meta Ray-Ban Display. That's because it's the first pair of glasses we've shipped with a high-resolution display in them. And the other big breakthrough is that they pair with this guy, the Meta Neural Band, which is the first mainstream neural interface that we're shipping as the way to control them. I think that's pretty neat. Basically every computing platform has its own new input method, right? When you went from computers with the keyboard and mouse to phones with touchscreens, you got a completely new input method. The same, I think, is going to be true for glasses: having a neural interface where you can just send signals from your brain with micro muscle movements like this, which is basically about as much movement as you need to make, and you can enter text neurally and do all the fun stuff that you got a chance to try out. I think this is a pretty big breakthrough, so I'm very excited about it. But overall the whole lineup is good, so it was a fun Connect.

Rowan: So you have the Ray-Ban Metas for kind of every day, the Oakleys for athletes, and now the Display for power users. How do all these glasses tie into that personal superintelligence vision?

Mark: Our theory is that glasses are the ideal form factor for personal superintelligence, because they're the only real device you can have that can see what you see, hear what you hear, talk to you throughout the day, and generate a UI in your vision in real time. There are other devices people have; obviously you can have some AI on your phone, some on your watch, some of it if you just have an AirPods-type thing. But I think glasses are going to be the only thing that can do all of the pieces I just said, visual and audio, in and out. And that, I think, is going to be a really big deal.

Rowan: I got to try them for this interview, and I have to say there were way more use cases than I initially thought there would be. The AirPods, for example, have live translation, but this is live translation plus the captions. There's so much more.

Mark: Yeah, the subtitles are awesome.

Rowan: So I'm curious, what are your favorite use cases?

Mark: The thing I really had in mind when we were designing them is just sending messages. This is the thing I think we do most frequently on our phones. I would guess we all send dozens of messages a day. Actually, I don't really need to guess: we run a lot of the messaging apps that people use, so we know people send dozens of messages a day, and we wanted to make that experience really great. So now with the glasses, a friend texts you and it shows up in the corner of your eye for a few seconds. It's off-center, so it doesn't block your view, and it goes away quickly. It's not distracting, but if you want, you can respond as easily as moving your finger like this. I'm up to about 30 words a minute typing with the neural text entry, so I think this is great. One of my mental models for this: in-person conversations are awesome, but the one thing I like about Zoom is that you can multitask, right?
If we're having a conversation and you say something I want more context on, or you remind me that I should go do something, or you bring up a person I've been meaning to reach out to, in a live conversation I'm not going to do that right then, and the chances are I'll have forgotten by the end of the conversation. On Zoom, I follow up on most of those things now; you can just text someone off to the side, and that's fine. This, I think, is going to bring that to physical conversations, without breaking your presence. It's not rude; you're not breaking out a phone. You can continue to really pay attention to the person, and it's just a very quick gesture with your wrist. I keep having this phenomenon where there's some information I want in the middle of a conversation. Normally I would have had to wait until the conversation was over, text or call someone to get the information, and then go back to the person I was talking to. Now I just get the information in the middle of the conversation: I text the person, the information comes in, and you can have a much better and more informed conversation. So texting, I think, is really going to be the big thing.

Rowan: So how does the texting feature work? I know you can talk to the AI, but you're saying you can actually write it down and it automatically goes to the glasses?

Mark: Yeah, with the Neural Band. You can think about it as if you're holding a small pencil, and the motion you would make to write out certain letters. But then the Neural Band is going to be personalized to you over time. It's going to learn how you make a W or a J differently from someone else, and you will kind of co-evolve with it, so it learns the most minimal version of you making a J, or whatever. With that feedback loop, you'll be able to make increasingly subtle movements over time, because it's not actually picking up motion; it's picking up your muscles firing. Over time, I think you're basically going to be able to just move your muscles in opposition to each other, not even really move your hand, and be able to send messages or control a UI with your hand at your side, in a jacket pocket, behind your back. This isn't hand recognition; it's not the cameras on the glasses looking at what your hand is doing. It's this band picking up micro signals that your brain is sending to your muscles before they even move. So you don't actually have to move; you just need to send the signals, and it picks them up.
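Meta hasn't published how the Neural Band decodes those signals, but surface-EMG gesture recognition is a well-studied pattern: window the multi-channel signal, extract simple amplitude features, and classify. Here is a minimal sketch with scikit-learn on synthetic data; the channel count, window size, gesture set, and feature choice are all illustrative assumptions:

    # Sketch: classify windows of multi-channel surface EMG into gestures.
    # Synthetic data stands in for real wristband recordings.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    N_CHANNELS, WIN = 16, 200  # 16 electrodes, 200-sample windows
    GESTURES = ["rest", "pinch", "swipe_left", "swipe_right"]

    def rms_features(window: np.ndarray) -> np.ndarray:
        # Root-mean-square amplitude per channel, a classic EMG feature:
        # muscle firing raises RMS on the channels over the active muscles.
        return np.sqrt((window ** 2).mean(axis=1))

    # Fake training set: each gesture activates a different group of channels.
    X, y = [], []
    for label in range(len(GESTURES)):
        for _ in range(200):
            window = rng.normal(0.0, 0.05, (N_CHANNELS, WIN))  # baseline noise
            window[label * 4:label * 4 + 4] += rng.normal(0.0, 0.5, (4, WIN))
            X.append(rms_features(window))
            y.append(label)

    clf = LogisticRegression(max_iter=1000).fit(np.array(X), y)

    # A new window that activates the "pinch" channels is decoded correctly.
    test = rng.normal(0.0, 0.05, (N_CHANNELS, WIN))
    test[4:8] += rng.normal(0.0, 0.5, (4, WIN))
    print(GESTURES[clf.predict([rms_features(test)])[0]])  # -> pinch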
Rowan: So the wristband reads the electrical signals before you even move. Why is that important over just adding buttons to the glasses, or improving the voice commands?

Mark: Well, for voice, a lot of the time you're around other people, right? I just think that you want a way to control your computing device that is private and discreet and subtle. One of the things we thought about was whether we could get whispering to work. Because, by the way, voice does work too: you can control this stuff with voice, you can do voice dictation, you can talk to the AI with voice. But I just thought there needed to be a way on top of that, for when you want to do it more subtly and privately. We could be having this conversation, I could have the glasses on, and a message comes in. We keep talking, and meanwhile I'm writing a reply, and it's sent. Done. In that time I could have sent a ten-word message, and that would be fine. Then my response comes back in, maybe I write another reply, and done. I think that's going to be awesome.

As for buttons, yeah, that works too. You can take a photo by tapping the button on any of these, hold it down to take a video, skip to the next track with a swipe, or start and stop music by tapping on it. But the point is, you don't necessarily want that to be the only way to do it. Now, with the Neural Band, I can also just start and stop music with my hand at my side. It's a very subtle gesture; you can't even tell, right?

Rowan: Yeah. We were joking in the demo room that right now, when you go to class, the teachers say "put your phones down" because they can see you, and students are just checking their phones, distracted. But now it's going to be "hey, keep your thumbs out," almost.

Mark: I think it'll be "take your glasses off," or "take your glasses off and show your thumbs." Maybe both.

Rowan: So looking ahead, I guess what you're saying is typing and speaking might not be enough to give the AI what it needs, and now we have electrical signals we can also give to it?

Mark: Yeah, for controlling the UI.

Rowan: Do you think of the Neural Band as almost a new interface to AI?

Mark: Yeah, I think it's the AI and it's the glasses. There are some parts of the glasses where you just have a menu of your apps, and you can swipe through it very subtly with your hand. This is basically what the interaction looks like: swipe a couple of times and I've brought up the camera; now I have a viewfinder so I can see exactly what the image is; I want to zoom in, so I just turn my finger a little bit. Actually, one of the coolest is when you're playing music: the way you change the volume is you pretend there's a volume dial and you just turn your wrist. It feels like magic; it's really fun. And the way you bring up Meta AI anywhere is you just tap twice. But tiny, right? It can be at your side; no one would even notice. So yeah, I think this is just going to be by far the most intuitive way. But if you don't have it, you can control them with voice.
You can control them with the buttons. There are different ways to do it. So it's going to be both the AI and the UI of the glasses, and potentially, over time, other things beyond glasses that you'll be able to control with the Neural Band.

Rowan: That's really fascinating. How long did it take you to pick up the controls for the Neural Band?

Mark: Not that long; it's pretty intuitive. The way to think about navigating is that it's kind of like a D-pad: you swipe your thumb right, left, up, down. You bring up the menu by tapping twice, you bring up Meta AI by tapping twice, and you enter text as if you have a pencil. The trick is that when you're learning up front, the motions are a little bit bigger, and then very quickly, within a few days of using it, you get to very subtle movements. And the reality is that a lot of the optimization isn't going to be on our side or your side; it's going to be the machine learning system, over time, getting personalized to you and getting better. The times when the neural text entry is slow are when I make a mistake and the autocorrect doesn't fix it, so I have to go back and manually re-enter the text. But this is just standard machine learning stuff: as the system gets better and learns your patterns, it'll be able to autocomplete more of what you're doing, so you won't even have to make all the motions.
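That adapt-to-the-user loop, folding corrections back into the decoder, is the kind of online learning that can be sketched in a few lines. A toy version with scikit-learn; the feature size, letter labels, and generic pre-trained starting point are illustrative assumptions:

    # Sketch: per-user adaptation of a neural text-entry decoder.
    # Each time the user corrects a mis-decoded letter, fold the corrected
    # example back in with an online update, so the model drifts toward
    # this user's way of forming that letter.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    N_FEATURES, N_CLASSES = 16, 26  # e.g. per-channel RMS -> letters a..z
    rng = np.random.default_rng(1)

    # Stand-in for a generic decoder pre-trained on many users.
    decoder = SGDClassifier(loss="log_loss")
    decoder.partial_fit(rng.normal(size=(500, N_FEATURES)),
                        rng.integers(0, N_CLASSES, 500),
                        classes=np.arange(N_CLASSES))

    def on_user_correction(features: np.ndarray, true_letter: int) -> None:
        # One online gradient step on the corrected example.
        decoder.partial_fit(features.reshape(1, -1), [true_letter])

    # e.g. the band decoded a 'j' as 'i'; the user fixes it, the model adapts.
    on_user_correction(rng.normal(size=N_FEATURES), true_letter=9)  # 9 == 'j'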
Rowan: So you've been calling glasses the next platform for years, but the original Ray-Ban Metas really took off with just audio and AI. What did you guys stumble on there?

Mark: We have a few principles in building glasses, and there are three we talk about the most. The first is that they have to be great glasses first. Before you even get to any of the technology: if you're going to wear these on your face all the time, they need to be good-looking and they need to be light. A lot of people who are designing technology make something that's kind of bulky, add a bunch of cool functionality to it, and then wonder why people don't want to wear it. Well, most of the day people aren't taking photos or something; most of the day it's just there. The magic of these is that they're good-looking, comfortable glasses that you want to wear, and then you have them on your face, so when you do want to listen to music or call the AI or whatever, they're available to do that. So, good-looking glasses first.

The second big principle is that the technology needs to get out of the way. A lot of people, when they're designing technology, have this impulse to add a lot of flourishes and make the thing they've built center stage. One of the things I keep pushing back on with our team is: no, especially if you're building glasses, it is extremely important that whatever technology you have is there when you want to use it and then gets out of the way. You don't want random stuff in your vision. I don't care how cool an animation the design team made; we want this to be super minimal. If a message comes in, you see the message, and it goes away when you don't want it. I think the classic design just distilled those principles, and it was kind of a hit for those reasons.

The third principle, for what it's worth, is to take superintelligence seriously. It's kind of a mantra internally that we have, this idea that AI is going to be the most important technology in our lives, and we want to make sure that AI is in service of people, not just something that sits in data centers and automates stuff. So we designed these, even the first version, to take easy software updates and improvements like the Conversation Focus thing we talked about. It can be this great accessibility feature: you're in a loud place, and you can turn up the volume on a person. It's kind of crazy. So you build in the sensors you think the AI is going to need over the next couple of years, and then, as you get the software working, you can just update it, so we can bring the latest AI technology to people and make the glasses smarter with a software update.

Rowan: I'm a big fan, so a personal question: how far away do you think we are from these glasses being so good you just replace your phone?

Mark: I think about it a little bit differently. What is your main computing device right now? Phones are your main computing device; they're mine for sure. But I didn't get rid of my computer; I just use it less. A lot of the time I'm even at my desk with my computer right there, but if I want to go do something, I take out my phone, because that's my primary device. I think that's what's going to happen here. I don't think we're going to get rid of our phones anytime soon; I just think our phone is going to stay in our pocket more, or in our bag. Even very subtle things: I don't know how many times a day I used to look at my phone to see what time it was. Now I don't do that when I have the glasses. You just tap quickly, the clock comes up, done. I don't need to take out my phone to look at the time, and now I don't need to take it out to look at messages. And with these, the camera's great, but now with the viewfinder I know exactly what I'm capturing. So you just go use case by use case, and all these things I used to take out my phone for, I now don't. I think that's the way it's going to go, but I would be surprised if we get rid of our phones anytime in the next five years. I just think we'll slowly be using the phone less and less.

Rowan: Yeah. The GPS feature is another one: instead of having to look at my phone constantly, I can just have the navigation in my glasses, guiding me where I want to go. The translation was even useful. There's a lot there, and I'm curious to see how other people are going to use it. But something I think people are going to be really curious about is the metaverse. How much of that vision is still distilled in these products?

Mark: Well, I think we're getting closer to it. The metaverse is obviously a very visual experience; it's about presence.
There are some parts of it, like 3D audio, that you can already get, which is great. But now you're starting to get the display. And then the Orion prototype that we showed last year has a wider field of view; it'll be a more expensive product when it's ready, but with that you're basically going to be able to have a hologram. This product can give you context: it's enough to watch a video or have a message thread. But especially because it's monocular, it's not putting 3D objects in the world. So there are still things around delivering the sense of presence, like you're there with another person, that I think are going to be really magical, and when we get the consumer version of Orion, you'll really be able to start doing that. So I think we're inching our way there. But this is a big breakthrough, the first high-resolution display with the Neural Band, and it's going to be a good step for learning how that goes. The vision is that all of the immersive software around presence that we have for VR, we would like a version of that to run on glasses too.

Rowan: I want to switch gears to the Vanguards. They're really stylish; I love the design. I go on runs and cycle, so I'd be using those. How did you build these with athletes in mind?

Mark: Well, a lot of people on the team are pretty intense athletes, and some of it is that we just used the Ray-Bans. I can't tell you how many pairs of Ray-Bans I fried taking them surfing. It was worth it, but they're not water resistant. So some of it is pretty obvious: okay, we want something that's water resistant. The extra power in the speakers, these are six decibels louder, is really valuable and helpful when you're doing loud things, like if you're cycling at 30 miles an hour and it's windy. A few weeks back I was taking a call on a jet ski, and I could hear the person completely fine over the engine. And we have this advanced noise reduction: we designed these with the microphone in the nose pad, which basically means you could be in a wind tunnel and the other person on the phone call would not be able to hear any of the background noise; you just come in crystal clear. I think stuff like that is awesome. And then there's all this other stuff you want for sports. The video stabilization is good. The pairing with Garmin and Strava is good. We put an LED in the corner that can light up to give you a reminder, to make sure you stay on the pace you want or in your target heart rate zone. So there are a lot of things like that which I think are going to be pretty neat for athletes.

Rowan: Fascinating. Why do you think it was important to design so many different glasses for different lifestyles, and how much broader do you expect to go from here?

Mark: Well, people have different styles, right? This isn't like a phone, where everyone's going to be okay having the same thing and maybe a slightly different color case, or a different sticker on the case, and then I'm good.
People have different face shapes, different esthetics; people wear different kinds of clothes. I think this is an important part of our identity and sense of style. And some of it is functional too: some of it is more lifestyle, some of it is more active. Part of it is that some people prefer thinner frames and some people prefer thicker frames. You will always be able to fit more technology into thicker frames, so as we get to more advanced displays and holograms, those will always be a little thicker than what you can do with the thinner ones. So there's a choice people will make: do I prefer the esthetics and simplicity of the thinner ones, or the esthetics of the thicker ones? Fortunately, kind of big glasses are in style, so that works. So there's that whole spectrum, and then there's also the price point and affordability. With less technology, you're always going to be able to offer a more affordable price, and that's going to be great too. Somewhere between 1 and 2 billion people in the world have glasses for vision correction today, and, I don't know, within whatever it is, five to seven years, is there any world where those aren't all replaced with AI glasses? It feels to me like when the iPhone came out and everyone had flip phones: it was just a matter of time before they were all smartphones. I think that's going to be the case with glasses too. But people have different glasses, and they want different glasses at different price points and styles. So our goal is to work with a lot of iconic brands and do some great work.

Rowan: On the Vanguards, you mentioned the Garmin integration, which I thought was fascinating, because it basically allows the AI to see your heart rate, pace, location, everything, all hands-free while you're running or cycling. How long do you think it will be before we have a personalized coach that's proactively there in your ear?

Mark: It can do that a bit already. For running, for example, if you want to ask it what your heart rate is or what your pace is, it can answer now. I don't know how hard you run, but when I'm running for performance, I don't really want to talk to something; you're probably above that heart rate zone. But there are times when I'm skiing really fast and I want to know how fast I'm going, because that's kind of fun, and you're not necessarily out of breath doing that. So part of it is being able to talk to it and have it give you the stats in real time. That's one of the reasons we built the LED in: we wanted a very simple visual indicator of, okay, am I on my pace target, am I on my heart rate target? And you can get a sense that, over time, it's going to be pretty good to have a display in those too.
Rowan: Yeah, the real-time coaching. For example, if I'm running and I want to stay in zone 2, which is below a certain heart rate, I don't want to go too hard, because I want to stay in the zone. I guess my question is, will I be able to have a coach that's telling me, "hey, slow down a little bit, you're going too fast"?

Mark: That's the goal of the LED, to start. And if you're trying to stay in zone 2, you probably can't talk to it anyway; that's kind of the definition of zone 2, right? You should just barely be able to. Well, I don't know if it's the definition, but it's a decent proxy. So you can ask it for different stats, and it knows them, if you're connected to Garmin.
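The LED cue Zuckerberg describes reduces to a small piece of threshold logic over a live heart-rate stream. A minimal sketch; the zone percentages and the color scheme are illustrative assumptions, not Meta's or Garmin's actual behavior:

    # Sketch: the threshold logic behind an "am I in zone 2?" LED cue.
    # Zone percentages and colors are illustrative, not Meta's or Garmin's.
    from dataclasses import dataclass

    @dataclass
    class HRZone:
        low: int   # bpm
        high: int  # bpm

    def zone2(max_hr: int) -> HRZone:
        # Common rule of thumb: zone 2 is roughly 60 to 70 percent of max HR.
        return HRZone(low=int(0.60 * max_hr), high=int(0.70 * max_hr))

    def led_color(current_hr: int, zone: HRZone) -> str:
        # Map a live heart-rate reading (streamed from a paired watch) to a cue.
        if current_hr > zone.high:
            return "red"    # above zone: ease off
        if current_hr < zone.low:
            return "blue"   # below zone: pick it up
        return "green"      # in zone: hold pace

    zone = zone2(max_hr=190)     # -> HRZone(low=114, high=133)
    print(led_color(128, zone))  # -> green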
Rowan: I know you've been training a lot. Have you found any good use for them in your training, almost like a coach?

Mark: Well, I take them surfing and they don't break, which is nice. The Ray-Bans are good, but like I said before, I've fried a few pairs of those.

Rowan: And they capture moments in real time too, right?

Mark: Yeah. One of the nice things about this one is that because the camera is centered, it's really perfectly aligned; the image and the video feel like they're coming exactly from your perspective. It's nice even when the camera's off to the side, but these things about delivering a sense of presence have this quality where, when you get it to be perfect, it really feels like it clicks. If you're two degrees away, it's still pretty valuable, but there's something about having it in the middle that I think is really quite compelling.

Rowan: And the wide field of view. I'm excited to try them out. So, you're raising three kids while building superintelligence. I'm curious what conversations you're having right now about the world they're about to grow up in.

Mark: Well, it comes up in a bunch of different ways. In our family, we build a lot of robots. I'm really into 3D printing; I think it's a really fun DIY hobby, and my daughters are really into it. They're making stuff, and it kind of got to the point where I said, all right, you need to design your own stuff; I'm not just going to print things we find on the internet. So now they're using all the different tools to do that, and I work with them on it. It's a fun hobby, but in a bunch of ways I think of robotics as the thing after AI. Obviously they'll develop contemporaneously to some degree, but I think we're going to have a really smart, kind of intellectual AI, and then you get robotics, which is going to be the fully embodied version of it. There's not that much kids can do at their age toward developing AI, but they can make stuff, and they enjoy it. Kids love 3D printing, and it's not very complicated. So we have this project going on right now. First I just ordered a bunch of robots from the internet that you could assemble, the starter packs and things like that. And the next one is basically designing our own: 3D print the shell, do a version with the Raspberry Pi or the Jetson, and get that to work pretty well for running language models and doing interesting stuff with it.

Rowan: That's a step up over Legos. That's fun.

Mark: Yeah. It's nice because with Legos you run out of blocks, but with 3D printing you don't run out.

Rowan: Are there any specific traits or skills you're trying to teach your kids? For example, Demis Hassabis said learning how to learn is a really important trait he thinks about teaching.

Mark: I mostly try to teach our kids to be good people. There's all the intellectual stuff we're talking about, what's interesting about robots or whatever, but I think being caring and kind and things like that are really important, so a bunch of values-type things. But yeah, I agree with Demis's answer too. It's less about learning a specific thing and more about learning how to go deep on a thing. A lot of people say some variant of "you want to learn how to learn," but okay, how do you do that? My theory on how you do that is basically a very depth-first approach. Rather than trying to have some conceptual framework, you complete a project. You will learn a lot by building a robot; whether you actually care about robotics after that is completely incidental. You took a problem and decomposed it into all these things you needed to figure out: the design for the 3D printing, getting the printers to work, getting the Raspberry Pi to work. It's like, okay, we're learning a lot about wheels right now: are we going to do treads or wheels, and how many wheels? You're learning how to decompose problems and how to debug things, and you learn that by doing. So that's my philosophy on it. And I think it's just about doing stuff together where we have common interests. Sometimes we build a robot, sometimes we watch KPop Demon Hunters. You've got to have fun.
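Running a language model on a Raspberry Pi class board, as in the robot project described above, is feasible today with a small quantized model. A minimal sketch using the llama-cpp-python bindings; the model file, prompt format, and parameter choices are illustrative assumptions:

    # Sketch: a small quantized language model running locally on a
    # Raspberry Pi class board via llama-cpp-python. The model file name
    # is a placeholder; any small GGUF model that fits in RAM will do.
    from llama_cpp import Llama

    llm = Llama(
        model_path="tiny-model-q4.gguf",  # hypothetical quantized model file
        n_ctx=512,                        # short context keeps memory use low
        n_threads=4,                      # Pi 4/5 have four cores
    )

    def robot_reply(heard: str) -> str:
        # Turn a transcribed utterance into a short reply for the robot.
        out = llm(
            "You are a small, friendly robot. Reply in one sentence.\n"
            f"Human: {heard}\nRobot:",
            max_tokens=48,
            stop=["\n"],
        )
        return out["choices"][0]["text"].strip()

    print(robot_reply("What are you?"))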
Rowan: Okay, so when we do achieve superintelligence, these glasses are going to see everything we see, hear everything we hear, and now even some of our muscle impulses. What human traits should we fight to preserve, and what should we just let go?

Mark: Well, I'm not sure I agree that they're going to see everything. Or maybe there will be some sensor that does, but they're not necessarily going to retain it. I'm not sure you want them to; that's a whole separate direction we could go into. But being able to pick out the salient bits is important, and giving people control over that is important. That's not really the core of what you're asking, though. To me, creativity is very important: having a sense of what you want to make in the world, and an idea of how you get there. It's some of the themes we just talked about: how do you decompose that, what are the steps, what are the tools that are going to be most helpful? Part of the job of a creator or a builder is to be a kind of master of the tools available to them. If you're not, then you're not going to be at the edge of your work; it's a competitive world. And I think all of the AI progress has been very fascinating, because people have sort of conflated intelligence with an intent or a desire to do something. Part of what we've seen so far from the AI systems is that you can actually separate those two things. The AI system has no impulse or desire to create something; it sits there and waits for you to give it directions, and then it can go off and do a bunch of work. So the human piece at the end is going to be: what do we want to do to make the world better? Some of that will be a personal creative manifestation of what you want to build. But some of it, and I think in our industry we probably underplay this a bit, is just caring about other people, taking care of people, spreading kindness. I think that stuff is really important too, and I think AI will help with that as well.

Rowan: This conversation wouldn't be complete without the superintelligence labs. You've been on a spree over the past couple of months, all over the news.

Mark: Yeah, it's been busy.

Rowan: When did you decide to make that change? From the outside, there are rumors that you were personally going founder mode: calling researchers, emailing, all that stuff. What drove the decision?

Mark: It was largely that Llama 4 was not on the trajectory that I thought it needed to be on. Llama 4 was in many ways a big improvement over Llama 3, but we weren't trying to be a bit better than Llama 3. We're a frontier lab; we want to be doing leading work. So I felt like I had learned enough about how to set up a lab that I wanted to reformulate the work we're doing around a few basic principles. One is a huge focus on talent density. You don't need many hundreds of people; you need 50 to 100 people, a group science project, who can basically keep the whole thing in their heads at once. Seats on the boat are precious: if someone isn't pulling their weight, that has a huge negative effect, in a way that it doesn't for a lot of other parts of the company. So this absolute focus on having the highest talent density in the industry drove a lot of what you heard about. Why am I going out and meeting all of the top AI researchers? Well, I want to know who the top AI researchers are, I want to personally have relationships with them, and I want to build the strongest team that we can. That was a big part of the focus. There are other parts too. When you're doing long-term research, one of the principles we have for the lab is that we don't have deadlines that are top-down. It's research; you don't know how long the thing is going to take. All the researchers are competitive; they all want to be at the frontier doing leading work. So me setting a deadline for them isn't going to help.
It doesn't help them at all; they're moving as fast as they can to understand what we need to solve the problem. We also organize the lab to be very flat. We don't want layers of management that are not technical. The issue is that people start off technical, then they go into management, and six months or a year later you still think you're technical, but you actually haven't been doing the work for a while, so the knowledge slowly, or quickly, decays in an environment that's moving this fast. So anyway, those are the basic principles. But taking a step back, I think AI is going to be the most important technology of our lifetimes. Building the capacity to build absolutely leading models is going to be a really critical and very valuable thing for unlocking creativity and building a lot of awesome things. So I'm absolutely going to focus and do the things we need to do to make sure we can continue to do that.

Rowan: How hands-on are you still with the lab?

Mark: Well, I'm in the lab. I moved everyone who sat around me, and now the lab is there. So, I mean, I'm pretty hands-on. The chief scientist sits right next to me, and a lot of the other teams sit within 15 feet of me. But look, I'm not an AI researcher, so it's not like I'm in there telling them what research ideas to pursue. The things I can do as CEO of the company: I can make sure we have the very best people and talent density, and I can make sure we have by far the highest compute per researcher, and that we do whatever it takes to build out all that capacity. You probably saw the announcement that we're building this Prometheus cluster, which I think is going to be the first gigawatt-plus single contiguous cluster for training, coming online next year. We're building a cluster in Louisiana that's going to be five gigawatts, and we're building several more clusters that are going to be multiple gigawatts. That obviously takes some conviction; we're talking about many hundreds of billions of dollars of capital, so you both need a good business model that can support it and you need to believe in it. So that's one thing. And then there are all these other parts of running a company. Big companies have positives and negatives, and it's sort of my job to make sure we channel all the positives we can toward this effort, and clear as many of the things that would otherwise be annoying about running a company out of the way. When we build something awesome, we'll get it into our apps and out to billions of people quickly; that's a good thing you can do inside a big company. And whatever the other, more annoying things are, it's sort of my job to clear them out of the way. So how do I do that? Well, I want to make sure that I know the researchers well, and that when they have an issue, they feel comfortable texting me or just walking over to my desk.
I think that's actually part of running the whole thing well: I have that rapport with them, and that way I can be in the information flow, understand the issues they're having, and go solve them for them.

Rowan: So it's building a culture where everything is open, everything moves fast, and you're personally talking with all these researchers.

Mark: Yeah. I mean, I'm doing other stuff for the company too, obviously. But yes, I'm just trying to make sure we move as quickly as we can to get the researchers what they need to do the best research in the industry.

Rowan: Last question. If we do achieve superintelligence, whenever that is, what are the plans to integrate it into existing products?

Mark: I think when you have superintelligence, the nature of what a product is will change pretty fundamentally. Think about our products today: a lot of the big ones, Instagram or Facebook, or our business model around ads, are basically already these very large transformer systems. They're recommendation systems, not language models, but they're already big, massive-scale machine learning systems. In the future, instead of having a recommendation system that has some kind of basic understanding of what you might care about, and can therefore spit out an idea for a story which the app then renders, I just think you'll have models that you interact with directly: they'll recommend content, they'll generate content, and you'll be able to talk to them. This stuff will be much more integrated. And I think that culminates in the glasses version, where eventually you get this always-on experience. You can control when it's on and off, but it can be always on if you want, where you just let it see what you see and hear what you hear, and it can go off and think about the context of your conversations and come back with more context or knowledge it thinks you should have. When you need an app, it can just generate the UI from scratch for you, in your vision. I'm not sure how long it's going to take to get to that. I don't think it's five years; I think it's going to be quicker, two or three. It's hard to know exactly, but every time I think of what a milestone would be in AI, they all seem to get achieved sooner than we think. So my optimism about AI has generally only increased as time has gone on, in terms of both the timeline for achieving it and how awesome it's going to be.
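The "already a massive machine learning system" point has a simple core: modern feed ranking scores candidate items by similarity between learned user and item embeddings. A toy sketch of that scoring step; the dimensions and random vectors are placeholders for embeddings a real system would learn with large transformer models:

    # Sketch: the embedding-similarity core of a feed recommender, the kind
    # of massive-scale system described above, at toy size. Real systems
    # learn these embeddings with large transformer models; here they are
    # random placeholders.
    import numpy as np

    rng = np.random.default_rng(42)
    DIM, N_ITEMS = 64, 10_000

    user_embedding = rng.normal(size=DIM)              # from the user tower
    item_embeddings = rng.normal(size=(N_ITEMS, DIM))  # from the item tower

    # Score every candidate by similarity to the user and keep the top 5.
    scores = item_embeddings @ user_embedding
    top5 = np.argsort(scores)[-5:][::-1]
    print(top5, scores[top5])

    # The generative shift Zuckerberg describes replaces "retrieve, rank,
    # render" with a model that produces the content, and even the UI,
    # directly.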