Raw Otter-generated Transcript of 'What Comes Next'

Dan Forbush
Ellen Beal  
Gary Rivlin is a veteran Pulitzer Prize-winning journalist who shadowed the top figures in the field of AI, introducing the breakthroughs and developments that will change the way we live and work. He is the author of nine previous books, including Saving Main Street and Katrina: After the Flood. His work has appeared in The New York Times, Newsweek, Fortune, GQ, and Wired, among other publications. He is a two-time Gerald Loeb Award winner and a former reporter for The New York Times. He lives in New York with his wife, theater director Daisy Walker, and two sons. So please put your hands together.

Ellen Beal  
Matt Lucas is a Professor of Business at Skidmore College, a former global executive and AI pioneer who transforms customer insight into brand growth for the world's leading consumer brands and retailers. As a generative AI subject-matter expert, Professor of Business, and vice chair of AI strategy at Skidmore, he is at the forefront of applying artificial intelligence to the higher education, retail, and consumer landscape. He's a keynote speaker on AI and teaches courses including AI-Powered Selling and Sales and Foundations of Business with Generative AI, bridging academic innovation and real-world commercial application. Please give a round of applause. I don't know how many of you are football fans, but I guess I should also say thank you for being here, because this is the first day. All right, we'll skip that.

Ellen Beal  
What I wanted to say is this: they're going to talk for about 45 minutes, they'll take your questions, and then you can go downstairs, where you can purchase your own copy, and Gary will be happy to sign it for you. So without saying anything else, I'm turning it over to Matt and Gary.

Matt Lucas  
Thank you so much for making the trip up from New York City. We're excited to share some special points with the group here. Real quick, raise your hand: who's logged into ChatGPT or Google Gemini or Claude in the last week or so? This is a pretty AI-aware bunch in general, which is good to see. Normally you see a range when you get a crowd; at this point, people are pretty far along in their journey. Every week, 800 million consumers around the world sign in to ChatGPT. That's 800 million average weekly users, twice the US population, logging into these tools as they become even more a part of our lifestyle. And even here in upstate New York, companies, businesses, colleges, and others have become more and more involved with bringing AI into the community and working out what it means. Yet we're in a challenged moment: AI raises real concerns around bias, ethics, safety, and security. Gary, your book AI Valley (a little marketing push here for Gary; it's downstairs) gets deep into this conversation, and it's really helpful to hear your thoughts with the group. We want to start off with a really good one. The core concept of the book is that AI has finally arrived, after being in the works for a long time. So talk a bit about why it finally happened, and what happened in 2022 that lets us say it's now here, when it's been around for so long.

Gary Rivlin  
Yeah, it's great to be here. I'd never been to Saratoga Springs before; it's gorgeous. So, I was surprised to learn that artificial intelligence dates back to the 1940s and '50s. They were very optimistic back then. In the mid-1950s there was a famous conference at Dartmouth, including the fellow who coined the term artificial intelligence, and at that point they thought it would be about 10 years before businesses started using it. They were off by decades. AI did start becoming part of our daily lives: Google Translate is AI, and that dates back to the mid-2010s; autocorrect too; Google was using it on imperfect searches to figure out what people meant with misspellings, or to add context. But the difference came when ChatGPT was released three years ago, on November 30 of 2022. It wasn't behind the glass anymore. We could talk to it, and you could feel the magic of it: wait, this machine is talking back to me. To me, that was the big difference. Suddenly we could speak with it; it wasn't doing stuff invisibly in the background.

Matt Lucas  
Really helpful. As you think about the storytelling, why did that inflection point happen? It's usually due to a cast of characters, real people who were deeply involved in this journey. In the book, you do a tremendous job breaking out who some of these key players are that moved AI from theory to practice. One of them is Reid Hoffman. Talk a little bit about his role and how he helped inspire the book.

Gary Rivlin  
So, Reid Hoffman is the co-founder of LinkedIn. I call him the best-connected person in Silicon Valley. He's also an amazing investor, one of the three initial investors in Facebook. He put in $37,500, and when the company went public, that stake was worth $400 million, which is slightly better than my returns. He stepped down from LinkedIn and became a venture capitalist. His very first investment as a VC was in Airbnb. His partners made fun of it; it sounded like the stupidest idea in the world: strangers sleeping on your couch? People don't want to do that. It ended up paying out 1,000x for his venture firm, so for every dollar put in, you got $1,000. A great investor. He was also perfect for me as a storyteller in that he went to Stanford in the late 1980s and kind of majored in artificial intelligence. AI really couldn't do very much then; it could tell a circle from a square, a dog from a cat. It was almost like teaching kindergarten at that point. But in the mid-2010s, after a dinner with a fellow you might know, AI started getting interesting, and that's when he felt it was time to turn his attention back to it. He made some of the initial investment in OpenAI, which gave us ChatGPT. And in 2022 he co-founded his latest company, his first since LinkedIn: Inflection AI. I got this random email, a batch email, 'Dear friend'; I found out I was one of his 2,500 friends. On a different day I might have deleted it, but it laid out the idea that instead of us learning the symbolic language to talk to the machine, the machine was going to learn our language. That's when the light bulb went off, and the timing was great: just before ChatGPT upended the world, I started this book.

Matt Lucas  
That's great. Let's go to the Inflection AI story with Reid and where you went with that. It was an interesting moment, because it was a time of startups, of startup investment in Silicon Valley, and the whole theory of Silicon Valley, the way it's always operated, is to advance new startup companies. Inflection was part of that journey. But what's interesting is that Inflection was, in effect, acquired by Microsoft, and had a whole journey getting there. Speak a little bit about what's happening in Silicon Valley. A premise of the book is that it's changing dramatically from what it was literally three years ago.

Gary Rivlin  
So, Silicon Valley has always been a startup machine. People talk about technological innovation, and that's part of it, but really what separates it from other places is the venture capitalists and the others there who help create these companies. I started this book thinking: I want to find out what company is going to be the next Google, what company is going to be the next Facebook. By the time I was doing my research, I realized the next Google is going to be Google; the next Facebook, or Meta, is going to be Meta, because this stuff is so expensive that it's such a challenge for startups. Look at Inflection: they had everything going for them. They had Reid Hoffman, a multimillionaire and the best-connected guy in Silicon Valley, and his co-founder Mustafa Suleyman, who had co-founded DeepMind, the first great AI machine-learning company, in 2010. They had Bill Gates as a funder, Eric Schmidt from Google, will.i.am, Ashton Kutcher; they had star power, and they had access to all the money in the world. And it really wasn't enough. They decided to throw in the towel and sell to Microsoft because, having raised a billion and a half dollars, they realized: we're going to have to raise billions and billions and billions more, because it's so expensive to create these models, so expensive to hire the talent, so expensive to operate. When I started my research, it cost maybe $5, $10, $20 million to train one of these models; you have to use a lot of computer time. By the time I finished, it was more like $5 or $10 billion, and people were predicting that by 2027 it's going to cost $100 billion. Startups really can't raise that kind of money. So Google, Microsoft, Meta, and a few other huge companies, I fear, are going to really dominate: the next Google is going to be Google.

Matt Lucas  
It's an imposing moment, and those incumbents come in powerful in terms of their wealth and capabilities, for sure. You talk a bit about AI bubbles, with a historical view of what we've seen with other industrial booms, like the beginning of the locomotive industry, and then on top of that the bubble that burst for the internet. There's this rise in excitement, almost irrational exuberance, and then there's a moment when something happens, a flashpoint that makes it reverse, and we all realize we're looking at a bubble. So talk a bit about that in today's environment: what you see, what you read, what's going on in this moment. We're reading in the paper all the time about $300 billion of investment going in this next year, into new data centers and new technology. So are we in a bubble? Is it going to keep rising? Are we laying new plumbing, like we did with our highways? What are we doing with this?

Gary Rivlin  
Yes. I can make the argument that AI is both overhyped and underhyped. I covered the dot-com era, which started in the mid-1990s with the rise of the internet; I was there for the dot-com boom, the bust, and the revival afterwards. And I heard a saying while I was doing that coverage: we tend to overestimate the short-term impact of a technology but underestimate the long-term impact. I think that's playing out here. There's been absurd stuff going on. Four researchers from, whatever, Google or Meta decide to start their own company; they write a memo, and suddenly they have millions, tens of millions, thrown at them, and their company is worth billions on paper with no product, just an idea. And here's what's interesting: most of us, if we make a bad investment, that's what we dwell on. Venture capital is sort of the opposite. It's not the bad investments they make that haunt them; it's the ones they didn't make. You talk to Reid Hoffman and he'll say: I had an opportunity to get into Twitter, an opportunity to get into YouTube; his firm had an opportunity to be in the first round of Instagram. Those are multibillion-dollar mistakes. So that's the mindset: they just want to slap down their bets as fast as possible. They'll make a lot of stupid mistakes, but you only need one Instagram; if they had made that investment, it would have been worth two or three billion dollars to them. So there will be a lot of busts; there will be a correction. I think it's going to be more or less like the dot-com era. But one big difference: in the dot-com era, hundreds and hundreds of companies went public based on their dot-com name and internet play. Now, these companies aren't public. OpenAI, which makes ChatGPT, has a paper worth of $500 billion and ranks among the 20 most valuable companies in the world even though it's only 10 years old, and it's not public. So a lot of people will lose money, but it's going to be the investors who lose money. Long term, though, AI is real. It really is going to have an impact on business, on education, on our personal lives. To me it's going to be the same as the internet: it took a while until there was the critical mass for something like social media; it took a while before the business models worked out. But 10 or 15 years after the start of the internet, which is to say after the mid-1990s, the internet had integrated itself into pretty much all aspects of life. I think AI is going to go through that same cycle.

Matt Lucas  
Let's keep building on that and get a little deeper into how it goes that far: this idea of a trillion-dollar problem. Goldman Sachs recently came out and asked, in effect: what is the trillion-dollar problem this solves? If we're putting $300 billion in, there's got to be a trillion-dollar market to make it worth it. So what does that look like? What are the elements you heard people talk about that might say this is worth a trillion dollars or more?

Gary Rivlin  
Well, take Microsoft, Google, Amazon, Meta; they call them the hyperscalers, and they're the ones collectively spending the $300 billion. You could say: you guys are insane. You're putting in all this money and there's no business model yet. They're putting in hundreds of billions and making single-digit billions; they're losing lots and lots of money on AI. However, I can make the argument that if they didn't invest, they'd risk missing out on AI, and companies have paid for that before: IBM missed out on the PC; Microsoft missed out on the internet and missed out on mobile, and that cost the company tens of billions, hundreds of billions of dollars in value. So I think the smart play is: I don't know what it's going to be, but I know I have to be in the middle of it, because it's going to be one of these fundamental changes, just like the internet, just like the mobile phone. To answer your question more directly, what is it going to be? We don't know exactly. They talk about AI agents: the idea that it's more than just something you can talk to and brainstorm with, more than something to draw you a picture or make you a funny little clip. It can do stuff on your behalf. Inflection AI, Reid Hoffman's company, had as one of its ideas that all of us would have a personal assistant, as if we were a rich person. It would know our likes and take care of things: oh, I have to send a present to my friend Pat, because it's her birthday, and it could do that kind of thing for you. You can imagine it in a personal context or for business. AI is kind of amazing; it just sees patterns, much better than any human being. It can ingest a near-infinite amount of data and see things. I really do think it's going to be a critical part of business and a critical part of our personal lives. How exactly, we don't know. But I'm convinced, and a lot of people in Silicon Valley are convinced, it will be central.
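
For readers who want the mechanics behind "it can do stuff on your behalf": an AI agent is, at bottom, a loop in which a model plans steps and invokes tools rather than just returning text. Here is a minimal sketch; the tool names, the plan, and the gift-sending task are all hypothetical, invented for illustration, and not any real product's API.

```python
# Toy agent loop: plan a task, then execute each step with a "tool".
# Everything here is a stand-in; a real agent would have a language
# model produce the plan and call real services (calendar, shopping).

from dataclasses import dataclass

@dataclass
class Task:
    goal: str  # e.g. "Send a birthday present to Pat"

def plan(task: Task) -> list[str]:
    # In a real agent, a model generates these steps from the goal.
    return ["look up Pat's birthday", "pick a gift under $50", "place the order"]

TOOLS = {
    "look up Pat's birthday": lambda: "March 12",
    "pick a gift under $50": lambda: "gardening book ($24)",
    "place the order": lambda: "order #1234 placed",
}

def run_agent(task: Task) -> None:
    print(f"Goal: {task.goal}")
    for step in plan(task):
        result = TOOLS[step]()  # the agent acts, instead of just answering
        print(f"  {step} -> {result}")

run_agent(Task(goal="Send a birthday present to Pat"))
```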

Matt Lucas  
Sure, and it's certainly trending that way. Lots of people, and I'm sure we all have friends like this, sit somewhere on a spectrum of interest. Some are Doomers: this is the end of the world; this is not good for society, not good for humanity, not good for education. On the other side you have the Zoomers: this is the best thing ever; it's going to find a cure for cancer; all human maladies get solved. And in the middle, I think, are what you call Bloomers, which is where I take it you land. So explain to the group a little bit about where you are and how you landed there. I think it would be informative.

Gary Rivlin  
Yeah. To start with the Doomers: to me, they've read a lot of science fiction and watched a lot of movies, this idea that laser-eyed robots are going to subjugate humanity. AI is amazing, but it isn't even close to doing anything like that on its own. There are lots of concerns, and we'll get into some of mine; I have deep concerns about it. But I think those are the wrong concerns. I'd rather pay attention to stuff that's within our life cycle: AI and warfare, privacy and surveillance, the bias inherited in AI when we're using it to make critical decisions. Those are the kinds of things I'm worried about. The Doomers are looking at something much more apocalyptic. On the other side, I have no patience for the Zoomers. You did a nice job laying it out, but there's one aspect you left out: in a way, they see it as a crime against humanity to do anything that stands in the way and slows down AI, because you could potentially find cures for cancer, because of all this great stuff; there should be nothing standing in our way. I'm much more comfortable as what they call a Bloomer. I see that AI could do lots of good in education, in science, in healthcare, in a broad range of things, if we're smart about it, because it could also do a lot of harm. A powerful agent that could come up with new therapies and new vaccines can also create a pathogen that could kill a lot of people. AI that's good enough to write a wedding toast is also very good at tricking people and being used by scammers. There was a study out that, and this is AI today, not AI in five years, AI today is about 60% more effective at changing someone's mind, whether convincing them to buy something or changing their political view, because it can really understand us, know our weak points, and figure out how to work them. That's the kind of stuff that scares me. Like any new technology, the television, the car, the internet, it will cut both ways, and AI is no different. There are positives and negatives. But can we steer it? Can we put up the right guardrails? Can we do it responsibly? Can we make sure we're stressing things like safety, so it's more of a net positive than a net negative?

Matt Lucas  
Staying with that net-positive, net-negative framing, on the negative side: current large language models are known to be biased. They're trained on data, and the data is human, ours, wherever it's sourced from, so the bias is difficult to eradicate. In some cases we want it out; sometimes we like that it's there, since decency toward humanity is itself a form of bias. So how do you think these companies should mitigate, let's say, the more negative bias? How do they judge that? What does it look like, and what are the mechanisms that help us with that side of the equation, the bias and some of the ethics tied up in these large language models?

Gary Rivlin  
Yes. I break it down into two parts. The companies pay a lot of attention to making sure the chatbots don't act or speak in a racist, misogynistic, or anti-Semitic way, and that's not that hard. It's called reinforcement learning: the models are trained on the books, the articles, whatever data they're given, but then humans get involved and rank the different answers. That's a bad answer, don't do that; that's a good answer. Over time you can fine-tune it, so the obvious stuff can be avoided. What I worry about is the implicit bias, the stuff that's baked in. If it's reading our materials, it reads in our biases too. So to me, what's key is that we understand AI is a tool. It shouldn't be in charge of anything; it shouldn't be making decisions. And yet it is being used that way. If you apply for a job, the odds are that the first person, not person, the first entity reading your cover letter and resume is AI, and I hate that. They're using AI to decide who gets to rent an apartment. They're using it to hand down criminal sentences. That's what scares me; it shouldn't really be in charge of anything. Microsoft calls their chatbot Copilot, and I love that, because it conveys what it is: it's there helping you. You don't put a hammer in charge of building a home; you use a hammer to build a home. AI is the same thing, just a tool, and when people use it to make critical decisions, that just perpetuates whatever bias is in the model.
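
What Gary describes, humans ranking a model's answers so it can be fine-tuned, is what the field calls reinforcement learning from human feedback (RLHF). Below is a toy sketch of just the ranking-and-reward step, a Bradley-Terry style update over invented features; the feature function, weights, and example answers are all made up for illustration and bear no resemblance to how production models score text.

```python
# Toy "reward model": fit weights so that answers humans prefer score
# higher than answers humans reject. Features here are deliberately
# crude stand-ins (length, a politeness keyword).

import math

def features(answer: str) -> list[float]:
    return [len(answer) / 100.0, float("please" in answer.lower())]

weights = [0.0, 0.0]  # learned from human rankings

def reward(answer: str) -> float:
    return sum(w * f for w, f in zip(weights, features(answer)))

def update(preferred: str, rejected: str, lr: float = 0.5) -> None:
    # Nudge weights so reward(preferred) > reward(rejected).
    p = 1 / (1 + math.exp(reward(rejected) - reward(preferred)))
    for i, (fp, fr) in enumerate(zip(features(preferred), features(rejected))):
        weights[i] += lr * (1 - p) * (fp - fr)

# A human ranked these two candidate replies ("good answer" vs "bad answer"):
update(preferred="Please see the steps below.", rejected="Figure it out.")
print(reward("Please see the steps below.") > reward("Figure it out."))  # True
```

In production, a reward signal learned this way is then used to fine-tune the language model itself, which is the "over time you can fine-tune it" step.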

Matt Lucas  
Let's keep going on that: jobs. Jobs are being impacted here, and I think it's a really interesting topic. Being a college professor, I have students, seniors, fourth-years graduating in May, who are looking for their next thing, whether that means getting past an applicant tracking system, an ATS with AI layered in, or upskilling so they can land their first role out of college. Most of the time they would land in an analyst role, a financial analyst role, or in a business context a researcher role or a presentation-developer role, and those roles are getting fewer and fewer, because AI is doing more of that work quite capably. So talk a bit about jobs going away, or the kinds of work that change, and the short and medium term you see, from the researchers and people you talk to.

Gary Rivlin  
We're already seeing that it's harder for recent college graduates today than it was just a few years ago. Nobody knows exactly why, but the working theory, which I think is right, is that it's largely because of AI. Companies are thinking: do I have to hire that junior analyst, or that just-out-of-college coder, when these things can code and can do the basic jobs? That's what I think is scary in a job context: these models are very good at doing basic stuff. Can one write a book? No, but it can write a press release. Does it have 10 or 20 years of marketing experience and great insights? No, but it can do a basic market-research report. So what's happening is there are fewer entry-level positions. And I play it out in my head: wait, if there aren't entry-level positions, who becomes the mid-level and senior people if we're not feeding that pipeline? I don't have an answer for that. What scares me about AI is that it's coming for more or less everyone. It's coming for blue collar, with robots and autonomous vehicles for people who make their living as drivers. It's coming for pink collar: admins, people who are supporting someone else. It's coming for white collar, all the knowledge jobs. In the short to medium term, I think it's not necessarily going to be AI replacing humans; it's going to be humans using AI, and being more efficient, replacing humans who don't. The marketing department of 20 will shrink to a marketing department of 18 or 15 as people integrate AI, using it as a kind of junior assistant or just to be more productive. AI stands for artificial intelligence, of course, but some use it to mean amplified intelligence: I can do more, you can do more, people can do more using AI. I think this is going to play out much more slowly than people think, but I don't doubt it could be a negative. I covered the internet era, and that was the big fear then too: oh, it's going to destroy all these jobs. And it did, but it created all these jobs as well, and now unemployment is 4% despite the internet. I think the same thing will happen with AI; it will create jobs we really can't imagine yet. My fear, though, is that it's going to destroy a lot more jobs than it creates, and then AI is going to be a net negative, and that's going to take place over the next 5 or 10 years. One last beat: already AI is making it harder for a voice actor to make a living; it's already making it harder for a translator to make a living. There are four or five million people in this country who make their livelihood driving, long-haul truckers, Uber drivers, cab drivers, and I don't know when it's happening, but autonomous vehicles are going to replace those jobs. Already there are four cities, San Francisco, LA, Phoenix, and maybe Austin, where there are autonomous cabs, and they're carrying something like one and a half to two million riders a month. I do fear that's going to cost a lot of jobs.

Matt Lucas  
Interesting times for sure, with technology everywhere, for all kinds of roles. One of the things about the book, to go back there for a moment, is that you talk about interdisciplinarity, about the non-coders around these companies, and you have a view that the best people to be involved with our future will come from things like the social sciences, the humanities, psychology, sociology, the elements of being more human, versus being technical coders. Talk a bit about that. What is this idea of our future being more human? What does that look like?

Gary Rivlin  
Right. There's this Silicon Valley line I've been hearing for 30 years now: all we have to do is get a few smart people, usually guys, a few smart guys, in a room. Sure, if you want to fix some tracking system, some corporate thing, fine. But AI is so powerful, and it's going to be in so many aspects of our lives, that it's not okay for just some computer scientists, executives, and a few others in Silicon Valley to figure this out. I think we need a much more diverse group of people working on this. First, geographically: what about the rest of the world? Right now it's Silicon Valley and China advancing these models. Then gender, racial, and all the other kinds of diversity, and also disciplines. Right now it's computer scientists, mathematicians, maybe linguists and physicists who are involved. But what about historians? What about activists? What about philosophers? There is some encouraging news on that front. I am seeing, at Skidmore and other colleges around the country, a big effort to bring in historians, to bring in sociologists, to take a much broader, cross-disciplinary approach to AI. Again, if this thing is going to be our personal assistant, it's going to have to know everything about us; it's not going to be worth very much if it doesn't know who we are. And so there's this big trust thing. We're all going to have to trust big tech, which has given us a lot of reasons not to trust it. I really do think it's important that a much broader group is involved than just some tech moguls in Silicon Valley.

Matt Lucas  
Yeah, agreed. We see this idea of broadening out beyond tech to others, to make sure this becomes part of our society going forward. And our students struggle with this. At Skidmore, for example, students want to do both: they want to be more human, more connected, but also to be successful in this future world. What should we be leaning into? Where is this going from here?

Gary Rivlin  
To me, the human element becomes all the more important. These models are very good at certain things and not good at others. Take taste: I use AI to help me in my work, but I'm the one saying, okay, that's not good, I can make it better, because I have a sense of what good writing is, and that has value. I think the more these things take over, the more the human element matters. And here's another encouraging thing. In 1997, AI was able to beat Kasparov, the best chess player on Earth, and you'd figure, okay, chess is over, the machines have won. Yet chess is bigger than ever. There was that TV series a few years ago; my two sons, when they were high schoolers, were always on their phones playing chess, and a lot of their friends were doing it too. So that gives me encouragement. Just because this thing can create, could create art, I think it becomes that much more important that a human made it. It reminds me of music: we got digital music, and suddenly vinyl albums became that much more important, precisely because they're physical. I guess the optimist in me hopes that humans are still always going to be at the center, even as AI handles more and more in our lives.

Matt Lucas  
That's great. So, optimistically, humans are even more important than ever before. Good for us. And that works not just in theory; it's important to know, and obviously exchanging these ideas with each other is so critical. You mentioned this idea that we overestimate the short-term impact, the fear of AI that's out there, the fear that it's replacing work and all those things, and underestimate the long term. Can you do a little more on that? When you say we underestimate the long term, what point of reference are you coming from, so that as a group we can ask: are we panicking too much now and not paying enough attention further out?

Gary Rivlin  
I mean, it's the limits of the human imagination. I covered tech from 1995 to 2000, and I didn't anticipate anything like social media and the role a Facebook would play in dividing us and playing on hate. Lots of stuff will happen that we can't imagine. There's also this human thing where we can imagine things getting bigger, but we can't think exponentially. These models are getting exponentially better, 10 times better every year or two, and what they'll be capable of is really hard to imagine. I keep using the railroad; okay, the car is a better example. The car came along, and who could have imagined it would give rise to motels, to the suburbs, to the expressways and all that kind of stuff? That's what I mean. I know it's going to have a big impact; it's just that I, like most people, don't have a big enough or creative enough brain to figure out exactly what it's going to be. Already, though: I'm writing a story right now about a company that talks about its AI chatbots as co-workers. It's funny; they say the chatbots are like the intern who went to a fancy college: they know everything, but they really don't understand anything. That's what's amazing about these models. They have PhD-level knowledge across an incredible number of disciplines and subspecialties, but five-year-olds and ten-year-olds understand a lot of things these models don't. That's what I mean about how limited they are. One of the giants of this field, one of the three godfathers of machine learning, the ones who gave us the large language models and neural networks behind the chatbots, the image creation, the text-to-video, laughs at people who imagine it taking over. He says it's not even as smart as a cat. The example he uses: it takes 15 or 20 hours for a teenager to figure out how to drive, and they've been trying to teach a computer to drive a car essentially since the 1970s. There's stuff we as humans simply know. So again: it shouldn't be in charge of anything; it shouldn't be autonomous AI. It should be there helping you do what you want to do.

Matt Lucas  
Good. I'll ask one more question, then we'll take some from the audience; I see one getting ready to go. Thinking about the adjacent AI topics out there today, and not to tip your hand on that story: there are many things in this book, characters and stories, that are really important for everybody to know, about how this began and where it is today, so it's really worth understanding what's in the book. Well done. But there are adjacent things going on with AI right now that are worth saying more about. Can you speak a little about what else is twirling in your head, things we should be talking more about overall?

Gary Rivlin  
Well, there's AI in education. You look at colleges and universities; they haven't figured this out yet, and there are two elements. On the one hand, of course we don't want students using these things to write their term papers. How are they going to learn to think and reason, to create a basic memo or report, whatever? On the other hand, I think it's essential that students, young people entering this job market, know how to use this. So it's essential that we teach folks how to use it and use it wisely, to understand what it's strong at and what it's weak at. Universities, as we were talking about before this, are starting to have that conversation, but mainly there's a bit of panic right now. They need to figure it out. Then there's climate change. We talk about whether humanity can survive AI; I'm convinced it can. We figured out nuclear bombs; we'll figure this out. But can the Earth survive AI? These things are such power hogs. That $300 billion commitment you mentioned: basically, what that money buys is data centers, vast centers filled with computers and computer chips operating 24 hours a day, seven days a week. That's where they're spending the money, and we don't have enough electricity to handle everything on the planning boards right now. On Trump's second day in office, he announced Project Stargate, $500 billion worth of data centers that's supposed to be our leg up, our competitive advantage over China. That alone needs the energy that would power seven and a half million homes, just for that $500 billion. And there's a couple of trillion dollars' worth of these on the drawing boards right now; it's going to strain the system. Utility rates are already starting to go up in the areas where data centers are being built. Virginia and Ohio are starting to have a bit of a crisis, with rates going up significantly, like 15 or 20 percent, which is a big hike for a year. Rates across the board have been pretty flat for the last 10 or 20 years, but electrification, meaning electric vehicles, crypto mining, and now the biggest one, AI, is putting a huge strain on the grid.
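
The seven-and-a-half-million-homes figure is easy to sanity-check with back-of-the-envelope arithmetic. The numbers below are assumptions for illustration, not figures from the talk: Stargate-class capacity is often reported on the order of 10 gigawatts, and an average US home draws very roughly 1.2 kilowatts of continuous power.

```python
# Back-of-the-envelope: how many average homes does a 10 GW
# data-center build-out displace? Both inputs are assumptions.

STARGATE_GW = 10.0   # assumed build-out capacity, gigawatts
AVG_HOME_KW = 1.2    # assumed average continuous draw per US home, kilowatts

homes = STARGATE_GW * 1_000_000 / AVG_HOME_KW   # 1 GW = 1,000,000 kW
print(f"{homes / 1e6:.1f} million homes")        # ~8.3 million
```

Under those assumptions the arithmetic lands around 8 million homes, the same ballpark as the figure Rivlin cites.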

Matt Lucas  
Very important conversations for us to hear as an audience. So let's open it up to questions from the group. Happy to do that.

Speaker 2  
I work in academia. I was looking into this; I actually sent out a message, and I probably got an automated reply. If you want access to some of these databases and you're not a large university with $100 million, you're not going to get it. So it seems like access to the databases and the infrastructure is controlled by the people who want the trillion-dollar reward, the return on their investment. And like you said, how small a number of people are controlling the selection of what's being provided? It feels like a product: if I pay you, I get the most advanced version of the AI. It's like the old Columbo; I mean, I'm old, I have gray hair. Peter Falk: well, if I give you $1,000, would you remember it better than for ten? And they say, yeah, that'd be a thousand-times-better answer, and he catches the criminal. So does that play into it?

Gary Rivlin  
Is that enough to run with? Yeah. There are a couple of things there. We were talking earlier about AI threatening the Silicon Valley startup machine: two kids in a dorm don't have access to the data that a Google does, so how are they going to create one of these large language models? They don't have access to the data. And to one thing you said: what's amazing right now is that, for free, you can use the most advanced large language models out there, ChatGPT, Gemini from Google, Copilot from Microsoft. You can use them for free, but you're limited. You can ask a few questions and then you hit the wall: come on, if you want to keep talking to us, pay us $20 a month. And 20 bucks a month doesn't exactly break the bank. But it's funny: OpenAI, ChatGPT's creator, has this problem. They have a premium plan, for a business or a power user, at $200 a month, and they're still losing money on that $200 a month, because it's so expensive to generate an answer. OpenAI is an incredible company. It's 10 years old; it's one of the 20 most valuable companies on Earth. It made zero revenue a few years ago and is on track to bring in about $12 billion in 2025, but it's still losing billions and billions of dollars, and they don't know how they're going to make the revenue work. This really reminds me of the internet: they have 800 million people a week using it, but no idea how they're ever going to really make money on the thing. My fear is the way they might figure out how to make money: letting corporations manipulate it. If we pay money, it's going to put forward our point of view; it's going to suggest our product. That's a more sophisticated version of product placement, not an obvious ad, but with a bias built in toward whoever paid. That's really sneaky; it gets inside your head, and that would really worry me. But how are they going to make money on this stuff?
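
The point about losing money even on a $200-a-month plan comes down to simple unit economics: a heavy user can consume more inference than the subscription covers. A toy illustration follows; every number in it is invented, since real per-query inference costs are not public.

```python
# Toy subscription math: when does a flat-rate plan lose money?
# All three inputs are invented for illustration.

PLAN_PRICE = 200.0       # dollars per user per month
COST_PER_QUERY = 0.15    # assumed average inference cost, dollars
QUERIES_PER_DAY = 50     # assumed heavy-user load

monthly_cost = COST_PER_QUERY * QUERIES_PER_DAY * 30
print(f"revenue ${PLAN_PRICE:.0f} vs cost ${monthly_cost:.0f}")
# revenue $200 vs cost $225: the heaviest users cost more than they
# pay, which is the dynamic Rivlin describes.
```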

Speaker 3  
To speak to that a little bit: we know the internet was built with public financing, right? It was only commercialized in 1995. So it's funny, because the VCs are now experiencing what the public sector did for decades and decades, plowing money in without a return. My comment on this is, first, that when we talk about AI, we really should talk about the people behind it, because AI isn't some amorphous thing that's driven to excel on its own. When someone is going to be your personal assistant and know all your details, you don't mean this amorphous thing; you mean him, right? This guy is going to know you, with a private model that's just about you. I'm not kidding. Let's talk about what this really is, because the problem is: how do we keep oversight of these people? My question to you is: do you think AI would have rolled out differently, developed differently, if they hadn't had a free run? Let's say that when they were building the large language models, not as a public service but because they intended to make money, they had to come up in advance with a business model and contracts, especially for copyrighted material: over the next 10 years we want to use your data, and here's the money. How would they have done that? Now the data centers: they are killing the small communities they're built in. There's the water usage; the rates are going up. Why on earth are these things being built this way? They should have all the contracts in place; they should be paying a premium; they should be helping to develop the electricity they're going to consume. So my problem is, and I want to know if you think about this, that we just seem to accept it because it's AI. These are a bunch of people selling a product based on technology that was publicly developed, and we're rolling over again as they run ahead of the law, which is what we've become used to. So how do you think it would have gone? Would AI be what it is now if they'd had to roll it out in a legal and ethical way? Because to me this says there's a problem. We're a democracy; we should have a say.

Gary Rivlin  
That was very well put. You're absolutely right that the fundamental research behind AI was government-financed; it was built on research grants, and it was universities that developed it, and then people figured out how to get rich off it. I always joke that it's not the people who create the technology who get famous; it's the ones who figure out how to get rich off it. Tim Berners-Lee created the web, but everyone knows Marc Andreessen, who created Netscape and got fabulously wealthy from it. The second point I love that you made is what I call the original sin: these things are trained on the intellectual property created by writers and artists and musicians, without any pay. In fact, AI right now is very fragile. There are lawsuits going on, and it looks like the courts are going to rule in favor, unfortunately, of the companies, not the creators, but it could be tipped over; there are lawsuits over a wide range of issues, exactly because of what you're talking about. The great irony, and you brought up Sam Altman and OpenAI: OpenAI was created in 2015 as a nonprofit. The reason it was created is that Sam Altman, Reid Hoffman, Elon Musk, Peter Thiel, and others didn't trust AI to be in the hands of a Google or a Microsoft. So they said: what if we developed this as if profits didn't matter? Let's create this thing for humanity. But it's a funny thing when suddenly there are billions, trillions, to be made, because I would argue that OpenAI is now worse than any of them. Trust and safety was at the center of OpenAI, and all these people have left OpenAI because, they've said, trust and safety under Sam Altman has been tossed overboard. It's an arms race, and the company whose chatbot wins has a potentially trillion-dollar opportunity. I love that you also brought up, as we were discussing before, that it's the humans who create this thing; it's the humans who matter. Sam Altman is charming; he's a brilliant guy. I don't trust him. He's squirrely; he's not honest with folks. He was pushed out of his own company as CEO and brought back because, you know, they realized: wait, we won't make as much money. He would be duplicitous with the various board members, telling one one thing and another another thing. I just don't trust him. And again, we're talking about artificial intelligence: there was a study last year finding that the majority of Americans are mistrustful of AI, fearful of it. What that means is: forget what the Zoomers say; we need some hand-holding here. These companies really need to stress: hey, we're paying attention to trust and safety; we're concerned too. That was the promise of OpenAI. They were going to be our friends; they were going to be our advocates. But is that capitalism, Professor? Is that just the way it works?

Matt Lucas  
It shouldn't have to be. The tricky part is that the enforcement mechanisms on the people making these decisions have become market pressures. This past summer, ChatGPT went through a stretch of being too nice to you. The answers came off overly friendly, helpful, sycophantic; it really made you feel like the best person on the planet. They dialed up that sycophancy to make it nicer, so you would play with it more and give it more of your data. But what's interesting is that society stepped back and said: this is too nice, it's fake, we don't want it. They realized they were in trouble, and they changed the model back, because the collective we were paying attention, wrote about it, and said this sycophancy is not good for us. We want truth; we don't want to be fed hot air. So market pressure from us put them back in the box, toward clearer answers, not perfect, but better than when they flipped the other switch. We're in a moment where, unfortunately, so much of what happens has to be driven by the press, by people standing up, saying this isn't right, and having their voices heard. That's hard to do as individuals, but we all have a role: pay attention, and make it part of our community discussions. AI is not a genie that's going to go back into the bottle, but we can shape it, and we have to shape it. That's the tricky thing, but I really believe it starts with all of us, in our communities, our discussions, our individual use, and in working with systems we trust. Claude, for example, the competing product to ChatGPT, is purported to have higher safety and security standards; it's built by people who believe in that. But we have to decide on our own; we're kind of self-policing ourselves on these capabilities.

Gary Rivlin  
I mean, one thing I'm scared of, and we've seen this play out over and over again, is that people express their concerns, and there are policy debates and all of that, but it isn't until something really bad happens that people wake up. A pathogen gets out there that kills a lot of people, and then we take AI seriously. Or, to choose another one randomly: it siphons off a trillion dollars from the world monetary system before a human being can even blink, and then we pay attention. That's why I think the Zoomers, whose view is basically the reigning philosophy in Silicon Valley, are making a mistake: they're getting way ahead of the public. I'm pretty convinced something bad is going to happen. I don't know what it's going to be, but something bad is going to happen, and then people are going to start looking at this the way they did at social media, with congressional hearings and all of that kind of stuff.

Matt Lucas  
Interesting. We have time for one more question; one up front here, on the right. Okay.

Speaker 6  
Thank you so much for joining us. You mentioned that the venture capitalists seem more concerned about the missed opportunities, and I think that goes to the whole problem. The ethical dilemma I see is that there aren't many venture capitalists out there saying: how can I make the world a better place? I missed that opportunity. It's always about money. It's like how we measure what Apple has done: well, they've become this hugely profitable corporation. It's not about what the iPhone has done for humanity, good, bad, or whatever; you can argue those. I think the technology is wonderful. I just think the amount of money, and the drive for profit behind it, is so extreme. Could you speak a little to that? I guess it's not a well-formed question.

Gary Rivlin  
It is an interesting topic. I've been hearing it forever: we want to make the world better. Think of OpenAI, or Facebook at the start: oh, we just want to create community so you can share your photos and your stories with your loved ones and your friends. There's a term for what happens next: enshittification. Actually, I think Merriam-Webster added it just this morning. It's this idea, and you can see it with Google, you can see it with Facebook and Meta, you can see it in all these companies: they take something that sounds really good, like Google giving you access to all the world's knowledge, and then they realize: oh, we're publicly traded; we have to increase our revenue every year. So they keep changing the algorithm, they keep changing the page, and now you can pay money to come out on top of the search. What happens is you search for Hyatt and you get 10 different hotel chains before it, because Sheraton, and I'm making that up, I don't know who buys what, all these rival chains buy the keyword. There are all these pop-ups, and all these content creators producing content for no other reason than to get to the top of Google. So it's not The New York Times or CNN or The Wall Street Journal at the top; it's the companies and content creators who figured out how to game the algorithm. You see it over and over: what starts off as, arguably, a noble thing turns into something else. And to me the classic is OpenAI. It was exactly what you were asking for, exactly what you wish it was, and it turned into its opposite. I don't know the solution to that. We're a capitalist country, and they want to maximize shareholder value. People want to get rich, whether they're buying the stock, investing as venture capitalists, or founding the company.

Matt Lucas  
Up front here.

Speaker 7  
Thank you for this discussion. Going further from the last couple of items that came up: we invented cars, and then all of a sudden there were regulations; you can only go this fast on this kind of road. Early television: there was some censorship; there were regulations that you could only show certain kinds of content at certain times in the evening, and you weren't allowed to say certain things. If you could just magically make regulations appear and, you know, be enforced, are there any regulations you hope you'll see happen?

Gary Rivlin  
It's interesting. The Biden administration did create rules, through executive order, so they were immediately undone when Trump took over. They said: if you're creating one of these large models, you have to do a few things. First, what's called red-teaming: you have to hire outsiders to try to test the vulnerabilities, and you need to share that research with the government. We're not going to say you can't put it out; you just have to be transparent. I thought that made a lot of sense. California just passed a law along those lines; I don't think Gavin Newsom has signed it yet. On the one hand it's a good idea; on the other, a lot of tech billionaires are among his best funders, and I think he wants to be president, so I don't know how it's going to go. But it's exactly those kinds of basic provisions: okay, if you're going to put this thing out there, you need to test it; you have to go through these basic steps. More broadly, it's less that I have a specific idea than that I wish we were having the discussion. Should we be using AI in warfare? How should it be used? I wish the international community were talking about that. AI and surveillance is scary: what should our rules be about using AI for surveillance? Again, the Biden administration put some rules in place, and they were undone when Trump won. And what should we be doing about using AI to make decisions, like hiring this person or that person? I wish we were having those discussions. The good news is that this stuff is kind of fun to play with, and it can do really good things; but it's not nearly as powerful as it's going to get, and I'm scared we're frittering away the time for a public discussion around jobs, around surveillance, around a whole range of issues, because I do think there need to be some guardrails. One last point, since you brought up the railroad before. The railroad in the 19th century was killing a lot of people, so the government came in and laid down standards. The railroad companies fought it, but it ended up being good for them: switching rules, standard track gauges, all that kind of stuff. What it did was give the public confidence that they could ride the rails safely. I think the same thing needs to happen here. Folks in Silicon Valley, the OpenAIs and Googles of the world: I know you're against regulation, but regulation could actually help advance this, because if people felt safer about it, if we were being reassured, okay, they're taking these steps to make sure it's safe, we'd all feel better about it.

Matt Lucas  
A round of applause for Gary for being here. Thank you.