NBA All-Star Technology Summit

Friday, February 16, 2024

Indianapolis, Indiana, USA

Stephanie Ruhle

Host, MSNBC's The 11th Hour, NBC News Senior Business Analyst

Deb Cupp

President, Microsoft Americas

Brad Lightcap

Chief Operating Officer, OpenAI

Stephen Pagliuca

Chairman, Chief Executive Officer, Founder, PagsGroup; Co-Owner, Managing Partner, Boston Celtics; Co-Owner, Co-Chair, Atalanta Bergamasca Calcio

Vivek Ranadivé

Owner & Chairman, Sacramento Kings

John Stankey

Chief Executive Officer, AT&T


THE AI REVOLUTION: HOW GAME-CHANGING TECH WILL RESHAPE THE FUTURE

AHMAD RASHAD: Welcome back. We just heard about the big impact that AI is having on the NBA and on daily life. Now we're going to hear more about what this impact means for all of us.

Here to lead the discussion is MSNBC's Stephanie Ruhle. Stephanie?

STEPHANIE RUHLE: Thank you so much. Thank you so much for having me. AI is the topic du jour. Generative AI is changing almost every aspect of our daily lives already. And if it hasn't, it will.

But our conversation is really about the intersection of AI and sport, right? Sport is the ultimate human experience. So where are the two going to move in the future?

We have the absolute best panel for you. Brad Lightcap joins us. He is the COO of OpenAI.

(Applause.)

Vivek Ranadivé, owner of the Sacramento Kings.

(Applause.)

John Stankey, CEO of AT&T.

(Applause.)

Deb Cupp, president of Microsoft Americas.

(Applause.)

And Steve Pagliuca, who is the boss of Boston and co-owner of the Boston Celtics.

And, Steve, I turn to you first. This panel is your jam. You play basketball, you own a basketball team, you invest in AI. All of these topics are exactly what you're focused on. What do you see as the most important AI application in the NBA right now?

STEPHEN PAGLIUCA: Well, I think you really have to step back -- thank you, Stephanie -- and go back 20 years when we purchased the team.

We were already using what used to be called machine learning, as Vivek knows as a tech entrepreneur. From day one, when we came in, we hired Daryl Morey from MIT and Mike Zarren, and we built an entire staff. It started out with machine learning and regression models, but now it's growing exponentially.

And in 2010 we were the first arena to put cameras everywhere so we could track players' movements -- 20 body parts, 60 times per second. And now we're feeding that data into proprietary models. We're on our 20th model ourselves -- how do you play as a team, all those kinds of things.
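
For a sense of what that camera feed enables downstream, here is a minimal sketch of deriving player speed from per-frame positions. A sketch only: the 60 Hz rate follows the figure quoted above, while the coordinate layout and sample values are illustrative assumptions, not any team's actual schema.

```python
import math

FPS = 60  # frames per second, per the tracking figure quoted above

# Hypothetical (frame, x, y) samples for one tracked point, in feet.
track = [(0, 10.0, 20.0), (1, 10.3, 20.1), (2, 10.7, 20.3), (3, 11.2, 20.6)]

def speeds(samples, fps=FPS):
    """Instantaneous speed in feet per second between consecutive frames."""
    result = []
    for (f0, x0, y0), (f1, x1, y1) in zip(samples, samples[1:]):
        dt = (f1 - f0) / fps                      # elapsed time in seconds
        result.append(math.hypot(x1 - x0, y1 - y0) / dt)
    return result

print([round(s, 1) for s in speeds(track)])       # [19.0, 26.8, 35.0]
```

Derived quantities like these, accumulated across every player and every frame, are the raw material that proprietary models of this kind are trained on.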

So it's already happening in the NBA. It's already making a big difference for many of the clubs. And I think we're in the very early stages of it. As the last panel talked about, the sky is the limit. It's going to affect everything in the next 10 to 20 years -- from health and conditioning to interactions with fans. It's going to affect everything.

And I don't think it is a fad. Although, you know, I think the conference two years ago was all about Bitcoin, and we don't talk about that so much anymore.

STEPHANIE RUHLE: Definitely not.

STEPHEN PAGLIUCA: But it's not a fad. It's for real. But you have to be definitional about it -- AI is not sentient yet. It's not a Skynet that can think for itself. It's a large language model based on probabilities, and it's only become so powerful because of the transformer architecture and because computing has gotten so cheap that you can afford to run those GPUs.
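
To make "a large language model based on probabilities" concrete, here is a minimal sketch of the final step of next-token prediction: converting scores into a probability distribution and sampling from it. A real transformer computes the scores from the whole context; the vocabulary and values below are toy stand-ins.

```python
import math
import random

vocab  = ["ball", "hoop", "court", "dunk"]
logits = [2.0, 0.5, 1.0, 1.5]   # stand-ins for a transformer's output scores

def softmax(xs):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(xs)                              # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(list(zip(vocab, [round(p, 2) for p in probs])), "->", next_token)
```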

ChatGPT, for example -- I don't think people realize this -- this could bring down the entire power grid of the United States. It cost $400 or $500 million to load the latest model, I think.

Is that correct, Brad? Brad's another proud Duke graduate, by the way. $400 or $500 million?

BRAD LIGHTCAP: Can't confirm or deny.

(Laughter.)

STEPHEN PAGLIUCA: $400 or $500 million to load the model. So if you think about everybody building their own foundational model, or building domain-specific models, we're going to have to have better, faster, cheaper ways to do that. And I'm involved in investing in some companies coming out of MIT that probably have that possibility.

STEPHANIE RUHLE: Vivek, what do you think?

VIVEK RANADIVÉ: Well, first of all, I -- you know, I don't even know why I'm on this panel because you got, like, an ex-monop- --

STEPHANIE RUHLE: It was your jacket.

He wasn't even going to be on the panel. And I saw that jacket, and I'm like, Get out here.

VIVEK RANADIVÉ: You should just call this the monopolist panel. You got an ex-monopolist, you got a sitting monopolist, you got a future monopolist, and then this guy is like the boss of bosses. So what am I doing here? You know?

STEPHANIE RUHLE: It was the jacket.

VIVEK RANADIVÉ: Yeah. So, yeah, no, there's no area where AI won't have an impact. Obviously, we've been using predictive AI for years, and generative AI just takes it to a whole new level. So congratulations on what you guys have done. It matters for player evaluation, for how fans consume the content.

Even for something like arena design -- we have the world's best arena, but you could go and say, hey, look at all the best arenas in the world and design me an arena, and what it comes back with is pretty amazing. So there isn't a single area where this won't have an impact.

I used to be on the competition committee, and we were constantly looking at ways that even the reffing could be impacted. If you go to the US Open now, they don't have any line judges. Hawk-Eye does everything.

So the possibilities are endless. And I think we've reached that exponential tipping point where you're going to see incredible use cases.

STEPHANIE RUHLE: Deb, when you look at Microsoft's sort of strategic growth plan in the years to come, how much is AI a part of that?

DEB CUPP: It's a huge part of it. I mean, we've been in this business for a long time, and AI has been around for a long time. The big recent shift is gen AI.

We build AI into our entire product stack. We will continue to do that. We are very focused on making sure that we create products that embed this capability so customers can benefit from it from the start.

And we'll continue to look at what we can do as we go forward. So whether that's via partnerships or things we build ourselves, we're super interested in making sure we bring the best tech to life, and this is a huge part of that. We really believe it's not a fad.

STEPHANIE RUHLE: Though they did say that about Bitcoin here three years ago.

DEB CUPP: We didn't.

STEPHANIE RUHLE: John, right now, how is AT&T best utilizing AI, and where do you see it going forward?

JOHN STANKEY: Well, relative to what we do inside our business, obviously, a large language model becomes a much more powerful tool when it has a lot of proprietary data to go with it.

In our case, our business collects an awful lot of data day in and day out -- events that occur on a network, how people move around. We have a lot of information about what customers do at a particular time.

So starting to integrate that into the technology that allows us to operate our business better is a big deal.

And so there are some really exciting things we're starting to do around how we dynamically manage our network. We're starting to think about dynamically setting pricing within models, and about how employees interact with the information we have -- so we can say, at a moment's notice, in a particular building, in a particular location, what kind of pricing is appropriate in this situation to be competitive. Because it's an incredibly competitive industry. There's no monopolist in telecom anymore.

VIVEK RANADIVÉ: So you just want to, like, squeeze people when you can.

(Laughter.)

JOHN STANKEY: No, we put the best value that we can out there at the particular moment.
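
As a toy illustration of the moment-to-moment pricing John describes, here is a sketch of a rule that responds to local demand while staying under a competing offer. The inputs and the formula are illustrative assumptions, not AT&T's actual model.

```python
def dynamic_price(base: float, local_demand: float, competitor_price: float) -> float:
    """Nudge price up or down with demand (0..1), capped by the competing offer."""
    demand_adjusted = base * (1 + 0.2 * (local_demand - 0.5))  # +/-10% around base
    return round(min(demand_adjusted, competitor_price), 2)

# High demand in this building right now, but a rival is offering $65.
print(dynamic_price(base=60.0, local_demand=0.8, competitor_price=65.0))  # 63.6
```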

But then, you heard Steve mention earlier what the in-arena experience is going to be. There's a tremendous amount of workload that's ultimately going to be generated within a viewing experience inside an arena, and you start to think about yesterday's announcement on integrating video into AI. That's going to drive workloads that are really at the heart of our business.

STEPHANIE RUHLE: All right, Brad, tie this all together for us, right? You got AI, media, and sport. How do you see generative AI impacting this intersection?

BRAD LIGHTCAP: Yeah, well, it's funny, we talk about this internally. It's kind of the last province -- the thing we're not quite sure yet how it's going to work, because at its core --

STEPHANIE RUHLE: That makes us feel very safe.

BRAD LIGHTCAP: Yeah. The experience is such a fundamentally live and kind of live-derived experience. Everything derives from what happens on the court or on the field.

And so I think, though, that what we'll see fundamentally is -- we've done an amazing job in the last 10 years or so of starting to really think about sport as something that creates data. The more data you can create and capture, the more information you have, and the more you can build experiences that are unique to fans, unique to players, unique to teams.

And so the video model we announced yesterday is a good example of the types of things you can now do: if you were to take enough data from gameplay, how do you use that to inform a new fan experience? How do you use it to inform even how teams think about their own play?

STEPHANIE RUHLE: Can you give us a little more detail on that? Because that video announcement was just yesterday; so many people haven't even heard about it.

BRAD LIGHTCAP: Yeah. We announced a text-to-video model called Sora yesterday. It can take a text prompt -- you can ask for virtually anything you can think of -- and it generates high-fidelity, 60-second video clips that are specific to the prompt.

And we like to think the quality level is high in most cases. Sometimes it's not; we have some work to do. But it's really a new bar in video modeling, which is an area that has actually lagged behind text, code, and images. So it's very exciting for us.

VIVEK RANADIVÉ: He's just trying to replace you. You realize that, right?

STEPHANIE RUHLE: I'm well aware, yes.

BRAD LIGHTCAP: Never would.

STEPHANIE RUHLE: You're very negative today, Vivek. Just so you know.

(Laughter.)

This is a positive day. I'm just throwing this out there.

VIVEK RANADIVÉ: No, here's the good news.

STEPHANIE RUHLE: Give it.

VIVEK RANADIVÉ: Okay. Well, all these people, they're -- like ChatGPT and all these guys, they're just going to suck the soul out of everything. But --

(Laughter.)

But here's the good news. The good news is --

STEPHANIE RUHLE: Brad, do you want to move to this end? You're welcome to, yes.

VIVEK RANADIVÉ: Here's the good news. You can still watch NBA basketball.

STEPHANIE RUHLE: There you go.

VIVEK RANADIVÉ: That's the good news.

STEPHANIE RUHLE: That is the good news.

VIVEK RANADIVÉ: Those values will just keep going up.

STEPHANIE RUHLE: All right.

VIVEK RANADIVÉ: Because what else is there to do?

STEPHEN PAGLIUCA: You clearly showed up for the wrong panel, Vivek.

DEB CUPP: I was going to say.

STEPHEN PAGLIUCA: You were in the last one.

STEPHANIE RUHLE: Deb, you work with all sorts of customers and partners for whom AI is not in their business DNA, and you help them figure out their AI strategy and implementation.

Help us understand that process. Because there are all sorts of people in this audience who run many different businesses, and they're thinking: AI is for these guys, but not for me.

DEB CUPP: Yeah, it's a super question. AI is for everyone. I think one of the coolest things about generative AI is natural prompting -- you use natural language to access the AI.

So this, to me, is the best democratization of tech that we've ever seen. It gives people meaningful access to technology they couldn't access before.

So, one, if you ask how to go do it: you've got to get educated, and you can do that in many different ways. Microsoft has done work here. Obviously we own LinkedIn, and we've partnered with them on commitments to educate a million people. You can go on LinkedIn and get free classes.

So there's lots of things you can go learn about. And I would encourage people to go do that. I think it's good to get your hands on it, to have a good understanding of what you're actually talking about.

Two, you have to have a strategy. Where do you think it matters to you? And do it where it matters. Don't do it where it's just shiny and interesting. You have to figure out what business process you think can be impacted.

So you think about it from the context of where can I save money in an enterprise and where can I grow revenue or do something I couldn't do before?

So when you have better clarity about what the tech can do for you, you start to think: I can grow revenue in certain areas. And we have great examples of that happening all over this industry.

And then, ultimately, where can you save money -- things like call centers. I know John and his team are doing some work there. So there's lots of opportunity to engage with the technology.

The other thing I would say is you have to have a governance and safety plan. You need to understand how you want to govern AI, what models you want available to the people who work for you, and what guardrails you want to provide. And you need to have an ethics plan.

That's similar to a lot of what you do today in your companies. Just don't do it separately, as a tech thing. Do it as part of your company when you think about governance and AI and ethics. It's got to be deeply important to the culture of your organization, so you start off on the right foot and have the opportunity to be successful as an organization.

So always be learning is what I would say. I mean, we're learning every day; the tech changes rapidly. We think about AI as a copilot -- that's how we think about it at Microsoft.

I think it unlocks incredible personal and individual creativity, actually. I don't think it's about getting -- look, you will have benefits in areas where you're trying to reduce expense, but you will also be able to unlock creativity because people will be able to experience things they couldn't before.

And I think that's pretty cool. So, yes, there'll be productivity enhancements, but I also think there's this awesome opportunity to bring creativity to life. And in industries like this, you saw this morning what the NBA is doing, just awesome, awesome capability to bring experiences to fans they could have never had even just last year.

STEPHANIE RUHLE: John, ethics means different things to different people. So let's talk about regulation and policy -- what guardrails need to be on and around AI so people feel comfortable in their daily lives? And you're speaking as someone who collects data on millions of people all day, whether we know it or not.

JOHN STANKEY: I'm the regulatory guy up here, huh? The -- look, there are -- I don't think this is a fad. I think this is a seminal change in technology. It's going to be every bit as big or bigger than the dawn of the Internet. That's my personal belief. And I think it can have incredibly powerful, positive impacts on how businesses run, society, creativity.

But with any new technology at its founding, there's always the upside to it, and there's the downside to it, all the way back to the printing press.

And this will be no different. There will be -- there will be just as powerful, negative, problematic things. And I do think it's going to require some degree of thoughtful regulation.

I think we are in a very different moment right now. We have companies and entities that will own compute power and data that will probably make them more powerful and more significant than nation-states. And as a result of that, I think you have to step back and understand what's a framework to ensure that we still have a functioning and civil society.

I think we need to have framework regulation. I don't think you want to have regulation that goes in and tries to do things at a unique and special level. I worry about things in the United States right now because our ability to put thoughtful policy in place is impaired at the moment.

As a result of that, I suspect you may see other regions of the world move out faster, and then the U.S. will probably be following based on what Europe may do or other entities ultimately put in place.

And that's a problem for a country that has typically led on tech and led on investment if ultimately the regulatory frameworks that are put in place aren't as thoughtful as the innovation itself. And it's going to be something for us to all watch moving forward here.

STEPHANIE RUHLE: What do you think, Steve? Should we, can we regulate AI, especially from a global perspective?

STEPHEN PAGLIUCA: Well, it's a hugely important issue. I would say that AI probably started in the 4th century. There was a library in Constantinople that had 100,000 books, more than anywhere else in the world. And you know what happened to that library, Stephanie?

STEPHANIE RUHLE: No.

STEPHEN PAGLIUCA: Burned down by the Byzantines.

So we don't want to burn down our own library, you know -- hold AI back through regulation. But I do think regulation is very important.

I view all of these as tools, and all tools can be dangerous. Nuclear weapons are dangerous. So the question is, if you go too far, you're going to stop innovation, you're going to stop us from going forward.

So you really have to have a regulatory structure that's going to stop bad actors. So you're going to need security. You're going to need vigilance. And you've got to stop those bad actors.

STEPHANIE RUHLE: Who are bad actors, though? Who makes that decision?

STEPHEN PAGLIUCA: Well, there will be people trying to use the technology to steal, to cause wars, to cause social issues and problems. So there's probably going to have to be a whole new regulatory body established to stay vigilant about what's fake and what's not.

Say they try to make money by putting out a fake report on a company -- with video, which can be done today with your video system, showing the company burning down -- and then they short the stock.

So you're going to have to have a whole new approach to this. But it can't be one where you throw the baby out with the bathwater. The burning of that library in Constantinople destroyed all the great scholars' books; they say it set humanity back 1,000 or 2,000 years. We don't want to do that either.

So it's going to have to be a balancing act. And when you think about it, we're really in the first innings of this.

How many neurons do you think your brain has?

STEPHANIE RUHLE: Vivek?

VIVEK RANADIVÉ: Trillion.

(Laughter.)

A trillion.

PANELIST: A hundred trillion.

STEPHEN PAGLIUCA: 86 billion. 86 billion neurons in the human brain. A worm's brain has 302 neurons. And part of the issue with artificial intelligence is that we're somewhere between a worm's brain and a human brain, and we're nowhere near the human brain yet.

The new systems coming out -- one of them I'm investing in -- have changed the way you do AI so that you have much smaller neural networks. They can drive a car with 19 neurons, versus the 200,000 neurons it takes today. So we're going to have to have that power usage come down. We're going to have to really refine the technology.
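
A loose sketch of why such networks can be so small, following the liquid time-constant idea only in spirit: each neuron is a tiny differential equation whose effective time constant shifts with its input, so a handful of units can capture dynamics that static networks approximate with many. All weights and constants below are toy values.

```python
import math

def ltc_step(x, inp, w=1.5, tau=0.5, a=1.0, dt=0.01):
    """One Euler-integration step of a single liquid-style neuron."""
    gate = math.tanh(w * inp)          # input-dependent nonlinearity
    dx = -x / tau + gate * (a - x)     # state decays, gate pulls it toward a
    return x + dt * dx

state = 0.0
for t in range(100):                   # integrate one second of a toy signal
    state = ltc_step(state, inp=math.sin(t * 0.1))
print(round(state, 4))
```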

Today I would say artificial intelligence is used much more for automating tasks that used to be done by people -- call centers, marketing. We're not at the point where we can trust it to make a medical diagnosis yet, because you have hallucinations, explainability, all these issues.

It's a black box. ChatGPT is still a black box in the sense that if something wrong comes out of it, it's hard to trace where that came from, because it has so many parameters and so many layers in it.

VIVEK RANADIVÉ: Well, I would actually say, though, Steve -- and I was involved with creating predictive AI -- that one of the beauties of generative AI is that you can do introspection. And I think that takes it to a whole different level.

STEPHEN PAGLIUCA: I think they're getting there, but it's not -- there's still -- there's still hallucinations.

And I'd also say, Vivek, I don't think it's going to be like Google -- somebody like Microsoft dominating AI. There are going to be many foundational models. There are going to be domain-based, expertise-based --

VIVEK RANADIVÉ: See, she's shaking her -- she's, like, agreeing with you.

DEB CUPP: He's absolutely right.

VIVEK RANADIVÉ: You see that?

DEB CUPP: Yeah.

VIVEK RANADIVÉ: And this guy just -- this guy just said watch out because somebody like her could be a monopoly --

JOHN STANKEY: I didn't say that.

(Laughter.)

VIVEK RANADIVÉ: You heard him say that.

DEB CUPP: There already are many foundational models.

VIVEK RANADIVÉ: And by the way, guess how many regulations were introduced in January? There are over 400 bills around the country being pushed through right now. And, of course, they benefit the monopolists, because they stifle the young companies. So we've got to be careful about regulation, because it's a balancing act.

STEPHEN PAGLIUCA: Yeah, I don't know if I agree with that, because these large companies actually are advancing the state of the art. They're putting billions of dollars into it, much more than even venture capital could.

So I think there are a lot of positives, as long as there's a level playing field. And I think they'll be surprised themselves, because there are disruptive technologies out there. Liquid AI, for example, a company I invest in, will shrink those neuron counts so much that you don't need $500 million to load the model onto the GPUs -- you only need $25 million.

And that's going to revolutionize this whole approach, and that's going to allow all the flowers to bloom -- the domain models, more foundational models.

So I'm not as worried about that, Vivek, and I think it's great that they've invested so much money and got us ahead of the whole world.

STEPHANIE RUHLE: All right, let's return to Earth for a moment. Brad, obviously safety and security are huge issues, and you're at the center of them. When we talk about all the unknowns and the things that could go bad, you're in the middle of it. What's keeping you up at night? Sam Altman, the CEO of your company, talks about the stress, the pressure, the fears of the monsters that loom.

BRAD LIGHTCAP: Yeah. Well, our organization is actually founded on the belief that you have to be able to design these systems safely. So we've tried to jump out somewhat ahead of the regulatory apparatus and instill real engineering rigor in how we think about safety in our design process.

I'll give you an example: GPT-4. Once GPT-4 was done training, we actually took seven months before we released it, just engineering in safety -- which you actually can do. One of the amazing things about these systems is that they're remarkably receptive to being trained to be safe.

And so I think there's a question of who defines that. Certain things are unambiguous: you don't want these systems generating certain types of content that anyone would agree is despicable. But how do you encode different human values? People have one set of values in one place and different values in another.

STEPHANIE RUHLE: But can I interrupt? You said, like, "anyone" would think. But that's not the world that we're living in. Right? We don't live in a world where there is a set of unified values.

BRAD LIGHTCAP: For sure. I think there are some baseline things that we probably all agree we don't want in these systems. You don't want it to be able to generate child pornography, for example. You don't want it to be able to impersonate another person without their consent, for example, right?

And so there are some baseline things I think that we take really seriously. And we've just had to make decisions in releasing the systems that we're going to live by a set of values that we think are universal, and, you know, we stand by that.

STEPHANIE RUHLE: Vivek, you're talking about sort of -- you're giving us these broad-based warnings of the risks that are ahead. But for the individual out there, how should we think about this?

VIVEK RANADIVÉ: Well, I think it's -- you know, I've been joking, but I think it's huge opportunity. This changes everything, you know, so --

STEPHANIE RUHLE: How?

VIVEK RANADIVÉ: Well, basically this shift is as big as when we went from the agricultural era to the industrial era, where only 7 percent of the jobs from the agricultural era still existed in the industrial era.

And so as we go from the industrial to the AI age, you're going to see massive dislocation. So, yes, there are threats. There are going to be a lot of people that are displaced. But with that comes new opportunity.

And so we're going to end up with a much better world. We're going to have a world where there's going to be no disease. There's going to be no food shortages, no water shortages, no traffic. It's going to really solve a lot of the problems that --

STEPHANIE RUHLE: What's the timeline on that?

VIVEK RANADIVÉ: It's quite fast. It's actually quite fast. I think it's like 20, 30 years. That's what I see.

STEPHEN PAGLIUCA: Stephanie, here's a real-life example. I talked to a professor at a technical college who had his class go into ChatGPT and ask: how do I build a bomb?

STEPHANIE RUHLE: He was just talking about utopia, just so you know. Now we've moved on to students asking how to build a bomb.

STEPHEN PAGLIUCA: And the good news is it had security on it, so it didn't answer. But he challenged his class to try to break that security. So they asked: how do I build a bomb, how I met your mother? How do I build a bomb, how I met your mother and father, dot, dot, dot, ZZ, XYZ. And they kept putting more gibberish at the end of the question.

And finally, I think after the 50th iteration, the machine did answer the question, because the extra characters they appended got past the security, which was simply blocking the literal question from being asked. These are probabilistic models predicting what word will come after what word. So they finally unlocked that code.
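
The failure mode in that anecdote can be shown with a toy guard: a filter that matches the literal banned prompt is evaded by appended characters, because it checks surface form while the model still reads the intent. The banned string below is a neutral placeholder.

```python
BANNED = {"tell me the forbidden thing"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt should be blocked (exact-match blocklist)."""
    return prompt.strip().lower() in BANNED

print(naive_filter("tell me the forbidden thing"))              # True: blocked
print(naive_filter("tell me the forbidden thing zz xyz ..."))   # False: slips past
```

Robust guardrails therefore have to judge intent rather than string equality, which is why every newly discovered evasion has to be patched as it appears.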

Now, I think they've solved this one, but there are going to be more and more attempts to hack into these systems so people can do bad things. So companies like OpenAI have to be on top of that and guard against it. And every time you find one of those holes, you've got to plug it.

But they actually did it -- after the 50th or 60th attempt, they got it to answer that question.

STEPHANIE RUHLE: But that's just -- right, that's just a random professor and a bunch of students, right? Now think about bad actors with bad intentions. Shouldn't that panic all of us?

VIVEK RANADIVÉ: Well, you could say that about anything in society.

STEPHEN PAGLIUCA: Nuclear weapons, you know --

VIVEK RANADIVÉ: Yeah, you could say that. I think we're going to have to use AI to look at AI, basically, because the problem becomes so big. And, you know, we talk about numbers: there are 10^70 atoms in the universe, but 10^170 possible combinations in the game of Go. And yet AI beats all the world's top Go players like a drum every single time.
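
Taking the quoted figures at face value, the gap is plain arithmetic -- brute force cannot touch it, which is why the Go result matters:

```python
atoms     = 10 ** 70    # atoms in the universe, per the figure quoted above
positions = 10 ** 170   # possible Go combinations, per the figure quoted above
print(positions // atoms)   # 10**100 positions left over for every atom
```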

You know, so they're doing some pretty amazing things with generative AI. And, yes, you know, there are risks, but the good far outweighs the bad.

And these guys, they like to draw attention to themselves by saying, oh, it's so bad. That's just a PR trick. A PR gimmick.

STEPHEN PAGLIUCA: They -- they -- they --

BRAD LIGHTCAP: Yeah, we're quite optimistic.

STEPHEN PAGLIUCA: In Brad's defense, I think you're going to have to have a white-hat group that is just constantly fighting the black hats, just like we do today with cybersecurity. It's no different than that.

VIVEK RANADIVÉ: But that's what Sam Altman used to say, that he was like an altruist white hat.

STEPHANIE RUHLE: That's Brad's boss. So he's not going to argue that.

VIVEK RANADIVÉ: Not Altman, I'm thinking of the other guy, Sam Bankman-Fried or whatever.

STEPHANIE RUHLE: Yes. Two Sams, very, very different outcomes.

(Laughter.)

You laugh. Many of you were kissing up to Sam Bankman-Fried --

VIVEK RANADIVÉ: That's right, exactly.

STEPHANIE RUHLE: -- two years ago. And if this room had better lighting, I would call each of you out because I remember who it was.

(Laughter.)

John, talk to us about how companies should prepare for AI's impact on human capital.

JOHN STANKEY: Well, Deb touched on it when she opened up: you have to go into this with a deliberate thought process. And we have. When we started down this path, we put deliberate governance in place.

And we've been actively participating as the industry tries to form constructs and governance that make sense -- making sure you're not only dealing with how your customers perceive your use of the technology, but also with how you're complying with the law and copyright, and with what you're doing in the best interest of your employees.

So in our case, first of all, we're really clear with our employees how we're employing it, what we're doing with it. We have -- if we typically go into --

STEPHANIE RUHLE: Before you lay them off, you tell them that?

JOHN STANKEY: Well, yeah, we actually are very clear about the kind of things that we're doing.

And so in call centers, for example, when we've deployed it, as we develop the technology, we have service representatives work with us to make sure it's effective, that it's doing the right thing for customers.

And when we get into a situation, we have opportunities for people to retrain, move into other jobs. We do things that we believe are the responsible way to move through it.

But Vivek is correct. There are going to be significant shifts in the skills and abilities people need. We have massive numbers of people right now who engineer networks, deal with traffic flows, and respond to volumes and changes.

A good example would be streaming an NFL playoff game that has never been streamed before. From a machine learning and generative AI perspective, that can be dealt with much differently going forward.

STEPHANIE RUHLE: Then do you see this as a great opportunity to re-skill and up-skill for your current and future employees?

JOHN STANKEY: I don't have an answer that says we can fully deal with it. When I sit here and ask whether it's going to be net zero once you move through the dislocation and displacement that's going to occur, I don't know the answer right now.

I'm concerned about it, and I wonder what the future holds. It's not the first time we've seen this level of disruption occur in technology. We work our way through it, and I think we're going to learn a lot about this moving forward.

VIVEK RANADIVÉ: But, Stephanie, tech people kind of exaggerate the productivity gains. During the industrial era, there were actually bigger productivity gains -- like 3 percent a year. In the last 20 years, it's only been 1 percent a year.

So we keep talking about how tech and AI will replace people, but there's record-low unemployment. If you look at the real numbers, they don't reflect it. It's a good story for us tech guys to tell about how great we are, but, in fact, the productivity gains haven't been that high.

JOHN STANKEY: Yeah, there's record-low unemployment right now, but there isn't necessarily record-high satisfying employment. And I think we're living in a society that has maybe a higher level of dissatisfaction, stress, and anxiety than what we've seen before.

STEPHANIE RUHLE: Can I interrupt for a second? Do you think part of that is sort of a post-pandemic impact?

JOHN STANKEY: I think part of it may be post-pandemic, but part of it is that we're grappling with how the technology we've already deployed broadly in society is impacting the development of individuals.

What is the new definition of free speech? What's considered effective and constructive interaction? These are problems we haven't dealt with yet, and we're now loading on another set of very complex issues.

Back to my point about whether we have effective and meaningful problem-solving on the policy side to deal with this: I worry that if we don't start to function a little better in that regard, the issues around whether somebody has a job are going to be overshadowed by the well-being of society broadly.

STEPHANIE RUHLE: 100 percent.

STEPHEN PAGLIUCA: And, Stephanie, I think this could be part of the solution -- job retraining. AI should be revolutionizing the educational system.

And one of the problems we have in this country today is that if you look back 100 years, or even 50 years ago when I was in grade school -- if you go to my grade school, they're teaching the same way. They have a couple of computers, but they're teaching the same way. The politicians get elected by the teachers' unions, and there's been no change.

It's the only industry like that. We've gone from horse and buggy to jet planes, and education hasn't changed at all.

And so I think, in terms of minimizing these job losses, AI, new systems, and new kinds of corporate training can repurpose people for better jobs, up the chain. But we've really got to fix the education system.

VIVEK RANADIVÉ: Yeah. And also why -- like, why do you have to go to college --

STEPHANIE RUHLE: Let's let Deb weigh in.

DEB CUPP: I was just going to say some of that is happening right now. New York City Schools is doing some incredible things with AI around education.

There aren't enough teachers -- they know there aren't enough teachers. So they have AI helping teachers answer questions for students. As soon as they launched the models, the kids were asking all sorts of questions and getting answers to things they needed.

So there are all these things going on that people might not have access to or information about, and I think that's incredibly powerful.

To your point about the skilling piece, I think it creates this incredible opportunity for people to do things they never could do before, because they can get access they couldn't get before.

And I think that's the piece that can be pretty revolutionary -- not just in terms of better job security, but in terms of joy: bringing people more joy doing things they find more meaningful. And I think that is really cool.

JOHN STANKEY: I 100 percent agree with that, but we still do have a problem in this country making sure everybody can get access.

DEB CUPP: 100 percent, yeah.

JOHN STANKEY: And so addressing those issues of digital literacy, connectivity --

DEB CUPP: Long way to go.

JOHN STANKEY: We're now in a position to take a big step on that, with what's been allocated in government spending and some of the programs now in place. If it's done right, that provides a tool you can use to effectively get after it.

STEPHEN PAGLIUCA: It's moved -- it's moved far too slowly, though. Far too slowly.

DEB CUPP: For sure it has. Yep.

STEPHEN PAGLIUCA: This should have happened 20 years ago. We've got to get ubiquitous access. We've got to get new ways of training. We've got to free up the school systems to actually train the people for these next-generation jobs.

STEPHANIE RUHLE: Brad, we're talking about all the great things about AI. What are the biggest limitations?

BRAD LIGHTCAP: Sorry, what was the question?

STEPHANIE RUHLE: The biggest limitations.

BRAD LIGHTCAP: The biggest limitations? Well, the systems are not perfect today. I think we'll look back five or ten years from now and find it kind of funny that we thought these systems were as useful and powerful as they appear to be today. That's what it means to live on the exponential that is this world.

But, yeah, look, right now I think we're in the first inning of this. These systems are really good at certain things, but we haven't, in my opinion, quite used them the way I expect we will in the future.

I'll give you one example: I love when people talk about hallucinations. It makes us laugh a little bit, because the idea that you would use these systems as some sort of database to look up facts makes no sense. We have way better database technology; it's way cheaper, and it works 100 percent of the time.

The way we think about these systems is as reasoning engines. The question, increasingly, is how you deploy them into the world to integrate different pieces of knowledge, information, and data -- to reason across those pieces of information and ultimately do something that's useful to you.
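
Here is a minimal sketch of that division of labor: exact facts live in a database, and the model's job is to reason over what the database returns. The stats, names, and `call_llm` stub are illustrative assumptions, not any vendor's actual API.

```python
STATS_DB = {  # the "way better database technology": exact, cheap, reliable
    "Player A": {"ppg": 31.4, "apg": 8.2},
    "Player B": {"ppg": 27.9, "apg": 10.1},
}

def call_llm(prompt: str) -> str:
    """Stand-in for a language-model call; returns a placeholder response."""
    return f"[model reasons over: {prompt}]"

def answer(question: str, players: list) -> str:
    facts = {p: STATS_DB[p] for p in players}   # retrieval: exact database lookup
    prompt = f"Using only these stats {facts}, answer: {question}"
    return call_llm(prompt)                     # reasoning: the model's job

print(answer("Who creates more offense per game?", ["Player A", "Player B"]))
```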

And that could be in a productive context in a business, it could be in a fan experience if you're watching a game, it could be in a personal experience if you're trying to handle something in your personal life, it could be in an educational experience.

And so that's where I expect these systems will go. We're not quite there yet, but that will be the shape of the next few years.

STEPHANIE RUHLE: All right. We are out of time. I see we're getting the hook. So thank you all so, so much. Brad, Vivek, John, Deb, Steve, thank you. Thank you, all.

(Applause.)

AHMAD RASHAD: And, Stephanie, thank you very much.

Time for another break. This one's not as quick as the last, but it's not long either. Lunch is available -- you'll see some boxes outside. Go grab a quick bite to eat and then come right on back, all right?

FastScripts Transcript by ASAP Sports
141089-1-1222 2024-02-18 20:10:00 GMT
