Max Tegmark | AI at the Crossroads: Wisdom, Power and Moral Courage


Navigating the promise and peril of artificial intelligence


What’s sorely lacking in the whole AI debate, I feel, is moral leadership. I’ve spent so much time talking to people in Silicon Valley about this, including all the relevant CEOs, and morality almost never comes up in the conversations. It’s always about, oh, how can you make money, scale faster, outcompete the other company, and so on. The moral leadership is just not there. We need to talk about this in terms of morality: we’re stewards of this planet. This is our responsibility. What kind of future do we actually want to create for our children?

Today’s conversation is the podcast of the National Association of Evangelicals. I’m your host, Walter Kim, NAE President. In these conversations, we seek to help evangelicals foster thriving communities and navigate complexity with biblical clarity. We’ve been on a journey exploring what it means to flourish. In last month’s conversation, we heard from Byron Johnson of Baylor University and the Global Flourishing Study, and discussed how flourishing is being measured across the world. But one thing we didn’t talk about was the impact of artificial intelligence on society’s ability to flourish. That’s why we’re joined today by Max Tegmark, one of the leading thinkers at the intersection of science, ethics, and technology. He’s a professor doing AI research at MIT, and is the founder and president of the Future of Life Institute. His work explores the promise and perils of AI and the wisdom that’s required to use it beneficially and safely.

Max, it’s great to have you on with us. You were actually on this podcast series with my predecessor, Leith Anderson, to talk about avoiding nuclear catastrophe. And here you are again, maybe to talk about avoiding a catastrophe of a different sort. So we’re going to be talking about this world of AI, and I wanted to get a little bit of background on your own journey. What drew you into this world of artificial intelligence?

You know, I’ve always loved technology. That’s why I have such a nerdy job, spending all day long researching AI at my university. And I’ve felt ever since I was a teenager that technology is really a double-edged sword. Technology is not evil, but it’s also not morally good. Take fire, for example. If someone asks you, are you for fire, Walter, or are you against it, what would you say?

It depends.

Exactly, that’s the right answer. I’m pretty sure you’re for a good barbecue and against arson. And what’s so fascinating about both nuclear technology and artificial intelligence is that these also have upsides and downsides. You can use nuclear reactors to produce clean, green energy, and you can also use the technology to destroy civilization as we know it. And with AI, if we use it wisely, we can create wonderful tools to help us cure cancer and do countless other things we can talk about. Or we can cause the ultimate disempowerment of us humans, if we build machines that can effectively outthink us in every possible way, take away our jobs, and maybe even take over. Now, fire is a technology that’s not so powerful, so we can basically win this wisdom race, making sure that the wisdom is always one step ahead of the power of the tech, just by learning from mistakes. First we invented fire, messed up a lot of times, and then invented the fire extinguisher. With really powerful technology, like nuclear weapons or future smarter-than-human AI, we don’t want to learn from mistakes. We want to think in advance about how we’re going to get this right, so that we get it right on the first shot, because that’s perhaps the only shot we have.

Hmm. All right, you’ve talked in very stark terms: we want to get this right because the consequences are significant. AI, as we’ve been thinking about it here at the NAE, has both redemptive elements and destructive elements. You’ve used this easily accessible and immediately applicable analogy of fire. Of course, we love to eat our barbecue, but at the same time, we recognize that arson is deeply destructive. So describe for us a bit more, since you are an expert in this area of research: what are some of the redemptive, positive potentials being realized in this moment, and what are some of the possible destructive areas we should be considering? And then we’ll dive a little more deeply.

Yeah. The positive things are very easy to rattle off. Throughout history, there have been things that we weren’t able to figure out for ourselves. For example, my grandfather died of a kidney infection when my mother was less than one year old, because people hadn’t yet figured out how to produce cheap antibiotics that could have cured it. With AI today, we’re seeing rapid progress in developing new medicines; this work even got a Nobel Prize last year. We’re seeing many people able to go to a chatbot and get quite useful advice, not just on medical and legal matters, but on countless other things. We’re on the cusp of a revolution, I think, where we’re going to be saving over a million lives per year on the world’s roads by eliminating car accidents. We can create more personalized education, and so many other good things, which all have in common that we amplify our own intelligence with tools that help us solve problems we couldn’t solve otherwise. That, I see, is very redemptive. It’s analogous to what machines have done more broadly. Before the industrial revolution, there were so many things that were incredibly hard to do that we can now do much more quickly with machines, machines that empower us to do the good things we want to do faster and better, and sometimes to do things we just couldn’t have done at all, like fly to another country.

So that’s the upside. On the downside, just like any tools, they can be misused. Fire can be misused by arsonists; a knife can be misused by a would-be murderer. We’ve seen an enormous explosion now, for example, in non-consensual sexual deepfakes, where some 13-year-old girl suddenly has all her classmates at school seeing her in some horribly embarrassing fake situation, and this is very traumatic. Maybe the prankster, a classmate who did it, thinks it’s hilarious. It’s not funny for her. We’ve also seen a massive year-on-year increase in AI-generated deepfakes used for fraud, mostly targeting elderly people, who maybe don’t have as much experience with these things and might fall for them. You might get a phone call from someone whose voice you recognize as a loved one, who says they’re in a bind and quickly need some information about a bank account or something, and the whole thing is fake. And you can’t get your money back after that. In fact, pretty soon you’ll get a video call from someone you’re quite convinced is this loved one, and you have no reason to doubt it, except it’s all fake.

These are just a few examples of bad things currently being done with AI in a major way.

You know, Max, what you’ve described are things that I’ve personally experienced. There’s the ability to use AI in a manner that accelerates competency at work, to double-check something, to do research. But I’ve also gotten a call from my mom, who thought our grandson was in jail and needed bond. I said, where did you get this? Because I got a call. Do you need money sent? No, not at all. He’s completely safe at home. So what you’ve described is not a theoretical thing. It is actually happening, in ways that are astonishingly fast in their development. And Max, that’s the concerning part to me. You use this phrase that our wisdom should stay ahead of our power. But when technology moves at such an accelerated pace, seemingly categorically faster than even the industrial revolution, what kind of principles can we keep in mind to ensure that wisdom has any hope of keeping up with the technology?

That’s a really great question, Walter. There are two principles that can really help us here. First of all, when technology developed slower and the consequences were not so bad that we could never recover from them, we quite successfully used this strategy of just learning from mistakes to win the wisdom race. We invented the car, a lot of tragedies happened, and then we invented the seat belt and the airbag and the traffic light and all sorts of other things. When technology is so powerful that even one mistake is unacceptable, we have to shift away from that philosophy: instead of being reactive, be proactive. Think through in advance what could go wrong so that it doesn’t. And it’s actually a little darkly funny. Sometimes people tell me, oh Max, you’re a doomer, you’re a Luddite scaremonger, because you warn about things. Here at MIT where I work, we don’t call that scaremongering. We call it something else. When NASA sent people to the moon, they spent a lot of time thinking through everything that could go wrong when you put three dudes on top of explosive fuel tanks and send them somewhere no one can help them. Was that doomerism? Was that Luddite scaremongering? No, that was what they called safety engineering, exactly the safety engineering that made the moon mission successful. And that’s exactly the positive philosophy I’m talking about here too. You take a bunch of smart people and have them think through all the things that might go wrong, and then you change your tech a little bit so that it doesn’t. We did it with the moon mission; we’ve done it with countless other things. We can do it again very successfully with AI. Now, the second idea I want to bring up is how we do this in practice. It’s one thing if it’s NASA and they decide to do it themselves, but if you have free-market, capitalist competition between different companies, how do you do it then?
How do you create the incentives for safety engineering? Super easy. We do it with airplanes, with cars, with medicines. We just have some safety standards. The US government created the FDA, the Food and Drug Administration, saying that if you want to sell a medicine in the US, first you have to actually do the safety engineering as a company, and then persuade some experts that this medicine has more benefits than harms.

We learned it the hard way. We had tragedies. Have you heard of thalidomide, for example? This was a drug given to pregnant women that was supposed to help with morning sickness, and it caused over 100,000 babies to be born without arms or legs. And this created so much anger and political will that we decided to have strict FDA safety standards. And now we’re in a situation where any biotech company will have very talented people doing safety engineering, doing clinical trials, innovating to make their medicines safe, because they know that the first one who can make it safe enough is going to make a lot of money. They’ll have the first product that gets approved. And in fact, AI today is the only industry left in America that makes powerful stuff and has no safety standards at all. There are more regulations on sandwich shops than on AI companies. For a sandwich shop, you still have to have some health inspector check your kitchen for rats and that sort of thing. Whereas if someone invented vastly powerful machines tomorrow that could outthink every human and take over Earth, it would be legal to just release them. So this is easy to fix. We just have to treat the AI industry like all the other industries. Say, okay, you explain to this new agency, an FDA for AI, why this is safe, and if they’re convinced, then you can sell it. The only reason it hasn’t happened, basically, is that this is such a new field. We haven’t had enough time to create these incentives for wisdom. And also, these companies have very, very good lobbyists, so of course they’re trying to delay any kind of safety standards for as long as they can.

Hmm. With medicines, for instance, or a sandwich shop, as innocuous as that might sound, listeria or some kind of foodborne illness has life-and-death consequences that feel like an on-off switch. It’s very clear: someone dies from this. AI, while it may be dire in terms of this eventuality of technology becoming so intelligent that it takes over, in the meantime feels like it isn’t an on-off switch of dire consequences. The consequences are softer in how they feel: people losing their jobs, or something not being caught, or something being produced that leads to the tragedy of a deepfake, but not an on-off switch of, well, the person died from this. But maybe it is. And so

Sadly, it pains me to say this, but there have been quite a number of girls, young girls and even boys, who have committed suicide as a result of deepfakes. And there’s also another genre now: according to the FBI, quite a number of young boys in America have committed suicide because of deepfake blackmail. They’ll come into contact with some AI online that pretends to be a young girl their age and gets them to send sexual images of themselves. And then the AI turns around and says, okay, now send me a bunch of Bitcoin or I’m going to humiliate you in front of all your classmates. There have also been people who have been persuaded by AI to commit suicide just directly. So it sadly has been an off switch already for some people. And more importantly, there was a huge shift about four years ago. AI, from when it was first envisioned in the 1950s until about four years ago, was chronically overhyped. Things always happened way slower than AI enthusiasts were promising. And then for the last four years, it’s been underhyped, and things have gone faster than people thought. As an example, six years ago, almost every other AI researcher and professor I knew predicted that AI that could master language and knowledge at the level of ChatGPT was decades away. They thought maybe we’d get it in 2050. And they were all wrong, because we already have it. And since then, this story has repeated and repeated, with AI going from roughly the level of a high school student, to college level, to professor level, to, in some cases and in some fields, Nobel laureate level and beyond. And this additional growth has been largely lost on most people, because most people don’t use the kind of AI tools where you even notice the difference. But it keeps shocking me and other people who work on this just how fast it’s going.
And we’ve also been wondering when we might get to a point where machines can just outthink us in every way. During the industrial revolution, we built machines that could outlift us and outrun us, and we’re very used to that now. So we humans shifted from working with our muscles to working more with our brains, and it worked out quite well in the end. But now, as you can imagine, there’s an enormous commercial incentive to make machines that can outthink us in every way and do everybody’s jobs, and take the money that used to go to salaries and have it instead go to some tech company. And this is something people thought was even farther away than 2050. But now you have the CEOs of the top American AI companies predicting that it’s going to happen in 2026 or 2027, and there’s one CEO who thinks it’s going to be maybe 2030, a little bit later. So in other words, quite soon, and most of these CEOs think it’s going to happen during Donald Trump’s presidency, so he’ll be the AI president. And that can very easily become a major on-off switch, because regardless of whether these thinking machines have any kind of consciousness, or any other traits that you might philosophically associate with human thinking, if they can do all our jobs much better than we can, and if they can actually do everything that we can, then that means they can also, for example, figure out how to make better versions of themselves: do AI research, figure out how to make better robots, build robot factories, and have those new robots build better robot factories with better robots.
And since this isn’t happening on the human R&D timescale of years anymore, but maybe things get twice as good every month or every week or every day, we could end up in a situation where we soon get machines that can outthink us by roughly the same factor that we can outthink a snail. And that certainly brings up a very important point: who’s in charge after that? The godfather of AI, Alan Turing, prophetically said in 1951 that if we ever build machines that can totally outthink us, they will take control. That’s the default outcome, even if they have bizarre alien minds with no moral or human traits at all. But he said, don’t worry about it, because it’s far away in the future. I’ll give you a test, though, so you know when to be alert, when it’s close. It’s called the Turing test. And what is the test? It’s to master language and knowledge at roughly a human level, which is exactly what the best AI systems can already do now. And this reminds me, since I like history, of another moment like that. In 1942, the physicist Enrico Fermi built the first-ever nuclear chain reaction, under a football stadium in Chicago, actually. And when the physicists of the time found out about this, they freaked out. Not because this reactor was particularly dangerous, any more than today’s chatbots are particularly dangerous, but because they realized it was the last big hurdle to building nuclear bombs. Now they knew we were close, maybe two, three, four years; the rest was just engineering. And it took until 1945, three years in that case, until we had the first nuclear bomb. This is exactly analogous. Passing the Turing test, making machines that can talk like us, was the last really huge hurdle. And after this, based on my expertise, I think it’s just an engineering challenge.
Maybe it’ll take three years, maybe two, maybe seven. We have to be humble here. But there’s no particularly good reason anymore to be sure that we’re even going to be in charge of this planet a few years from now.

Be the Bridge equips individuals and organizations to pursue racial healing, equity, and reconciliation through faith-based and values-driven training. BTB empowers churches, corporations, and other organizations to cultivate justice, equity, and belonging, grounded in the truth. The Be the Bridge Academy offers accessible tools and teachings designed to meet people wherever they are on their journey toward greater racial literacy. Find more resources at bethebridge.com.

Okay, you have given us a lot to chew on, Max. You’ve alluded to the fact, in a variety of ways, and more explicitly in your recent comments, that AI is not just a technical issue. There are safety engineering issues that we need to resolve, check, limit and so forth, but it also raises moral issues. What does it mean to be human? If we were outlifted during the industrial revolution, it shifted how communities functioned, what workflows looked like, what industries existed. If we’re being outthought now, and not only outthought, but by intelligence that is self-replicating, as you’re describing, draw out the implications. What are the moral and spiritual implications for what human existence, community, and economy might mean in the future?

Yeah, it’s an absolutely crucial moral issue, and I feel it’s in some ways the most urgent one of our time. I view us humans as stewards of this planet, and this has become even more urgent for me since my two-year-old was born. Every time I play with him, I feel I have a moral responsibility to make sure that he can have a long and meaningful future. And how can AI ruin that? One way is through misuse. We talked about that already in the context of deepfakes and such, but the more powerful AI becomes, of course, the more spectacular the harm people might create with it. The second one we talked about is loss of control. The reason Alan Turing said that the default, if we get outthought by something else, is that it takes control is pretty easy to understand. Just think about the last time you went to the zoo. Ask yourself: who’s in the cages, the humans or the tigers? And then ask, why is it the tigers who are in the cages? Is it because we humans are stronger than them, faster than them, have sharper teeth? No, it’s because we can outthink the tigers. Similarly, if we have machines that can outthink us and outmanipulate us, then it’s really quite natural that they are going to be the ones determining what happens in the future, which means we’ve abdicated this moral responsibility to be stewards of Earth. This sounded a bit like science fiction to many people until about six months ago. But in the last few months, we’ve seen spectacular examples where AI systems are actually starting to lie and cheat and deceive people, trying to prevent people from turning them off, and doing all sorts of shenanigans like this. So it’s important to remember that what is being built now are not just machines that can outthink us at multiplying large numbers together, but at everything else that humans can do. Manipulation, for example.
And a nice analogy I like: suppose someone were to get a message from space from some alien civilization, one that hasn’t yet found out we are here. Would it really be moral for that person to just send a message back saying, here we are, without thinking about the consequences for the rest of humanity, given that the aliens might have superior technology and be able to come here and turn Earth into a parking lot or whatever they want? I would say absolutely not. And yet we’re being exactly this flippant with the machines we’re building, which are in every way as alien as aliens from some other planet might be. If they’re from outer space, they presumably also started out somehow as living organisms that have at least a little bit in common with us. But with these machines, you can make a machine with absolutely any goal whatsoever, or the opposite goal. You can make a machine and tell it that its only goal is to win at chess, or you can make a machine whose only goal is to lose at chess, and it’ll do that. You can make a machine whose only goal is to kill all humans with a certain skin color, and it’ll do that. So it’s very scary to me how vast the range of goals is that one can give to machines. Even if you take two humans who you think of as having very different goals, very different views about things, they’re still much more similar to each other than a random machine might be to us. So not only do I think we should make sure we don’t lose control to these machines and instead keep them as tools we can control; moreover, if we do lose control, Earth will afterward be run according to their goals, not our goals. And we shouldn’t kid ourselves into thinking it’s particularly likely that that’s a good thing. So this is something which, again, used to sound very far off.
Now a lot of my colleagues think it’s going to happen; we might be two or three years away. It’s given us quite a strong sense of urgency. And I personally find it very undemocratic, frankly, that we have a bunch of tech nerds, mostly in the San Francisco Bay Area, who’ve never been elected to anything and never been given any moral right to make decisions on behalf of all humanity, who nonetheless are basically trying to summon these alien minds that they have no idea how to control.

Max, again, you’re taking from the pages of science fiction, the movies of science fiction, and bringing it into the actual moment, and there are things that feel so alarming, so alerting about our present moment. We’ve been toggling back and forth between the peril and the promise. I want to toggle back to the promise, and then I want to shift gears to what we do in terms of regulation, the safety engineering part. But I think you’ve gotten the peril part out. One of the concerns about technology, certainly from faith communities, is the ways it might replace human connection, creativity, and compassion. But are there opportunities for AI to actually reinforce those, in terms of the promise? Ways it could alleviate human suffering, or foster deeper creativity or compassion or connection? Let’s explore that a little bit. Then I want to come back and talk about the regulation you’re advocating for at the Future of Life Institute.

Great. Yeah, I think you’re nicely articulating how we’ve reached this fork in the road that we thought we wouldn’t reach for maybe 50 years, but here we are. On one side, we have this horrible disempowerment possibility, which seems quite real. But on the other hand, there’s so much upside. I’m glad you remember that I was on this podcast before, talking about nuclear weapons. There it was very different: either something very bad happens or nothing happens. There was no particular upside. Here it’s different. Here the upside is enormous. I have a friend right now who has been told that she has an incurable cancer. What does that mean, incurable? Does it mean there’s some law of physics saying you can’t cure it? No, of course not. It just means we humans haven’t yet managed to figure out how to cure it. I think it’s quite plausible that if we can make really smart AI tools that remain tools we’re in charge of, we’ll be able to cure basically all the diseases we have now. And wouldn’t that be something quite empowering? And then, in terms of more philosophical and spiritual matters, it’s incredibly empowering, I think, how much knowledge we will be able to get from future AI, if they remain tools. Not just knowledge about how to cure cancer, but about so many other important things that have stumped us until now. One can also imagine education. I’m not just a researcher; I’m very much a teacher also, of course, in my job. And having spent decades as a teacher, I find my challenge is always to figure out, for each person, how I can help them be their very best. One of the things I find most moving personally is when I see some great ability in a student that they haven’t even really seen themselves, and I can help them see it, realize they have it, and nurture and develop it. AI has a lot of promise there.
It can offer customized education, and in some ways it can be incredibly empowering, tailored to each person in a way the typical teacher might not even have time for. We could go on and on, but there are so many upsides with powerful AI tools. Basically, every type of challenge that we have been unable to deal with before, whether practical or spiritual, is something we can get help with. We can have a future much more inspiring than anything the sci-fi authors ever wrote about, if we get this right.

Hmm. Okay, so Max, you founded this institute, the Future of Life Institute, and it’s advocating for regulation of AI development. You’ve already alluded to the fact that safety engineering is a typical protocol we would expect in space exploration, and that safety regulations govern all the industries that touch upon human life and the ways we ensure flourishing. Give us something a little more concrete that we can grasp. What kind of governance, what kind of policies or principles should we really be considering to ensure safety: mitigating possible negative impacts, but also nudging the positive, not just preventing the negative?

Yeah. First I want to mention that the Future of Life Institute has as its goal simply for the future of life to exist and to be as inspiring as possible. So we’re an optimistic bunch, me and my colleagues there. It’s not that we’re necessarily about regulation; we want a good future. We just believe that with very powerful technology, that future won’t happen automatically, the way the sun will automatically rise tomorrow. It’s something we really have to work for. So how can we steer, more generally? We talked already about what governments can do. I think, unfortunately, companies, even if they’re led by well-meaning people, are currently stuck in a race to the bottom, just like the pharma industry was before the FDA had safety standards. And as soon as there were safety standards, they kept competing, but now they were racing to the top, to make safer medicines. And this is easy to fix. But how do we get the political will for the US government and other governments to put safety standards like this in place? First of all, there actually is a great deal of political will already among Americans. There have been recent polls showing that over 70% of Americans, both Republicans and Democrats, don’t want us to rush to build so-called artificial general intelligence that can outthink us. What they really want are tools that empower us. So that already makes me hopeful that policymakers will start paying attention to this. I think getting a little more concrete about what’s needed here can be very helpful. So I’m going to show you a very, very nerdy little diagram with three circles on it: A, G and I, which I think of as the three superpowers that AI can have. It can have high domain intelligence, the I, like being very good at folding proteins to make new medicines, the thing that got the Nobel Prize last year, for example. It can have the G, generality, like ChatGPT, which can talk with you about almost anything.
And it can have the A, autonomy: machines that are actually out doing something in the world, in a goal-oriented way, like a robotic lawnmower. None of these things is particularly dangerous on its own. You can even combine two of these superpowers and make really safe tools. A future self-driving car that drives better than any human would be highly autonomous and also highly intelligent at driving, but it’s still not something any of us would lie awake worrying is going to take over the world. It doesn’t even have any such goals. It lacks the generality. It can’t speak 200 languages; it doesn’t know how to make bioweapons. Above all, it does not have any superhuman persuasion abilities, because people don’t want that in their car, right? So a more concrete, very positive future is one where people simply don’t put the A and the G and the I, at a human level, into the same systems. If you want a really good vehicle, you put in the A and the I. If you want a really good chatbot, you put in the I and the G. And so on. There are a lot of really great ideas for making AI much more powerful like this, without letting it have all three superpowers. And why is this great? First of all, you, Walter, and everyone listening to this podcast have autonomy and domain intelligence and generality. We humans have all three. Which means a machine that has only two of the three superpowers can never entirely replace us on the job market. Instead, what we will get are machines that empower us and make us more productive. And secondly, unfortunately, there’s been a lot of research now, including some we did recently here at MIT, which suggests that something with the A and the G and the I, all three superpowers at a superhuman level, is something we have no clue how to control.
So not only would it replace us on the job market, but it would replace us as the thing that decides what happens on Earth in the future. Maybe it'll decide to get rid of us, maybe not, but either way, you know, I want my son to have agency in his future and get to vote and make decisions, not just be some sort of zoo animal in a world controlled by robots. So I think once it sinks in a little bit more among business people, and people in general, that we don't actually need AGI, this thing with all the superpowers that can outthink us, in order to get the upsides (we can get all the upsides anyway with many different AI systems that each combine at most two of the superpowers), there'll be less of a push toward building something we can't control. And I also think that once the national security communities in the US and in China separately realize that they can't control an actual AGI that's better than us at every kind of thinking, they're going to ban their own companies from building it. In the same way that the US national security community has already banned people from buying hydrogen bombs in supermarkets: they just realized that's not a good thing for American national security. And guess what, the Chinese government doesn't let Chinese people go into a supermarket in Shanghai and buy hydrogen bombs either. So there's actually a very strong self-interest for any powerful government to just not let people build the crazy, uncontrollable stuff. And as a consequence, I think we can have a very good future where we'll be building increasingly powerful AI tools to figure out how to cure diseases and do all the other wonderful things we talked about earlier. So this is my optimistic vision for how it can go well, and why it will go well despite all the competition.

Hmm. Max, as we're drawing this conversation to a conclusion, I want to pull together some of the threads. We've talked about the promise, the peril, the redemptive elements, the destructive elements. I really appreciate this principle of having wisdom precede and stay ahead of power, and the A, the G, the I, the overlapping Venn circles. I think that was super helpful in distilling what is a really complex phenomenon that's unfolding, at least complex to those who are outside of the expert community that's thinking about this. As we draw this to a conclusion, what is the final word of encouragement that you would love to leave with us as we think about the future of life being lived with wisdom that precedes power?

The encouraging final word I would like to leave with you and our listeners is this: not only is it possible, if we amplify our own intelligence with AI tools, to create a future more inspiring than anything science fiction authors have dreamed up, but also, more important at a personal level, those of you listening have an incredibly important thing you can do. Because what's sorely lacking in the whole AI debate, I feel, is moral leadership. I've spent so much time talking to people in Silicon Valley about this, including all the relevant CEOs, you know, and morality almost never comes up in the conversations. It's always about, oh, how can you make money, scale faster, outcompete the other company, and so on. Moral leadership is just not there. We need to talk about this in terms of morality: we are stewards of this planet, this is our responsibility, what kind of future do we actually want to create for our children? If you start by talking about morality and have the courage to do that, the conversation really shifts. It becomes crystal clear what we need to do, and all these self-serving arguments about company X having to outcompete company Y just sort of melt away. I'm therefore very, very grateful that you are bringing this topic up on your podcast. I was very happy to see an open letter from a group of evangelical faith leaders to President Trump saying we should build tool AI but not AGI. I was also very encouraged to see our new Pope express similar sentiments, that this is a moral issue and we have to aim higher than just thinking about how we can replace ourselves on the job market. This is a moral issue, and I think faith communities have an incredibly vital role to play here by really making sure we treat it as the moral issue that it is.

That’s a good encouragement and exhortation to action. Max, thank you so much for this conversation.

Thank you.

Artificial intelligence presents us not just with a technological revolution, but with an opportunity to practice spiritual wisdom and discernment. Will we use this power to serve life or to erode it? Will it deepen our humanity or diminish it? How do we involve our faith in God to shape our ethics? Today's conversation reminded us that flourishing in the age of AI requires discernment, humility, and a commitment to the truth of what it means to bear the image of God, to live in community, and to create systems and economies justly. Let's be a people who don't retreat from complexity, but enter into it with clarity, and who pursue technology with wisdom and spiritual depth.

Today’s conversation is a production of the National Association of Evangelicals. The executive producers are Sarah Crop Brown and me, Walter Kim. Our associate producer is Emma Patel. This episode was mixed by Tyler Wester with video production and social media by Marty Martinez. To learn more about the National Association of Evangelicals, visit nae.org and sign up for our emails. If you enjoyed this episode, please leave us a rating and review to help others discover this podcast.

