What does it mean to keep wisdom ahead of rapidly advancing technology? In this episode of Today’s Conversation, Max Tegmark, professor and AI researcher at MIT and co-founder of the Future of Life Institute, explores the urgent question of how society can ensure artificial intelligence empowers — rather than endangers — human flourishing.

In this episode, Max Tegmark joins NAE President Walter Kim to unpack the profound opportunities and alarming risks posed by AI. Drawing on vivid analogies, historical lessons and real-world examples, they examine why the stakes for wisdom, foresight and moral courage have never been higher.

In their conversation, they discuss:

  • Why wisdom must stay ahead of technology’s power — and what happens when it doesn’t;
  • How past successes in safety engineering, like airbags and the FDA, offer a roadmap for responsible AI development;
  • The urgent need for faith communities to raise AI as a moral issue, not just a technical one; and
  • How re-centering the conversation on stewardship and responsibility can shift priorities toward the common good.

Subscribe today wherever you listen to podcasts.

Do you like the podcast?
Give us a 5-star rating on Apple Podcasts and leave us a review. This is the best way for others to discover these conversations. If you listen on Spotify, give us a follow and hit the notification bell to be sure you never miss an episode. And don’t forget to pass your favorite episodes along to colleagues, friends and family.

Resources


Today’s Conversation is brought to you by Be the Bridge.


Read a Portion of the Transcript

Walter: What kind of principles can we keep in mind to ensure that wisdom has any hope of keeping up with the technology?

Max: That's a really great question, Walter. There are two principles that can really help us here. First of all, when technology developed more slowly and the consequences were not so bad that we could never recover from them, we quite successfully used the strategy of simply learning from mistakes to win this wisdom race. We invented the car. A lot of tragedies happened. And then we invented the seatbelt and the airbag and the traffic light and all sorts of other things.

When technology is this powerful, even one mistake is unacceptable. We have to shift away from that philosophy: instead of being reactive, be proactive.... When NASA sent people to the moon, they spent a lot of time thinking through everything that could go wrong when you put three dudes on top of explosive fuel tanks and send them somewhere no one can help them. Was that doomerism? Was that Luddite scaremongering? No, that was what they called safety engineering. Exactly the safety engineering that made the moon mission successful.

And that's exactly the positive philosophy I'm talking about here, too. You take a bunch of smart people and you have them think through all the things that might go wrong, and then you change your tech a little bit so that it doesn't. We've done it with the moon mission. We've done it with countless other things. We can do it again here very successfully with AI. Now, the second idea I want to bring up is...