Video Podcasts

Machines Like Us

Role: Video Editor | Client: Paradigm

Credits:

Mitchell Stuart - Producer

Taylor Owen - Host

Could an alternative AI save us from a bubble? (Gary Marcus)

Publish date: Dec 2, 2025

DESCRIPTION:

Over the last couple of years, massive AI investment has largely kept the stock market afloat. Case in point: the so-called Magnificent 7 – tech companies like NVIDIA, Meta, and Microsoft – now account for more than a third of the S&P 500’s value. (Which means they likely represent a significant share of your investment portfolio or pension fund, too.) There’s little doubt we’re living through an AI economy.

But many economists worry there may be trouble ahead. They see companies like OpenAI – valued at half a trillion dollars while losing billions every month – and fear the AI sector looks a lot like a bubble. Because right now, venture capitalists aren’t investing in sound business plans. They’re betting that one day, one of these companies will build artificial general intelligence.

Gary Marcus is skeptical. He’s a professor emeritus at NYU, a bestselling author, and the founder of two AI companies – one of which was acquired by Uber. For more than two decades, he’s been arguing that the deep learning approach underpinning today’s large language models (LLMs) – the technology behind ChatGPT, Claude, and Gemini – just isn’t that good. Marcus believes that if we’re going to build artificial general intelligence, we need to ditch LLMs and go back to the drawing board. (He thinks something called “neurosymbolic AI” could be the way forward.)

But if Marcus is right – if AI is a bubble and it’s about to pop – what happens to the economy then?

Can AI Lead Us to the Good Life? (Rutger Bregman)

Publish date: Nov 18, 2025

DESCRIPTION:

In Rutger Bregman’s first book, Utopia for Realists, the historian describes a rosy vision of the future – one with 15-hour work weeks, universal basic income and massive wealth redistribution. It’s a vision that, in the age of artificial intelligence, now seems increasingly possible.

But utopia is far from guaranteed. Many experts predict that AI will also lead to mass job loss, the development of new bioweapons and, potentially, the extinction of our species. So if you’re building a technology that could either save the world or destroy it – is that a moral pursuit?

These kinds of thorny questions are at the heart of Bregman’s latest book, Moral Ambition. In a sweeping conversation that takes us from the invention of the birth control pill to the British abolitionist movement, Bregman and I discuss what a good life looks like (spoiler: he thinks the death of work might not be such a bad thing) – and whether AI can help get us there.

How to Survive the “Broligarchy” (Carole Cadwalladr)

Publish date: Nov 4, 2025

DESCRIPTION:

At Donald Trump’s inauguration earlier this year, the returning president made a striking break from tradition. The seats closest to the president – typically reserved for family – went instead to the most powerful tech CEOs in the world: Elon Musk, Mark Zuckerberg, Jeff Bezos and Sundar Pichai. Between them, these men run some of the most profitable companies in history. And over the past two decades, they’ve used that wealth to reshape our public sphere.

But this felt different. This wasn’t discreet backdoor lobbying or a furtive effort to curry favour with an incoming administration. These were some of the most influential men in the world quite literally aligning themselves with the world’s most powerful politician – and his increasingly illiberal ideology.

Carole Cadwalladr has been tracking the collision of technology and politics for years. She’s the investigative journalist who broke the Cambridge Analytica story, exposing how Facebook data may have been used to manipulate elections. Now, she’s arguing that what we’re witnessing goes beyond monopoly power or even traditional oligarchy. She calls it techno-authoritarianism – a fusion of Trump’s authoritarian political project with the technological might of Silicon Valley.

So I wanted to have her on to make the case for why she believes Big Tech isn’t just complicit in authoritarianism, but is actively enabling it.

AI Music is Everywhere. Is it Legal? (Ed Newton-Rex)

Publish date: Oct 21, 2025

DESCRIPTION:

AI art is everywhere now. According to the music streaming platform Deezer, 18 per cent of the songs being uploaded to the site are AI-generated. Some of this stuff is genuinely cool and original – the kind of work that makes you rethink what art is, or what it could become.

But there are also songs that sound like Drake, cartoons that look like The Simpsons, and stories that read like Game of Thrones. In other words, AI-generated work that’s clearly riffing on – or outright mimicking – other people’s art. Art that, in most of the world, is protected by copyright law. Which raises an obvious question: how is any of this legal?

The AI companies claim they’re allowed to train their models on this work without paying for it, thanks to the “fair use” exception in American copyright law. But Ed Newton-Rex has a different view: he says it’s theft.

Newton-Rex is a classical music composer who spent the better part of a decade building AI music generators, most recently for a company called Stability AI. But when he realized the company – and most of the AI industry – didn’t intend to license the work they were training their models on, he quit. He has been on a mission to get the industry to fairly compensate creators ever since.

I invited him on the show to explain why he believes this is theft at an industrial scale – and what it means for the human experience when most of our art isn’t made by humans anymore, but by machines.

Geoffrey Hinton vs. The End of the World

Publish date: Oct 7, 2025

DESCRIPTION:

The story of how Geoffrey Hinton became “the godfather of AI” has reached mythic status in the tech world. While he was at the University of Toronto, Hinton pioneered the neural network research that would become the backbone of modern AI. (One of his students, Ilya Sutskever, went on to be one of OpenAI’s most influential scientific minds.) In 2013, Hinton left academia and went to work for Google, eventually winning both a Turing Award and a Nobel Prize. I think it’s fair to say that artificial intelligence as we know it may not exist without Geoffrey Hinton.

But Hinton may be even more famous for what he did next. In 2023, he left Google and began a campaign to convince governments, corporations and citizens that his life’s work – this thing he helped build – might lead to our collective extinction. And that moment may be closer than we think, because Hinton believes AI may already be conscious.

But even though his warnings are getting more dire by the day, the AI industry is only getting bigger, and most governments, including Canada’s, seem reluctant to get in the way. So I wanted to ask Hinton: if we keep going down this path, what will become of us?

AI is Upending Higher Education. Is That a Bad Thing? (Niall Ferguson & Conor Grennan)

Publish date: Sep 23, 2025

DESCRIPTION:

Just two months after ChatGPT was launched in 2022, a survey found that 90% of college students were already using it. I’d be shocked if that number wasn’t closer to 100% by now. Students aren’t just using AI to write their essays. They’re using it to generate ideas, conduct research, and summarize their readings. In other words: they’re using it to think for them. Or, as New York Magazine recently put it, “everyone is cheating their way through college.”

University administrators seem paralyzed in the face of this. Some worry that if we ban tools like ChatGPT, we may leave students unprepared for a world where everyone is already using them. But others think that if we go all in on AI, we could end up with a generation capable of producing work – but not necessarily original thought.

I’m honestly not sure which camp I fall into, so I wanted to talk to two people with very different perspectives on this. Conor Grennan is the Chief AI Architect at NYU’s Stern School of Business, where he’s helping students and educators embrace AI. And Niall Ferguson is a senior fellow at Stanford and Harvard, and the co-founder of the University of Austin. Lately, he’s been making the opposite argument: that if universities are to survive, they largely need to ban AI from the classroom.

Whichever path we take, the consequences will be profound. Because this isn’t just about how we teach and how we learn – it’s about the future of how we think.

Douglas Rushkoff Doesn't Want to Talk About AI (2024)

DESCRIPTION:

Douglas Rushkoff has spent the last thirty years studying how digital technologies have shaped our world. The renowned media theorist is the author of twenty books, the host of the Team Human podcast and a professor of Media Theory and Digital Economics at City University of New York. But when I sat down with him, he didn’t seem all that excited to be talking about AI. Instead, he suggested – I think only half jokingly – that he’d rather be talking about the new reboot of Dexter.

Rushkoff’s lack of enthusiasm around AI may stem from the fact that he doesn’t see it as the ground-shifting technology that some do. Rather, he sees generative artificial intelligence as just the latest in a long line of communication technologies – more akin to radio or television than fire or electricity.

But while he may not believe that artificial intelligence is going to bring about some kind of techno-utopia, he does think its impact will be significant. So eventually we did talk about AI. And we ended up having an incredibly lively conversation about whether computers can create real art, how the “California ideology” has shaped artificial intelligence, and why it’s not too late to ensure that technology is enabling human flourishing – not eroding it.

Full Episode:

Social Media Clips:

Why Journalism Made a Devil’s Bargain with Big Tech (2024)

DESCRIPTION:

Things do not look good for journalism right now. This year, Bell Media, VICE, and the CBC all announced significant layoffs. In the US, there were cuts at the Washington Post, the LA Times, Vox and NPR – to name just a few.

One of the central reasons for this is that the advertising model that has supported journalism for more than a century has collapsed. Simply put, Google and Meta have built a better advertising machine, and they’ve crippled journalism’s business model in the process.

It wasn’t always obvious this was going to happen. Fifteen or twenty years ago, a lot of publishers were actually making deals with social media companies, thinking they would lead to bigger audiences and more clicks. But these turned out to be Faustian bargains. The journalism industry took a nosedive, while Google and Meta became two of the most profitable companies in the world.

And now we might be doing it all over again with a new wave of tech companies like OpenAI.

Julia Angwin has been worried about the thorny relationship between big tech and journalism for years. She’s written a book about MySpace, documented the rise of big tech, and won a Pulitzer for her tech reporting with the Wall Street Journal.

She was also one of the few people warning publishers the first time around that making deals with social media companies maybe wasn’t the best idea.

Now she’s sounding the alarm again – this time as a New York Times contributing opinion writer and the CEO of Proof News, a journalism startup preoccupied with the question of how to get people reliable information in the age of AI.

Full Episode:

Social Media Clips:
