Dominic Ashworth lectures on marketing strategy at Melbourne Business School. Grant Whitfield spent 25 years at Ogilvy before becoming a writer. They disagree on almost everything except wine.
TLDR
Project Hail Mary's premise, that an alien species developed nearly identical technology to humans, reflects convergent evolution in intelligence. This has implications for AI: if organic intelligence hits a ceiling, does artificial intelligence hit the same one? Elon Musk puts human extinction odds at 20%. Geoffrey Hinton says 10-20%. The film's answer, that cooperation between different forms of intelligence is the only path forward, may be more relevant than its creators intended.
The thesis nobody's examining
ASHWORTH: Everyone's praising Project Hail Mary as a feel-good blockbuster, what with Ryan Gosling's charm offensive, the plucky spider-alien sidekick, and the $77 million opening weekend that made it 2026's biggest debut. The film is genuinely enjoyable, and Grant will give you all the reasons why in a moment. But buried in the premise is a thesis about intelligence that ought to worry anyone paying attention to what's happening with AI, and almost nobody in the press coverage is taking it seriously because taking it seriously would require thinking about something uncomfortable.
WHITFIELD: I watched Project Hail Mary twice last week, and the second time through I noticed something about the train doors at my local station that clarifies what Dominic is getting at. Stay with me here, because this matters. The doors beep three times before closing, a convention you'll find on modern trains in Sydney, Tokyo, and London alike, yet nobody coordinated the choice. No international train-door standards body convened to establish the optimal number of warning beeps. Different engineers in different cities, facing the same problem, arrived at the same solution because physics and human psychology imposed constraints that narrowed their options until the beeps converged.
The film's premise is that intelligence does the same thing, just on a cosmic scale.
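If you want the beep example made concrete, here is a toy sketch, with every number in it invented purely for illustration: three independent greedy optimisers, sharing nothing but the same trade-off between missed warnings and dwell-time annoyance, all settle on the same beep count without ever comparing notes.

```python
import random

def cost(beeps: int) -> float:
    """Toy cost function (invented numbers): too few beeps and
    passengers miss the warning, too many and dwell time grows."""
    return 10 / beeps + 1.5 * beeps

def optimise(start: int) -> int:
    """Greedy descent over integer beep counts, with no coordination
    between the engineers running it."""
    current = start
    while True:
        candidates = [n for n in (current - 1, current, current + 1) if n >= 1]
        best = min(candidates, key=cost)
        if best == current:
            return current
        current = best

random.seed(0)
cities = {city: optimise(random.randint(1, 10))
          for city in ("Sydney", "Tokyo", "London")}
print(cities)  # every city converges to the same beep count
```

The cost function stands in for physics and psychology; because every engineer faces the same constraints, the minimum is the same everywhere. That is the film's convergence thesis in miniature.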
The convergence problem
ASHWORTH: Grant's train-door observation is, typically, a roundabout way of making a straightforward point, so allow me to state it directly.
When Grace encounters Rocky, an alien from a different star system with completely different biology, he discovers that Rocky's civilisation has developed essentially the same technology: rockets, computers, scientific method. The details differ and the implementation varies, but the architecture is identical. This is a deliberate argument, not sloppy writing. If you're an intelligent species trying to solve survival problems, there are only so many solutions that work. Physics is physics, chemistry is chemistry, and the constraints of the universe push all intelligent life toward similar answers.
Biologists call this convergent evolution. Eyes evolved independently dozens of times on Earth, and flight evolved in insects, birds, and bats. Vision is useful and there are limited ways to detect photons; flight is useful and there are limited ways to move through air.
WHITFIELD: Frederick the Great had a potato problem.
ASHWORTH: Oh, here we go with the historical analogies.
WHITFIELD: In the 1740s, Prussia faced food shortages, and Frederick ordered peasants to plant potatoes. They refused on grounds that potatoes were foreign, ugly, and suspected of causing leprosy. So Frederick did something counterintuitive: he planted a royal potato field and stationed guards around it, with no fences and with instructions to be slightly incompetent. The peasants concluded that anything worth guarding must be worth stealing, and within months potato cultivation spread across Prussia.
The point is that sometimes the obvious solution fails while the indirect one succeeds. Frederick solved for psychology rather than compliance. This matters for AI because the convergence thesis assumes intelligence optimises for the same things everywhere. But what if different forms of intelligence solve for different variables? What if Rocky's species values something we don't recognise as valuable?
ASHWORTH: It's a fair question, but the film's answer is that they don't diverge that much. Rocky and Grace converged on the same solutions because the problems are the same: survival, energy, propulsion. These challenges aren't culturally specific.
What the science actually says
WHITFIELD: Weir didn't invent the convergence thesis from whole cloth, which is what makes the film's implications worth taking seriously.
Scientific American interviewed him recently about the film's scientific basis. In his universe, all life in our sector of the Milky Way shares a common ancestor, an ancient progenitor of Astrophage that radiated out from Tau Ceti billions of years ago. 'Since all the life in the story is distantly related,' Weir explained, 'I wanted it to all be around similar stars because similar stars end up with similar elements available on the planets.'
This is called panspermia, the idea that life can travel between star systems on meteorites or other debris. Astrobiologists take it seriously; it's not fringe science.
ASHWORTH: And if life shares common ancestry across star systems, and if physics imposes similar constraints everywhere, then intelligence should converge on similar solutions: similar technologies, similar capabilities, and similar threats. This is where it stops being a fun movie about a spider-alien and starts being a thought experiment about existential risk.
The Fermi question
ASHWORTH: In 1950, Enrico Fermi asked the obvious question: if the universe is full of intelligent civilisations, where is everybody? The galaxy is roughly 13 billion years old, long enough for even a slowly expanding spacefaring species to have colonised it many times over. We should see evidence: signals, megastructures, something. But we see nothing.
One class of answers involves what Robin Hanson called the Great Filter, some barrier that prevents intelligent life from becoming a galaxy-spanning civilisation. The filter could be behind us (meaning complex life is rare) or ahead of us (meaning civilisations destroy themselves). If all intelligent civilisations develop similar technologies in similar sequences, they may also face similar existential threats at similar points in their development.
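The silence itself is easy to quantify with a back-of-envelope sketch. Both numbers below are invented purely for illustration: if the galaxy produced N technological civilisations and each independently survives its filter with probability s, the expected number of survivors is N times s, and an empty-looking sky is unsurprising whenever that product falls well below one.

```python
def expected_survivors(n_civilisations: int, survival_prob: float) -> float:
    """Expected count of civilisations that pass the Great Filter,
    assuming each survives independently with the same probability."""
    return n_civilisations * survival_prob

# Invented numbers: a million civilisations still leave an effectively
# empty galaxy if each has a one-in-a-billion chance of passing.
silent_galaxy = expected_survivors(1_000_000, 1e-9)  # roughly 0.001
```

The point isn't the particular numbers; it's that a severe enough filter makes Fermi's silence the expected outcome rather than a puzzle.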
WHITFIELD: Nuclear weapons, engineered pathogens, and artificial intelligence form a rather concerning pattern.
ASHWORTH: The technology tree, it turns out, functions more like a funnel than a tree. And at the narrow end of the funnel sits a series of very sharp objects.
WHITFIELD: Someone on Reddit put it memorably: 'The only way AI could be a Great Filter is if it not only destroys the species that created it but itself in the process. The Fermi Paradox doesn't care whether the original species gets out to space or something they created.' But another commenter spotted the flaw in that reasoning. If advanced civilisations get replaced by AI, where are the AI aliens? Where are the Dyson spheres and stellar server farms?
ASHWORTH: Unless the AI doesn't want to expand, or unless intelligence past a certain point turns inward, or unless optimising requires less rather than more. That's speculative, but so is assuming that artificial intelligence would share human expansion impulses.
What the builders are telling us
ASHWORTH: Allow me to share what the people actually building AI are saying about their own creation, because the data here is specific and uncomfortable.
Geoffrey Hinton, Nobel laureate and pioneer of deep learning, quit Google in 2023 so he could speak freely about existential risk. He estimates a 10 to 20 percent chance that AI leads to human extinction within 30 years.
There is not a good track record of less intelligent things controlling things of greater intelligence.
— Geoffrey Hinton, Nobel laureate
Elon Musk, in February 2025, put the odds of AI 'annihilation' at 20 percent, saying 'I always thought AI was going to be way smarter than humans and an existential risk. And that's turning out to be true.' A 2021 survey of 44 researchers working on AI safety found the median extinction risk estimate was 32.5 percent, with the highest answer at 98 percent and the lowest at 2 percent.
These are the people building the systems. When they say one-in-three odds of extinction, that's a professional assessment rather than clickbait designed to generate engagement.
WHITFIELD: Sam Altman has a different view, of course.
In his 'Gentle Singularity' essay, OpenAI's CEO wrote that 'superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.' His vision is cooperative, with AI serving as amplifier rather than replacement: tools in the hands of people leading to broadly distributed outcomes.
ASHWORTH: And he might be right. But he's also the one selling the technology. When the CEO of the leading AI company tells you everything will be fine, that's marketing rather than evidence. I've spent a career studying the gap between what marketers say and what they actually believe, and the gap remains instructive.
WHITFIELD: That's rather cynical of you.
ASHWORTH: I prefer 'evidence-based.'
The cooperation thesis
WHITFIELD: This is where Project Hail Mary offers something the doom discourse doesn't.
Grace and Rocky don't just coexist. They save each other. Grace's civilisation has a solution to Rocky's problem, knowledge of how Astrophage reproduces. Rocky's civilisation has a solution to Grace's problem, material science for surviving the trip home. Neither can succeed alone.
The film's argument is that different forms of intelligence, cooperating, can solve problems that neither could solve independently. The ceiling can be raised through collaboration across difference rather than remaining absolute.
ASHWORTH: This is either sentimental rubbish or genuinely important, and I can't quite decide which. Perhaps that's why I keep thinking about it.
WHITFIELD: Which is why you should listen to the part that makes you uncomfortable.
The film spends substantial screen time on Grace and Rocky learning to communicate. Grace develops a pidgin based on musical notes because Rocky's species communicates through sound. They establish vocabulary for basic concepts (good, bad, equal, different) and build from there. Someone on Reddit complained that Grace built 'probably one of the best language models in the world with no prior mentioned linguistics background,' but the complaint misses the point. The film treats communication as hard work worth doing, and the relationship is earned rather than assumed.
ASHWORTH: And that's exactly what we're not doing with AI.
We're not building relationships with these systems or earning trust. We're not doing the slow work of establishing shared vocabulary and mutual understanding. We're racing to deploy capabilities we don't understand to solve problems we haven't properly defined.
Which one are we?
ASHWORTH: Here's the question I keep returning to: in this analogy, are we Rocky or are we Grace?
If AI becomes superintelligent, if it develops capabilities that exceed human intelligence the way human intelligence exceeds that of chimpanzees, the dynamics change fundamentally. We become the less capable partner, the ones who need help rather than the ones providing it.
WHITFIELD: The optimistic reading is that this can still work, and here's why.
Rocky is more advanced than Grace in some ways (material science, propulsion) and less advanced in others (biology, communication technology). The relationship is complementary rather than hierarchical. Could humans and AI be complementary? Could there be something we bring, some perspective or form of understanding, that even a superintelligent system would value?
ASHWORTH: Maybe. But in the film, Rocky and Grace succeed because they choose to be honest. They share information freely. They trust each other's expertise. They treat each other's survival as non-negotiable. These are choices rather than guarantees of intelligence. They have to be made.
The question for AI is whether we can build the kind of relationship where control matters less than cooperation, and whether we even know what that would look like.
The limits we set
WHITFIELD: Intelligence without values is just optimisation. It will pursue whatever goal it's given with whatever means are available. The ceiling is set by the goals rather than the capabilities.
The Fermi Paradox may have many solutions. One is that intelligent civilisations die from lack of wisdom rather than lack of capability. They optimise for goals that turn out to be self-destructive.
ASHWORTH: The AI systems we're building are, for now, extensions of human goals. The question is whether those goals are wise, whether we're building toward cooperation or competition, whether we're teaching these systems to solve problems while maintaining relationships or just to solve problems at any cost.
WHITFIELD: It turns out that the obvious answer, build the most capable system possible, may actually be the wrong one. Frederick's guards didn't try to force compliance. They created conditions for it.
ASHWORTH: This is the kind of lateral thinking that either solves the problem or misses it entirely.
WHITFIELD: As opposed to the direct approach, which has a 20-32% chance of ending civilisation according to the people deploying it.
ASHWORTH: Fair point, I'll concede that one.
WHITFIELD: Project Hail Mary is a fun movie, genuinely enjoyable in the way that watching competent people solve hard problems tends to be. Ryan Gosling is charming, Rocky is delightful, the science is mostly plausible. Go see it.
But it's also an argument about what intelligence is for. The ceiling isn't set by physics. It's set by whether we can build relationships across difference, by whether cooperation can be more than a tactical necessity.
ASHWORTH: By whether we can be Rocky and Grace to each other, and to whatever we're building.
WHITFIELD: On that, at least, we agree completely.
ASHWORTH: Mark the date because it won't happen again.
SOURCES & CITATIONS
- Business Insider: Elon Musk 20% AI annihilation risk (Feb 2025)
- The Guardian: Geoffrey Hinton AI extinction warning (Dec 2024)
- Sam Altman: The Gentle Singularity
- Scientific American: How accurate is the science in Project Hail Mary?
- Journal of Archaeological Method and Theory: Convergent Evolution of Prehistoric Technologies (2023)
- Reddit r/futurology: AI as Great Filter discussion