Dario Amodei flew to Canberra on Wednesday to sign a Memorandum of Understanding with Prime Minister Anthony Albanese. The deal covers AI safety research, economic data sharing, and workforce tracking across four sectors.
TLDR
Anthropic signed a Memorandum of Understanding with the Australian government covering AI safety research, economic impact tracking, and $3 million in research grants to four universities. The deal is the first under Australia's National AI Plan. Anthropic will share its Economic Index data with Canberra to track AI adoption across mining, agriculture, healthcare, and financial services.
KEY TAKEAWAYS
Tim Ayres, the Minister for Industry and Innovation, described the MOU as non-legally binding and the first arrangement under Australia's National AI Plan, launched in late 2025. The Department of Industry, Science and Resources said it sets "expectations aligned with Australia's national interest" rather than enforceable obligations.
What the agreement covers
Amodei said Anthropic will share data from its Economic Index with Canberra. The index tracks how Claude is used across sectors, measuring adoption, task complexity, and implications for workers. Mining, agriculture, healthcare, and financial services come first.
"Australia's investment in AI safety makes it a natural partner for responsible AI development," Amodei told reporters. "This MOU gives our collaboration a formal foundation."
Australia's AI Safety Institute will run joint evaluations with Anthropic on emerging model capabilities and risks. The United States, the United Kingdom, and Japan already have equivalent arrangements with the company.
Anthropic committed A$3 million in API credits to four institutions: the Australian National University, the Murdoch Children's Research Institute, the Garvan Institute of Medical Research, and Curtin University. The grants fund clinical genomics, paediatric research, rare disease diagnosis, and computing education.
Amodei also confirmed the company is exploring data centre and energy investment in Australia, aligned with Canberra's data centre expectations framework published on 23 March 2026.
Economic tracking is the centrepiece
Australia becomes the first government to receive Anthropic's Economic Index data under a formal agreement. The index provides granular detail on which tasks Claude automates and which it augments, broken down by occupation and industry.
Anthropic's own research shows Australians use Claude for a broader range of tasks than users in any other English-speaking country. Usage spans management, sales, business operations, and life sciences.
Tim Ayres said the MOU "sends a clear signal to Australians that we are open for business, where investment aligns with Australia's priorities and Australian values." Andrew Charlton told reporters the government was "committed to ensuring AI works for the Australian people, and not the other way around."
What the MOU does not do
Anthropic faces no obligation to submit models for pre-deployment review, disclose training data, or meet specific safety standards beyond what it voluntarily shares.
Canberra relies on existing laws and voluntary guidelines rather than new legislation under the National AI Plan. The department said it is "open to exploring similar arrangements with other leading AI and technology companies."
The United States, the United Kingdom, and Japan have each signed voluntary agreements with frontier AI companies rather than passing prescriptive legislation. Whether voluntary frameworks produce meaningful safety outcomes remains contested. Self-regulation in financial services and social media offers mixed evidence at best.
Research details
Australian National University researchers at the John Curtin School of Medical Research are using Claude to analyse genetic sequencing data for rare diseases. Curtin University and ANU's School of Computing will embed Claude into courses training the next generation of developers.
Garvan Institute teams will use Claude to accelerate genomic discovery in two projects: one with UNSW translating genetic variation into treatment insights, and another with the Centre for Population Genomics automating clinical interpretation of variants.
Anthropic provided the $3 million as API credits, not cash. Credits cost the company only the marginal cost of the inference actually consumed, not their face value.
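A rough back-of-the-envelope sketch of why credits and cash differ. The margin figure below is purely illustrative, not a number disclosed by Anthropic or the MOU:

```python
# Illustrative arithmetic: what an API-credit grant costs the issuer.
# The 60% gross margin on inference is a hypothetical assumption.
face_value = 3_000_000          # AUD, nominal value of the credits
assumed_gross_margin = 0.60     # hypothetical, for illustration only

# The issuer's cost is the compute actually consumed, not the face value.
cost_to_issuer = face_value * (1 - assumed_gross_margin)

print(f"Face value of grant:        A${face_value:,}")
print(f"Cost at assumed margin:     A${cost_to_issuer:,.0f}")
```

Under that assumed margin, the A$3 million headline figure would correspond to well under half that in actual compute cost, and the true gap depends on margins the company does not publish.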
Context
Anthropic accidentally published 500,000 lines of Claude Code source code in an npm package on 31 March. Fortune reported the leak contained roughly 1,900 files. Signing a safety partnership while managing an operational security failure sharpens scrutiny on whether voluntary commitments translate into practice.
Anthropic confirmed earlier this year it will open a Sydney office in 2026, its fourth in the Asia-Pacific after Tokyo, Bengaluru, and Seoul. Australia ranks among the top countries globally for per-capita Claude usage.
The full MOU text is published on the Department of Industry, Science and Resources website.
SOURCES & CITATIONS
- Memorandum of Understanding between the Australian Government and Anthropic on collaboration on AI opportunities, Department of Industry, Science and Resources, 1 April 2026
- Australian government and Anthropic sign MOU for AI safety and research, Anthropic, 1 April 2026
- New agreement on AI collaboration with Anthropic, Minister for Industry and Innovation Tim Ayres, 1 April 2026
- Expectations for data centres and AI infrastructure developers, Department of Industry, Science and Resources, 23 March 2026
- How Australia uses Claude, Anthropic Economic Index, March 2026