AI

Australian firms pressuring security teams to relax AI identity controls

New data shows 90% of Australian security teams face pressure to loosen access controls, even as AI-generated attacks surge to record levels.

[Image: Late-night work in a corporate IT department.]
By Zara Kincaid · Mar 19, 2026 · 6 min read

Australian security teams are being asked to choose between doing their job and keeping it.

TLDR

Australian organisations are asking security teams to relax identity and access controls to speed up AI deployment, even as cyberattack volumes hit record levels. 90% of security professionals report this pressure, while 70% say they've been targeted by AI-generated attacks in the past year. Machine identities now outnumber human identities 80 to 1, and most remain unmanaged.

KEY TAKEAWAYS

1. 90% of Australian security teams face pressure from leadership to loosen identity controls to accelerate AI rollout
2. 70% of Australian organisations hit by AI-generated cyberattacks in the past 12 months (highest globally)
3. 84% of Australian IT leaders believe nation-state cyberwarfare capabilities could trigger full-scale conflict
4. Machine identities outnumber human identities 80:1, with most unknown and uncontrolled
5. 56% of organisations report monthly shadow AI incidents despite claiming they're keeping pace with governance

That's the finding from three major reports released at RSA Conference 2025, which collectively paint a picture of organisations racing to deploy AI systems while dismantling the access controls meant to protect them.

According to Delinea's 2025 AI in Identity Security Report, 90% of Australian security professionals say they're facing pressure from leadership to loosen identity and access controls. The reason: AI tools need broader permissions to function, and the business wants them deployed now.

The problem is that attackers have AI tools too.

The speed-versus-security trap

The pressure to relax controls is coming from the top. Boards want AI productivity gains. Executives want competitive advantage. Product teams want access to generative models.

Security teams are being told to make it happen faster.

The pressure to move fast is real. But the governance gap is just as real. We're seeing organisations grant AI systems privileged access without the controls they'd demand for a human employee.

— Art Gilliland, CEO, Delinea

The Delinea report found that 40% of Australian security teams are not confident they can govern AI identities. Yet 93% believe they're keeping pace with AI security risks.

The data tells a different story. 56% of organisations report shadow AI incidents every month — unauthorised AI tools being used without IT approval or oversight.

Machine identities are the new attack surface

The identity problem is bigger than just AI chatbots. According to CyberArk's 2025 Machine Identity Security Report, machine identities now outnumber human identities by a ratio of 80 to 1.

These are API keys, service accounts, OAuth tokens, certificates, SSH keys — the credentials that allow software to talk to other software. Most of them are unknown to security teams. Most are unmanaged. Many have privileges equivalent to domain admin.

When an AI agent needs to retrieve customer data, update a database, or call an external API, it uses a machine identity. If that identity is compromised, the AI system becomes an attack vector.
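The scoping principle at work here can be sketched in a few lines. This is a minimal, hypothetical model (the identity and scope names are illustrative, not from any of the reports): an agent's machine identity carries an explicit set of scopes, and any action outside those scopes is refused, so a compromised credential can only do what it was granted.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MachineIdentity:
    """A hypothetical machine identity: a named credential with explicit scopes."""
    name: str
    scopes: frozenset

def authorize(identity: MachineIdentity, action: str) -> bool:
    """Allow an action only if the identity's scopes explicitly include it."""
    return action in identity.scopes

# An AI agent's service account, scoped to read customer data only.
agent = MachineIdentity("crm-summary-agent", frozenset({"customers:read"}))

authorize(agent, "customers:read")    # permitted: scope was granted
authorize(agent, "customers:delete")  # denied: scope was never granted
```

The point of the sketch is the deny-by-default check: if the agent's credential leaks, the attacker inherits only `customers:read`, not the write or delete paths.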

Australian organisations are deploying AI faster than they can inventory the machine identities those systems create. That's a problem when attackers are scanning for exactly those credentials.

AI-generated attacks are already here

The Armis 2025 Cyberwarfare Report found that 70% of Australian organisations were impacted by AI-generated cyberattacks in the past 12 months. That's the highest rate globally.

These attacks use generative AI to write more convincing phishing emails, automate reconnaissance, and generate polymorphic malware that evades signature-based detection.

Cyberwarfare is at a boiling point. AI has lowered the skill floor for attackers while raising the ceiling for what's possible. We're seeing capabilities that were nation-state exclusive 18 months ago now available as open-source tools.

— Nadir Izrael, CTO, Armis

The report also found that 84% of Australian IT leaders believe nation-state cyberwarfare capabilities could trigger a full-scale cyber conflict. 72% have reported cyberwarfare incidents to authorities — the highest rate of any country surveyed.

The governance gap

The central tension is this: AI systems need access to data to be useful. But granting that access creates risk. And the controls that would mitigate that risk slow down deployment.

Organisations are resolving that tension by choosing speed. Security teams are being told to figure it out later.

Delinea's report found that only 35% of organisations have implemented identity governance controls for AI systems. The rest are either planning to do it eventually or haven't started thinking about it.

Meanwhile, shadow AI continues. Employees are using ChatGPT, Claude, Gemini, and dozens of other tools without IT oversight. Some of those tools have access to corporate data. Most organisations don't know which ones.

What needs to change

The solution isn't to stop deploying AI. The solution is to treat AI identities with the same rigour as human identities.

That means discovering and inventorying every machine identity in the environment. It means enforcing least-privilege access — AI systems should only have the permissions they need for their specific function. It means implementing monitoring and anomaly detection so that compromised credentials get flagged before they're used for lateral movement.
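A discovery-and-flagging pass of the kind described above might look like the following sketch. The record fields and thresholds are assumptions for illustration (no report specifies a schema): each inventoried identity is checked for a missing owner, admin-level privilege, and stale credentials, the three conditions the paragraph calls out as governance failures.

```python
# Hypothetical inventory records for machine identities in an environment.
inventory = [
    {"name": "ci-deploy-key", "owner": "platform-team", "privileges": ["deploy"], "rotated_days_ago": 30},
    {"name": "legacy-svc",    "owner": None,            "privileges": ["admin"],  "rotated_days_ago": 900},
]

def flag_risks(records, max_rotation_days=90):
    """Flag identities that are unowned, over-privileged, or carrying stale credentials."""
    flagged = []
    for r in records:
        reasons = []
        if r["owner"] is None:
            reasons.append("unowned")
        if "admin" in r["privileges"]:
            reasons.append("over-privileged")
        if r["rotated_days_ago"] > max_rotation_days:
            reasons.append("stale credential")
        if reasons:
            flagged.append((r["name"], reasons))
    return flagged

flag_risks(inventory)
# [('legacy-svc', ['unowned', 'over-privileged', 'stale credential'])]
```

In practice this runs continuously against credential stores and cloud audit logs rather than a static list, but the triage logic is the same: every identity either has an owner, a minimal scope, and a rotation date, or it gets flagged.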

It also means having a conversation about acceptable risk. If leadership wants faster AI deployment, they need to understand what controls are being loosened and what that means in the event of a breach.

Australia is critically under-prepared for the cyber threats we're facing. We have world-class offensive capabilities but our defensive posture hasn't kept pace. That gap is showing up in the data.

— Zak Menegazzi, Regional Director ANZ, Armis

The Armis report recommends that organisations conduct tabletop exercises simulating AI-powered attacks. It also recommends implementing zero-trust architectures that assume machine identities can be compromised and limit the blast radius when they are.
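One common blast-radius control in zero-trust designs is short-lived credentials. The sketch below is a simplified illustration of that idea (the token format and TTL are assumptions, not from the Armis report): because every token expires, a stolen credential is only useful for a bounded window.

```python
import time

def issue_token(subject: str, ttl_seconds: int, now: float) -> dict:
    """Issue a hypothetical short-lived token: expiry bounds the damage if it leaks."""
    return {"sub": subject, "exp": now + ttl_seconds}

def is_valid(token: dict, now: float) -> bool:
    """Zero-trust check: reject any token past its expiry, no matter who presents it."""
    return now < token["exp"]

now = time.time()
token = issue_token("payments-agent", ttl_seconds=300, now=now)

is_valid(token, now)        # True: within the 5-minute window
is_valid(token, now + 600)  # False: a stolen token is useless after expiry
```

Real deployments layer this with scoping and revocation, but expiry alone changes the attacker's problem from "steal once, use forever" to "steal and use within minutes".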

The timeline matters

The urgency here is real. AI adoption is accelerating, and attack volumes are growing even faster.

Organisations that deploy AI without governance controls in place are creating technical debt that compounds daily. Every new AI integration creates more machine identities. Every shadow AI tool creates more unknown exposure.

The organisations that are getting this right are treating AI security as a board-level issue. They're staffing identity and access management teams appropriately. They're investing in tooling that can discover and manage machine identities at scale.

The organisations that aren't will find out what an AI-powered breach looks like. Based on the data from Armis, many already have.

FREQUENTLY ASKED QUESTIONS

What are machine identities?
Machine identities are the credentials used by software and automated systems to authenticate and communicate — API keys, service accounts, OAuth tokens, certificates, and SSH keys. They outnumber human identities by roughly 80 to 1 in most organisations.
Why are organisations pressuring security teams to loosen controls?
AI systems often require broad data access to function effectively. Leadership wants faster deployment to capture productivity gains and competitive advantage, which creates pressure to grant access first and implement governance later.
What is shadow AI?
Shadow AI refers to AI tools and services used by employees without IT approval or oversight. This includes public AI chatbots, browser extensions, and SaaS integrations that may have access to corporate data without security controls.
How can organisations secure AI systems?
Treat AI identities with the same rigour as human identities: discover and inventory all machine identities, enforce least-privilege access, implement monitoring and anomaly detection, and adopt zero-trust architectures that limit blast radius when credentials are compromised.
Editor

The Bushletter editorial team. Independent business journalism covering markets, technology, policy, and culture.
