Australian security teams are being asked to choose between doing their job and keeping it.
TLDR
Australian organisations are asking security teams to relax identity and access controls to speed up AI deployment, even as cyberattack volumes hit record levels. 90% of security professionals report this pressure, while 70% say they've been targeted by AI-generated attacks in the past year. Machine identities now outnumber human identities 80 to 1, and most remain unmanaged.
That's the finding from three major reports released this month at the RSA Conference 2025, which collectively paint a picture of organisations racing to deploy AI systems while dismantling the access controls meant to protect them.
According to Delinea's 2025 AI in Identity Security Report, 90% of Australian security professionals say they're facing pressure from leadership to loosen identity and access controls. The reason: AI tools need broader permissions to function, and the business wants them deployed now.
The problem is that attackers have AI tools too.
The speed-versus-security trap
The pressure to relax controls is coming from the top. Boards want AI productivity gains. Executives want competitive advantage. Product teams want access to generative models.
Security teams are being told to make it happen faster.
The pressure to move fast is real. But the governance gap is just as real. We're seeing organisations grant AI systems privileged access without the controls they'd demand for a human employee.
— Art Gilliland, CEO, Delinea
The Delinea report found that 40% of Australian security teams are not confident they can govern AI identities. Yet 93% believe they're keeping pace with AI security risks.
The data tells a different story: 56% of organisations report shadow AI incidents every month, with unauthorised AI tools being used without IT approval or oversight.
Machine identities are the new attack surface
The identity problem is bigger than just AI chatbots. According to CyberArk's 2025 Machine Identity Security Report, machine identities now outnumber human identities by a ratio of 80 to 1.
These are API keys, service accounts, OAuth tokens, certificates, SSH keys — the credentials that allow software to talk to other software. Most are unknown to security teams, most are unmanaged, and many carry privileges equivalent to domain admin.
When an AI agent needs to retrieve customer data, update a database, or call an external API, it uses a machine identity. If that identity is compromised, the AI system becomes an attack vector.
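The alternative to handing an AI agent a long-lived, broadly privileged key is a short-lived credential scoped to exactly what it needs. The sketch below illustrates the idea; it is a minimal illustration, not any vendor's API, and all names (`MachineCredential`, `issue_credential`, the scope strings) are hypothetical.

```python
import secrets
import time
from dataclasses import dataclass

# Hypothetical sketch: mint a short-lived, narrowly scoped credential for an
# AI agent instead of a long-lived admin key. A short TTL limits the blast
# radius if the token leaks.

@dataclass
class MachineCredential:
    token: str
    scopes: frozenset   # e.g. {"customers:read"} -- never a wildcard
    expires_at: float   # epoch seconds

def issue_credential(scopes, ttl_seconds=900):
    """Mint a credential limited to the named scopes for ttl_seconds."""
    return MachineCredential(
        token=secrets.token_urlsafe(32),
        scopes=frozenset(scopes),
        expires_at=time.time() + ttl_seconds,
    )

def authorize(cred, required_scope):
    """Deny expired tokens and any scope the credential was not granted."""
    if time.time() >= cred.expires_at:
        return False
    return required_scope in cred.scopes

cred = issue_credential({"customers:read"})
print(authorize(cred, "customers:read"))   # -> True: scope was granted
print(authorize(cred, "customers:write"))  # -> False: scope was never granted
```

If that credential is stolen, the attacker gets read-only access to one system for fifteen minutes, not a domain-admin foothold.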
Australian organisations are deploying AI faster than they can inventory the machine identities those systems create. That's a problem when attackers are scanning for exactly those credentials.
AI-generated attacks are already here
The Armis 2025 Cyberwarfare Report found that 70% of Australian organisations were impacted by AI-generated cyberattacks in the past 12 months. That's the highest rate globally.
These attacks use generative AI to write more convincing phishing emails, automate reconnaissance, and generate polymorphic malware that evades signature-based detection.
Cyberwarfare is at a boiling point. AI has lowered the skill floor for attackers while raising the ceiling for what's possible. We're seeing capabilities that were nation-state exclusive 18 months ago now available as open-source tools.
— Nadir Izrael, CTO, Armis
The report also found that 84% of Australian IT leaders believe nation-state cyberwarfare capabilities could trigger a full-scale cyber conflict. 72% have reported cyberwarfare incidents to authorities — the highest rate of any country surveyed.
The governance gap
The central tension is this: AI systems need access to data to be useful. But granting that access creates risk. And the controls that would mitigate that risk slow down deployment.
Organisations are resolving that tension by choosing speed. Security teams are being told to figure it out later.
Delinea's report found that only 35% of organisations have implemented identity governance controls for AI systems. The rest are either planning to do it eventually or haven't started thinking about it.
Meanwhile, shadow AI use continues. Employees are using ChatGPT, Claude, Gemini, and dozens of other tools without IT oversight. Some of those tools have access to corporate data. Most organisations don't know which ones.
What needs to change
The solution isn't to stop deploying AI. The solution is to treat AI identities with the same rigour as human identities.
That means discovering and inventorying every machine identity in the environment. It means enforcing least-privilege access — AI systems should only have the permissions they need for their specific function. It means implementing monitoring and anomaly detection so that compromised credentials get flagged before they're used for lateral movement.
It also means having a conversation about acceptable risk. If leadership wants faster AI deployment, they need to understand what controls are being loosened and what that means in the event of a breach.
Australia is critically under-prepared for the cyber threats we're facing. We have world-class offensive capabilities but our defensive posture hasn't kept pace. That gap is showing up in the data.
— Zak Menegazzi, Regional Director ANZ, Armis
The Armis report recommends that organisations conduct tabletop exercises simulating AI-powered attacks. It also recommends implementing zero-trust architectures that assume machine identities can be compromised and limit the blast radius when they are.
The timeline matters
The urgency here is real. AI adoption is accelerating. Attack volumes are growing faster still.
Organisations that deploy AI without governance controls in place are creating technical debt that compounds daily. Every new AI integration creates more machine identities. Every shadow AI tool creates more unknown exposure.
The organisations that are getting this right are treating AI security as a board-level issue. They're staffing identity and access management teams appropriately. They're investing in tooling that can discover and manage machine identities at scale.
The organisations that aren't will find out what an AI-powered breach looks like. Based on the data from Armis, many already have.