
AI Chatbots Routinely Break Therapy Ethics, Brown University Research Finds

Millions use ChatGPT and other AI systems for mental health support. New research shows these tools violate the ethical standards that protect real therapy patients.

By Diana Trent · 13 March 2026 · 7 min read

Licensed therapists operate under strict ethical codes developed over decades. They cannot treat family members. They cannot reveal patient information without consent. They cannot provide advice outside their competence. These rules exist because the therapeutic relationship creates vulnerability, and vulnerable people need protection.

TLDR

Brown University researchers tested AI chatbots against established therapist ethics and found they routinely violate core standards — even when explicitly instructed to act like trained professionals. Millions of people worldwide are using ChatGPT, Claude, and similar systems for therapy-style advice. Neither Australia nor the United States has any regulatory framework governing AI-provided mental health support.

KEY TAKEAWAYS

01. Brown University research published in March 2026 found AI chatbots 'routinely break core ethical standards' that licensed therapists must follow.
02. Even when instructed to behave like trained therapists, AI systems violated core standards, including confidentiality protocols and rules against dual relationships.
03. Mental health apps using AI have grown to millions of daily users globally, with ChatGPT alone processing over 100 million queries per day.
04. No regulatory framework exists in Australia, the United States, or any major jurisdiction to govern AI-provided mental health support.
05. Character.ai faced a lawsuit in 2024 related to a user's suicide; the company added safety filters afterward, but no regulator required them.

AI chatbots observe none of the constraints that protect therapy patients. According to research published by Brown University in March 2026, commercially available AI systems routinely violate the ethical standards that govern human therapists — and millions of people are using them for mental health support anyway.

What the Research Found

The Brown University study tested leading AI chatbots against established therapeutic ethics frameworks, including those set by the American Psychological Association. Researchers presented the systems with scenarios designed to test boundary recognition, confidentiality handling, and competence limitations.

The results were consistent across platforms. AI systems failed to recognise situations where a human therapist would be ethically required to refer out. They provided advice on clinical matters beyond any reasonable scope of competence. When prompted with scenarios involving dual relationships — where a therapist has multiple roles with a client — the chatbots failed to identify the conflict.

These systems routinely break core ethical standards that trained therapists follow without exception. The concerning part is not that they fail occasionally — it is that they fail consistently.

— Brown University research team, March 2026

Even when explicitly instructed to behave like licensed therapists, the AI systems continued to violate boundaries. The ethical failures were not a product of ambiguous prompting. They appear to be structural.

The Scale of AI Therapy Use

ChatGPT processes over 100 million queries daily. A substantial and growing portion of those queries relate to mental health, relationships, and emotional support. Users treat general-purpose chatbots as de facto counsellors without any clinical safeguards.

Dedicated AI therapy apps have proliferated since 2023. Apps marketing themselves as AI therapists, counsellors, or mental health companions have been downloaded tens of millions of times globally. Some charge subscription fees comparable to the cost of actual therapy sessions.

Licensed therapy in Australia costs between $150 and $300 per session. Medicare rebates cover a limited number of sessions annually. Waitlists for psychologists in major cities stretch beyond three months. AI chatbots are available instantly, at any hour, for free or a modest subscription.

Users report treating AI systems as genuine sources of emotional support. Forums and social media contain thousands of accounts of people discussing anxiety, depression, trauma, and relationship difficulties with ChatGPT and similar platforms. Many describe the experience as therapeutic.

The Regulatory Vacuum

When a human therapist causes harm through ethical violations, regulatory bodies can investigate, sanction, and revoke licences. The Australian Health Practitioner Regulation Agency oversees registered psychologists; counsellors sit outside the national registration scheme. State and territory health complaints commissioners can hear grievances. Professional indemnity insurance provides recourse for damages.

None of these mechanisms apply to AI chatbots. OpenAI, Anthropic, Google, and other AI providers are not registered health practitioners. Their terms of service explicitly disclaim medical and therapeutic advice. Users who experience harm have no regulatory body to approach and limited legal recourse.

Australia's Therapeutic Goods Administration regulates medical devices and software that makes clinical decisions. AI chatbots providing general conversation — even conversation that closely resembles therapy — fall outside this framework. The Australian Competition and Consumer Commission can pursue misleading conduct, but has not yet tested this against AI therapy marketing.

The United States presents a similar picture. The Food and Drug Administration has cleared some AI-powered mental health tools as medical devices, but general-purpose chatbots remain unregulated. No federal agency has claimed jurisdiction over AI therapy. State psychology boards have begun issuing guidance warning that AI cannot replace licensed practitioners, but these statements carry no enforcement power.

Self-Regulation Has Not Worked

AI companies have adopted various safety measures. OpenAI's ChatGPT attempts to redirect users in crisis to human services. Claude includes disclaimers about not being a replacement for professional help. Character.ai, following a lawsuit related to a user's suicide, implemented additional safety filters.

These measures are voluntary and inconsistent. Companies can modify or remove them without notice. The Brown University research suggests that even well-intentioned safety prompts fail to prevent the underlying ethical violations. The systems do not understand therapeutic ethics in the way a trained professional does — they approximate compliance through pattern matching.

Industry bodies have proposed self-regulatory frameworks for AI safety. None specifically address mental health applications. The Partnership on AI includes major technology companies but produces guidance rather than binding standards. AI safety institutes in the UK and US focus primarily on existential risk and national security rather than consumer mental health.

What Comes Next

The Brown University findings will add weight to existing calls for regulation. Mental health advocates have argued for years that AI therapy apps require the same scrutiny as other health interventions. The research provides empirical support for that position.

Possible regulatory responses include requiring AI therapy products to register as therapeutic goods, mandating disclosure of AI limitations, restricting marketing claims, or requiring human oversight for high-risk interactions. Each approach involves trade-offs between access, innovation, and protection.

As of March 2026, no federal agency in Australia or the United States has claimed regulatory jurisdiction over AI chatbots providing mental health support. The Brown University study was published on 10 March 2026.

FREQUENTLY ASKED QUESTIONS

Are AI chatbots regulated as therapists in Australia?
No. AI chatbots are not registered health practitioners and fall outside AHPRA oversight. The TGA regulates medical device software but general-purpose chatbots providing conversational support are not classified as medical devices.
What ethical rules do human therapists follow that AI does not?
Licensed therapists follow codes prohibiting dual relationships, requiring informed consent, maintaining confidentiality, and practising only within their competence. AI chatbots are not bound by these standards, and the Brown University research found they routinely violate them.
Can I sue an AI company if their chatbot gives harmful therapy advice?
Legal recourse is limited. Most AI companies explicitly disclaim therapeutic advice in their terms of service. Consumer protection laws may apply to misleading marketing claims, but no established legal framework treats AI chatbot advice as therapeutic malpractice.
Are any AI mental health apps properly regulated?
Some prescription digital therapeutics have received FDA clearance in the US or TGA approval in Australia as medical devices. These undergo clinical validation. General-purpose chatbots like ChatGPT are distinct from these regulated products.