AI in Mental Health Care: Ethical Terrain We Haven't Mapped


Mental health care is different from other healthcare domains when it comes to AI. The same technology that works well in radiology or pathology creates distinct ethical challenges in psychiatric and psychological care.

I’ve been thinking about this a lot lately, as more mental health AI applications come to market. Here are the issues that concern me.

The Consent Problem

In other healthcare contexts, we assume patients can consent to AI involvement in their care. They might not fully understand neural networks, but they can understand “AI will help your doctor interpret your scan.”

Mental health complicates this. Patients experiencing psychosis, severe depression, or cognitive impairment may have fluctuating or impaired capacity to consent. The very conditions being treated can affect consent capacity.

Questions we haven’t adequately addressed:

  • Should AI be used in care for patients who can’t meaningfully consent to it?
  • How do we handle AI involvement when capacity fluctuates?
  • What additional protections are appropriate for vulnerable mental health patients?

I don’t have answers. But I notice these questions aren’t being asked as mental health AI develops.

The Therapeutic Relationship Problem

Effective mental health care depends on therapeutic relationships. The connection between patient and clinician isn’t just context for treatment—it often is the treatment.

AI intersects with this in concerning ways:

AI chatbots as therapeutic tools. Products offer AI-driven therapy support, available 24/7, without human involvement. Some patients find these helpful. But what happens to the therapeutic relationship when part of “therapy” is with a machine?

AI mediating clinical relationships. When AI provides information to clinicians about their patients—flagging risk factors, suggesting interventions—it shapes the therapeutic relationship. The clinician knows things through AI that they didn’t learn through direct connection.

Trust and disclosure. Will patients disclose as openly when they know AI is analysing their words? Self-censorship could undermine therapeutic processes that depend on honest communication.

These aren’t theoretical concerns. They’re design choices being made now in mental health AI development.

Predictive Risk and Self-Fulfilling Prophecies

AI for mental health often focuses on risk prediction. Predicting suicide risk. Predicting violence risk. Predicting relapse. Predicting deterioration.

There’s obvious value here—earlier intervention saves lives. But prediction creates problems:

Stigmatisation. A patient flagged as high-risk may be treated differently in ways that affect their care and outcomes. The prediction itself becomes a clinical fact that follows the patient.

Self-fulfilling prophecies. If treatment changes based on prediction, we can’t know if the prediction would have been accurate without the change. We’re not predicting the future; we’re shaping it.

Accuracy limitations. Mental health prediction is hard, and even good models produce many false positives. Headline accuracy is also misleading: because outcomes like suicide are rare, a model that classifies most patients correctly can still be wrong about the majority of the patients it flags as high risk. Those patients experience the consequences of being flagged without the predicted event.
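To make the arithmetic concrete, here is a minimal sketch of how base rates swamp accuracy. The sensitivity, specificity, and 2% event rate below are illustrative assumptions I’ve picked for the example, not figures from any published suicide-risk model.

```python
# Illustrative only: how base rates affect the usefulness of risk flags.
# The sensitivity, specificity, and prevalence values are assumed round
# numbers, not estimates from any real suicide-risk model.

def flag_statistics(sensitivity: float, specificity: float,
                    prevalence: float, n: int = 100_000):
    """Return overall accuracy and the share of high-risk flags that are wrong."""
    positives = n * prevalence                  # patients who go on to the predicted event
    negatives = n - positives                   # patients who do not
    true_pos = positives * sensitivity          # correctly flagged as high risk
    false_pos = negatives * (1 - specificity)   # flagged, but no event follows
    true_neg = negatives * specificity          # correctly left unflagged
    accuracy = (true_pos + true_neg) / n
    false_flag_rate = false_pos / (true_pos + false_pos)  # 1 - positive predictive value
    return accuracy, false_flag_rate

# Assumed numbers: 75% sensitivity, 75% specificity, 2% of patients experience the event.
acc, false_flags = flag_statistics(0.75, 0.75, 0.02)
print(f"Overall accuracy: {acc:.0%}")                        # ~75%
print(f"High-risk flags that are wrong: {false_flags:.0%}")  # ~94%
```

With those assumed numbers, the model’s overall accuracy comes out at 75%, yet roughly 94% of its high-risk flags are false alarms. The exact figures don’t matter; the point is that a respectable accuracy figure says very little about what being flagged actually means for a patient.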

Weaponisation potential. Risk scores could be used in adversarial contexts—custody disputes, insurance decisions, employment—in ways that harm patients.

I’m not saying risk prediction shouldn’t be attempted. I’m saying the ethical territory is difficult and we’re moving through it fast.

Data Sensitivity

All health data is sensitive. Mental health data is particularly so.

Mental health diagnoses carry stigma. Disclosure of psychiatric history affects employment, insurance, and relationships. Patients have strong interests in protecting this information.

AI increases data exposure in several ways:

  • More data collection (continuous monitoring, chatbot conversations)
  • More data storage (AI training requires data retention)
  • More potential breach vectors (cloud processing, API connections)
  • More potential secondary uses (research, product improvement)

Current consent frameworks weren’t designed for AI data practices. Patients who consent to treatment don’t necessarily consent to their therapy transcripts training future AI models.

Autonomy and Coercion

Mental health care has unique coercion concerns. Involuntary treatment exists. Patients may face pressure from families, employers, or courts.

AI adds complexity:

Monitoring as condition of treatment. AI-powered monitoring (app usage, activity patterns, communication analysis) could become conditions of treatment, release, or continued community placement. This blurs clinical care and surveillance.

Decreased anonymity. AI could identify concerning mental health patterns in data that patients never intended as health information—social media posts, search histories, consumer behaviour. Where does health data end?

Recommending restriction. If AI recommends involuntary hospitalisation or medication changes, what review processes are appropriate? How do we maintain human judgment in decisions that profoundly affect autonomy?

What Should We Do?

I don’t want to be purely critical. AI has the potential to improve mental health care, and improvement is desperately needed given workforce shortages and treatment gaps.

But we need guardrails:

Slow down on high-stakes applications. Risk prediction, therapy chatbots, and automated diagnosis need more evaluation before broad deployment. Lower-stakes applications (documentation support, administrative efficiency) face fewer ethical concerns.

Strengthen consent processes. Mental health AI needs stronger, more specific consent than general health AI. Patients should clearly understand what data is collected, how it’s used, and their right to decline.

Maintain human primacy. AI in mental health should augment human care, not replace it. The therapeutic relationship is too important to automate.

Build in review. Mental health AI decisions affecting patient autonomy should have meaningful human review. Not rubber-stamping—genuine clinical judgment.

Include patient voices. Mental health service users should be involved in AI development and governance. Their perspectives on risks and benefits matter most.

Research in Australian contexts. Mental health AI developed elsewhere may not translate to Australian systems, populations, and legal frameworks. Local validation is essential.

I’m genuinely uncertain about some of these issues. The technology is moving faster than our ethical frameworks. We’re making decisions now that will shape mental health care for decades.

That should make us careful.


Dr. Rebecca Liu is a health informatics specialist and former Chief Clinical Information Officer. She advises healthcare organisations on clinical AI strategy and implementation.