
Why AI Puts Therapy Confidentiality at Risk

  • Writer: Rachel Oblak
  • 7 days ago
  • 4 min read
Photo by ThisIsEngineering

Artificial intelligence is rapidly entering healthcare, and therapy is no exception. From automated progress notes to AI “assistants” that summarize sessions, the appeal is obvious, especially for solo practitioners stretched thin.


But beneath the convenience lies a set of confidentiality risks that are rarely explained clearly to patients or therapists. They strike at the core of what therapy depends on: privacy, trust, and safety.


As both a provider and a patient, I do not consent to the use of AI in therapy—and I refuse to use it in my clinical work. Here’s why.


Imagine This for a Moment


If you began therapy and your provider told you that, at unpredictable times, strangers might listen to or record your sessions and that those recordings could later be used in ways neither of you could foresee, you would be understandably alarmed.


Yet this is essentially what happens when AI is introduced into therapeutic spaces; the disclosure is often buried inside vague consent forms that mention “administrative support” or “note‑taking assistance.” Patients are rarely told what that actually means, who has access, or how long their data is retained.


Risk #1: Humans Are Always on the Backend


AI does not operate in isolation.


While AI systems are often described as automated or self‑learning, humans are deeply involved behind the scenes—training models, reviewing outputs, correcting errors, and improving performance. This process frequently involves exposure to real user data, including highly sensitive material.


When AI is used in therapy, that can mean your private session content becomes part of a training or quality‑control pipeline. In practical terms, therapy conducted with AI support is less like a private room and more like a space with a one‑way mirror: you don’t know who might be watching, when, or for what purpose.


This level of access is far beyond what patients typically understand when they consent to AI use.


Risk #2: Big Tech’s Goals Conflict With Therapy’s Goals


In therapy, confidentiality exists to protect patient wellbeing. Records are kept defensively, shared minimally, and governed by ethical obligations.


Technology companies operate under a very different incentive structure. Their primary responsibility is to generate profit—often through data collection, analytics, and targeted advertising. Even when companies promise strong privacy protections, history shows that those promises can and do break down under financial pressure.


The mental health space has already seen multiple high‑profile cases in which sensitive psychological data was shared with advertisers or embedded tracking tools without meaningful patient consent: FTC fines against BetterHelp, mass data transmissions by Cerebral, a large settlement involving Kaiser Permanente, and allegations against apps like Calm. In each case, personal mental-health information was monetized without users' consent. These were not accidental leaks; they were business decisions.


When AI systems hold therapy data, they—not clinicians—become the record keepers. That shift alone should give pause.


Risk #3: Confidentiality Is Not Legally Guaranteed


Even in a best‑case scenario—where a company acts ethically, limits access, and refuses to sell data—confidentiality still isn’t assured.


Courts are increasingly treating AI prompts, chat logs, and AI‑generated documents as standard discoverable records. That means they can be subpoenaed, preserved indefinitely, and produced in civil or criminal proceedings, even when users believed the data was private or deleted.


Recent rulings have shown that AI records are not reliably protected by confidentiality privileges that traditionally safeguard therapy or legal communications. In some cases, courts have ordered companies to retain all AI interaction logs, including deleted conversations, specifically so they can be accessed later if needed.


When AI is used to record or summarize therapy sessions, often by capturing the entire conversation in the background, it creates a permanent, external record that may be vulnerable to court‑ordered access.


This is not a distant possibility. It is already happening in other contexts.


Why the Risks of AI in Therapy Matter for Patients and Providers


Therapy depends on the ability to speak freely without fear of future exposure. When confidentiality becomes conditional, monitored, or legally uncertain, it changes what people are willing to share—and that directly impacts care.


Patients cannot give informed consent if they are not told:

  • who may access their data,
  • how long it is retained,
  • whether it can be sold, reviewed, or subpoenaed, and
  • whether legal protections actually apply.


Providers cannot control or guarantee any of the above, which leaves patients to take the good intentions of a large, nameless corporation on faith. This isn't just a concern; it's an ethical problem. Providers should not outsource core clinical functions to systems that undermine the very privacy standards they are obligated to uphold.


A Clear Line in the Sand


AI may have a place in healthcare. But therapy is not just another data stream to optimize.

Until confidentiality protections are explicit, enforceable, and equivalent to—or stronger than—existing therapeutic standards, AI does not belong in the therapy room. Not as a replacement for human care, and not as a silent observer in the background.
