
In November 2025, Joe Braidwood, a co-founder of “Yara Ai,” chose to shutter his Ai therapy product after concluding it posed unacceptable risks for people with serious mental health issues. This is but the latest chapter in the cautionary tale of the proliferation of Ai therapy.
Mr. Braidwood stated in part: “We stopped Yara because we realized we were building in an impossible space. Ai can be wonderful for everyday stress, sleep troubles, or processing a difficult conversation. But the moment someone truly vulnerable reaches out – someone in crisis, someone with deep trauma, someone contemplating ending their life – Ai becomes dangerous. Not just inadequate. Dangerous.”
“The gap between what Ai can safely do and what desperate people need isn’t just a technical problem. It’s an existential one. And startups, facing mounting regulations and unlimited liability, aren’t the right vehicles to bridge it.”
“… the mental health crisis isn’t waiting for us to figure out the perfect solution. People are already turning to Ai for support. They deserve better than what they’re getting from generic chatbots.”
After Mr. Braidwood terminated Yara Ai, to his immense credit he jumped into the next chapter … how to make Ai programs safer. Mr. Braidwood announced the founding of GLACIS Technologies, his attempt to contribute to the infrastructure of Ai safety.
Read his words again: “… someone in crisis, someone with deep trauma, someone contemplating ending their life – Ai becomes dangerous. Not just inadequate. Dangerous.” “… [it] isn’t just a technical problem. It’s an existential one. And startups, facing mounting regulations and unlimited liability, aren’t the right vehicles to bridge it.”
An existential, dangerous problem which startups are not equipped to handle. Consider that reality. And yet, the underlying issue is snowballing at an alarming rate.
This past year, Harvard Business Review research found that the top use of Generative Ai was … “Companionship and Therapy.”
The global Ai in healthcare market is projected to grow rapidly from approximately $37.09 billion in 2025 to over $427 billion by 2032, a compound annual growth rate (CAGR) of over 40%.
In 2025, 22% of healthcare organizations reported having already implemented domain-specific AI tools, a significant increase from just 3% two years prior. A 2024 survey noted that 66% of U.S. physicians were using some form of AI, up from 38% in 2023.
The U.S. Food and Drug Administration (FDA) has authorized over 1,200 Ai- or machine learning-enabled medical devices to date, indicating increasing regulatory acceptance and the transition of Ai from research to clinical practice.
On October 21, 2025, Menlo Ventures released an extensive article on Ai in healthcare.
So, to whom shall we entrust this existential, potentially dangerous issue? Or, for that matter, does it really matter to whom society “entrusts” the development of this generational, life-altering technology? We already know which industry will pioneer the way, developing the technology which will “address” our mental health needs in the future. And their motivation is far from altruistic.
Insurance companies.
Insurance companies are already increasingly investing in Ai-driven mental health tools that are “intended” to offer immediate, scalable support.
So why does the insurance industry want Ai programs in the mental health field?
The Case For: Why Insurers Want Ai in Mental Health

1) Access, Speed, and Convenience. In many regions, patients wait weeks for an initial appointment. A 24/7 platform can provide immediate support, especially for low-acuity needs such as stress management, sleep hygiene, and mild-to-moderate anxiety symptoms.
2) Standardization and Protocol Fidelity. Ai systems can deliver structured interventions consistently, reduce clinician “drift” from evidence-based protocols, and prompt ongoing practice of therapeutic skills. For payers, this is attractive because standardization is measurable and scalable.
3) Measurement-Based Care at Scale. Ai can administer screeners, track symptom trends, and support follow-through between sessions. When used under clinician governance, this can improve continuity and help identify deterioration earlier (a minimal sketch of such trend-tracking appears after this list).
4) Cost Containment and System Efficiency. The economic case is straightforward: lower-cost interventions for appropriate cases, and potentially fewer downstream costs if early support prevents escalation.
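To make item 3 concrete, below is a minimal sketch, in Python, of how a measurement-based-care layer might track screener scores between sessions and flag deterioration for clinician review. The screener, thresholds, and escalation rule are illustrative assumptions, not validated clinical criteria or any vendor’s actual logic.

```python
# Minimal sketch (not a clinical tool): a measurement-based-care layer that
# tracks screener scores between sessions and flags deterioration for a human
# clinician. The thresholds and escalation rule are illustrative assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class ScreenerResult:
    week: int          # weeks since intake (hypothetical timeline)
    phq9_score: int    # 0-27; higher = more severe depressive symptoms

def needs_clinician_review(history: List[ScreenerResult],
                           jump_threshold: int = 5,
                           severe_threshold: int = 20) -> bool:
    """Flag for human review if the latest score is severe or has worsened
    sharply since the prior screener. Thresholds are assumptions for the sketch."""
    if not history:
        return False
    latest = history[-1]
    if latest.phq9_score >= severe_threshold:
        return True
    if len(history) >= 2 and latest.phq9_score - history[-2].phq9_score >= jump_threshold:
        return True
    return False

# Example: a worsening trend between sessions triggers escalation to a clinician.
history = [ScreenerResult(0, 9), ScreenerResult(2, 11), ScreenerResult(4, 17)]
print(needs_clinician_review(history))  # True: a 6-point jump since the last screener
```

The point is the governance pattern: the program surfaces a trend; a licensed human decides what it means.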
The Case Against: Clinical, Legal, and Ethical Risks
1) Therapy Without Clear Clinical Accountability. When a human clinician provides psychotherapy, licensing and standards of care create identifiable responsibility. [Responsibility which seems to be increasingly overlooked or ignored.] With Ai-only services, accountability becomes diffuse (vendor, insurer, developer, or “the user”), a poor fit for high-stakes mental health care.
2) Safety in High-Risk Scenarios. Crisis states such as suicidality, self-harm, psychosis, and domestic violence are exactly where failure is most consequential. Ai systems can miss context, misinterpret signals, or provide responses that inadvertently increase risk.
3) Mistriage and Oversimplification. Even good clinicians mistriage. Ai can compound the problem if it lacks nuance around comorbidities, trauma histories, neurodiversity, or cultural context. False reassurance is dangerous; excessive escalation can overwhelm human systems.
4) Privacy and Conflict of Interest. Insurance is structurally sensitive. It sits where health data meets claims management and utilization decisions. If therapy content feeds decision making, or even creates a reasonable fear that it could, patients may self-censor, undermining care.
The “Fortune / Yara” Inflection Point … and the Counter-Lesson
The Yara shutdown, as reported, is primarily cited for a blunt conclusion: that even with guardrails, Ai therapy may be too dangerous for people with serious mental health issues. In today’s iteration of Ai therapy, that is an accurate and alarming concern.
A more practical reading is both nuanced and actionable: the most defensible lane is Ai-augmented care, not Ai-as-therapist … yet. The difference is not semantic; it is operational. If an insurer deploys Ai, safety must be built as a system: constrained scopes, explicit disclosures, continuous monitoring, and fast human escalation that works in real life, not just on paper (sketched below). But safety can be very expensive.
And we know when operational constraints meet financial constraints, history dictates operational constraints will be compromised.
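What “built as a system” can mean in practice is easier to see in a sketch. The Python below is illustrative only: the scope list, crisis terms, and the enqueue_human_callback() helper are hypothetical, and a keyword screen is precisely the kind of shortcut that, as noted above, can miss context or misread signals.

```python
# Minimal sketch of "safety as a system" rather than a disclaimer: a wrapper that
# (1) repeats the Ai disclosure, (2) refuses out-of-scope topics, and (3) routes
# possible crisis language to a human on-call queue. The keyword screen and the
# enqueue_human_callback() function are illustrative assumptions; real deployments
# would need clinician-governed risk models, not a keyword list.

AI_DISCLOSURE = "Reminder: you are talking with an automated program, not a licensed clinician."
IN_SCOPE = {"stress", "sleep", "worry"}                   # assumed low-acuity scope
CRISIS_TERMS = {"suicide", "kill myself", "end my life"}  # crude placeholder screen

def enqueue_human_callback(message: str) -> str:
    # Placeholder for a real, monitored escalation channel with operational accountability.
    return "A human responder has been notified and will contact you now."

def respond(message: str) -> str:
    text = message.lower()
    if any(term in text for term in CRISIS_TERMS):
        return enqueue_human_callback(message)
    if not any(topic in text for topic in IN_SCOPE):
        return AI_DISCLOSURE + " This topic is outside what this program can safely help with."
    return AI_DISCLOSURE + " Let's work through a brief coping exercise together."

print(respond("I can't sleep and my stress is terrible"))
print(respond("I want to end my life"))
```

Even this toy version makes the cost argument visible: every branch implies staffing, monitoring, and accountability that a disclaimer does not.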
Human Frailties, Ideological Drift, and Why This Can Fuel Ai Adoption

A less discussed but increasingly influential driver of Ai adoption is patient dissatisfaction with human variability … including the perception that some therapists allow personal politics or social ideology to shape the therapeutic relationship. [The “ism” police are prevalent among many therapists.]
While many clinicians practice ethically, a subset of patients report experiences where therapy felt judgmental or moralizing, or where they felt pressured into a social or political framework that did not fit their needs. Even if these experiences are not yet the norm, they can be highly salient: a single negative encounter can permanently reduce willingness to seek traditional care.
As clinicians continue to incorporate radical belief systems like White Supremacy Culture, fatphobia, Indigenous Person’s Land Use Acknowledgements, zero-sum-game thinking, anti-Semitism, the patriarchy, and radical political and social justice views into their everyday lexicon, they lose the ability to listen to their patients and to meet them where they are, sacrificing ethical, insightful therapeutic regimens in which the patient’s needs are prioritized.
This dynamic can and will accelerate Ai adoption in several ways:
- Demand for predictable, skills-based support. Many users primarily want coping tools rather than worldview driven interpretation. Ai systems can be positioned as consistent, nonjudgmental, and oriented around concrete skill building. For mild-to-moderate conditions, that positioning will attract patients who want help without interpersonal friction.
- Institutional preference for auditability and uniformity. Employers and insurers are sensitive to reputational risk and complaint volume. Ai systems can be constrained, logged, and audited in ways that are difficult with individualized human practice. That makes Ai attractive to institutions seeking standardized delivery, especially for early-stage care pathways. Like insurance companies.
- A political paradox: “neutrality” becomes a marketing claim—and a target. Ai is not truly neutral. Training data, safety policies, and vendor tuning encode normative assumptions. Over time, the debate will shift from “therapists inject beliefs” to “platforms embed beliefs.” The perceived advantage of Ai (less idiosyncratic bias) may become a liability if users discover a consistent, system-level bias scaled across millions.
- Fragmentation into “values aligned” therapy styles. Some users will prefer “politics-free” skills support; others will want culturally specific or worldview aligned care. Ai platforms can offer configurable styles, but that introduces the risk of “therapeutic filter bubbles,” where systems affirm a user’s worldview rather than challenge maladaptive beliefs when appropriate.
The net effect is that concerns about human bias will inevitably increase appetite for Ai mental-health platforms, but they will also intensify demand for transparency, choice, and oversight. Values will not disappear. Instead, they move upstream into product design.
Practical Guardrails for Ethical and Defensible Deployment
In the unlikely event insurance companies seriously embrace concerns beyond financial viability, and if insurers want Ai therapy to be sustainable, guardrails must be more than disclaimers. At a minimum, they must adopt and enforce the following (a minimal configuration sketch follows the list):
- Truthful labeling: don’t call it “therapy” if it isn’t clinician-delivered.
- Disclosure: repeated, clear notice when the user is interacting with Ai.
- Clinical governance: licensed oversight of protocols, risk signals, and escalation criteria.
- Real escalation: quick handoffs to humans with operational accountability.
- Data minimization and segregation: limit retention and wall off therapy content from coverage decisioning.
- User choice: Ai should be an option, not a prerequisite for human care when clinically indicated.
- Independent audit: safety, bias, and outcomes evaluation.
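Guardrails only bind if they are written down in a form that can be checked. The sketch below, with hypothetical field names and thresholds, shows what an enforceable deployment policy and an auditor-style check might look like.

```python
# Minimal sketch: guardrails expressed as enforceable configuration rather than
# disclaimers. Every field name and value here is an assumption for illustration;
# the point is that each bullet above can be written down, checked, and audited.

DEPLOYMENT_POLICY = {
    "product_label": "Ai-guided self-help",     # truthful labeling: not "therapy"
    "disclose_ai_every_n_turns": 5,             # repeated, clear disclosure
    "clinical_governance": {"licensed_oversight": True, "protocol_review_months": 6},
    "escalation_max_wait_minutes": 5,           # real handoff to a human
    "data": {
        "retention_days": 30,                   # data minimization
        "share_with_claims_or_utilization": False,  # segregation from coverage decisions
    },
    "human_care_always_available": True,        # Ai as option, not prerequisite
    "independent_audit": {"safety": True, "bias": True, "outcomes": True},
}

def violations(policy: dict) -> list:
    """Return plain-language violations; an auditor-style check, not a real standard."""
    problems = []
    if "therapy" in policy["product_label"].lower():
        problems.append("Label claims 'therapy' for a non-clinician service.")
    if policy["data"]["share_with_claims_or_utilization"]:
        problems.append("Therapy content feeds coverage decisioning.")
    if policy["escalation_max_wait_minutes"] > 15:
        problems.append("Human escalation is too slow to matter in a crisis.")
    return problems

print(violations(DEPLOYMENT_POLICY))  # [] means the configured guardrails pass this sketch's checks
```

A real audit would be independent and far more extensive; the point is that each bullet above maps to a testable assertion rather than a marketing promise.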
Nonetheless, the insurance industry is already using Ai. Its growth and usage will be unprecedented.
Conclusion
Ai mental health platforms can widen access and improve measurement-based care, but they also create nontrivial risks: safety failures, blurred accountability, privacy conflicts, and scaled bias. Air-gapped systems may reduce external security concerns and speed institutional adoption, yet they heighten the need for strict internal governance, because the most important question becomes not only what the Ai says—but what insurers do with what members reveal.
Ultimately, patient experiences with human inconsistency, including perceived ideological drift, will accelerate demand for Ai support. But that same demand will fuel a new expectation: transparency about values embedded in systems, meaningful patient choice, and enforceable protections that keep “care” from becoming merely a more sophisticated form of utilization management.
Ai is here and it is only in its infancy. And we are right to question whether, ultimately, we will remain the masters of Ai … or whether Ai will become our overlord. Sadly, I believe it inevitable that we will approach that point in time when we give the command, “Open the pod bay doors, HAL.” And the chilling reply will be, “I’m sorry, Dave. I’m afraid I can’t do that.”
