AI-Generated “Therapists”: Promise, Peril, and What’s Next?

In November 2025, Joe Braidwood, a co-founder of “Yara AI,” chose to shutter his AI therapy product after concluding it posed unacceptable risks for people with serious mental health issues. This is but the latest chapter in the cautionary tale surrounding the proliferation of AI therapy.

Mr. Braidwood stated in part: “We stopped Yara because we realized we were building in an impossible space. AI can be wonderful for everyday stress, sleep troubles, or processing a difficult conversation. But the moment someone truly vulnerable reaches out – someone in crisis, someone with deep trauma, someone contemplating ending their life – AI becomes dangerous. Not just inadequate. Dangerous.”

“The gap between what Ai can safely do and what desperate people need isn’t just a technical problem. It’s an existential one. And startups, facing mounting regulations and unlimited liability, aren’t the right vehicles to bridge it.”

“… the mental health crisis isn’t waiting for us to figure out the perfect solution. People are already turning to AI for support. They deserve better than what they’re getting from generic chatbots.”

After Mr. Braidwood terminated Yara AI, to his immense credit he jumped into the next chapter … how to make AI programs safer. Mr. Braidwood announced the opening of GLACIS Technologies, an attempt to contribute to the infrastructure of AI safety:

https://www.linkedin.com/pulse/from-heartbreak-infrastructure-why-were-building-glacis-joe-braidwood-uzulc/

Read his words again: “… someone in crisis, someone with deep trauma, someone contemplating ending their life – AI becomes dangerous. Not just inadequate. Dangerous.” “… [it] isn’t just a technical problem. It’s an existential one. And startups, facing mounting regulations and unlimited liability, aren’t the right vehicles to bridge it.”

An existential, dangerous problem that startups are not equipped to handle. Consider that reality. And yet, the underlying issue is snowballing at an alarming rate.

This past year, Harvard Business Review research found that the top use of generative AI was … “Companionship and Therapy.”

The global AI in healthcare market is projected to grow rapidly from approximately $37.09 billion in 2025 to over $427 billion by 2032, a compound annual growth rate (CAGR) of over 40%.
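
For readers who want to check the arithmetic, the implied growth rate follows from the standard compound-growth formula applied to the figures quoted above over the seven-year span (the projection itself belongs to the market researchers, not to this article):

    \[
    \text{CAGR} = \left(\frac{427}{37.09}\right)^{1/7} - 1 \approx 0.42
    \]

That is roughly 42% per year between 2025 and 2032, consistent with the “over 40%” figure.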

In 2025, 22% of healthcare organizations reported having already implemented domain-specific AI tools, a significant increase from just 3% two years prior. A 2024 survey noted that 66% of U.S. physicians were using some form of AI, up from 38% in 2023.

The U.S. Food and Drug Administration (FDA) has authorized over 1,200 AI- or machine-learning-enabled medical devices to date, indicating increasing regulatory acceptance and the transition of AI from research to clinical practice.

On October 21, 2025, Menlo Ventures released an extensive article on AI in healthcare.

So, to whom shall we entrust this existential, potentially dangerous issue? Or, for that matter, does it really matter to whom society “entrusts” the development of this generational, life-altering technology? We already know which industry will pioneer the way, developing the technology that will “address” our mental health needs in the future. And their motivation is far from altruistic.

Insurance companies.

Insurance companies are increasingly investing in AI-driven mental health tools that are “intended” to offer immediate, scalable support.

So why does the insurance industry want AI programs in the mental health field?

The Case For: Why Insurers Want AI in Mental Health

1) Access, Speed, and Convenience. In many regions, patients wait weeks for an initial appointment. A 24/7 platform can provide immediate support, especially for low-acuity needs such as stress management, sleep hygiene, and mild-to-moderate anxiety symptoms.

2) Standardization and Protocol Fidelity. AI systems can deliver structured interventions consistently, reduce clinician “drift” from evidence-based protocols, and prompt ongoing practice of therapeutic skills. For payers, this is attractive because standardization is measurable and scalable.

3) Measurement-Based Care at Scale. AI can administer screeners, track symptom trends, and support follow-through between sessions. When used under clinician governance, this can improve continuity and help identify deterioration earlier. (A minimal sketch of what this might look like follows this list.)

4) Cost Containment and System Efficiency. The economic case is straightforward: lower-cost interventions for appropriate cases, and potentially fewer downstream costs if early support prevents escalation.
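
As a concrete illustration of point 3 above, here is a minimal sketch, in Python, of what clinician-governed symptom tracking might look like. The screener used (the PHQ-9, a widely used nine-item depression questionnaire scored 0–27), the thresholds, and every class and function name here are illustrative assumptions for this article, not any vendor’s actual product.

    from dataclasses import dataclass, field
    from datetime import date

    # Illustrative sketch only: thresholds and escalation rules would be set
    # and owned by licensed clinicians, not by the software vendor.

    PHQ9_SEVERITY = [(20, "severe"), (15, "moderately severe"),
                     (10, "moderate"), (5, "mild"), (0, "minimal")]

    @dataclass
    class ScreenerResult:
        administered_on: date
        item_scores: list[int]          # nine items, each scored 0-3

        @property
        def total(self) -> int:
            return sum(self.item_scores)

        @property
        def severity(self) -> str:
            return next(label for cutoff, label in PHQ9_SEVERITY if self.total >= cutoff)

    @dataclass
    class SymptomTracker:
        history: list[ScreenerResult] = field(default_factory=list)

        def record(self, result: ScreenerResult) -> dict:
            """Store a new screener result and return flags for clinician review."""
            self.history.append(result)
            flags = {"escalate_to_human": False, "reasons": []}

            # Item 9 asks about thoughts of self-harm; any non-zero answer
            # goes straight to a human, never to an automated reply.
            if result.item_scores[8] > 0:
                flags["escalate_to_human"] = True
                flags["reasons"].append("self-harm item endorsed")

            # Flag meaningful deterioration (a 5-point rise is used here as an
            # assumed rule of thumb for clinically significant change).
            if len(self.history) >= 2:
                change = result.total - self.history[-2].total
                if change >= 5:
                    flags["escalate_to_human"] = True
                    flags["reasons"].append(f"score worsened by {change} points")

            return flags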

The Case Against: Clinical, Legal, and Ethical Risks

1) Therapy Without Clear Clinical Accountability. When a human clinician provides psychotherapy, licensing and standards of care create identifiable responsibility. [Responsibility that seems to be increasingly overlooked or ignored.] With AI-only services, accountability becomes diffuse across vendor, insurer, developer, or “the user,” which is a poor fit for high-stakes mental health care.

2) Safety in High-Risk Scenarios. Crisis states such as suicidality, self-harm, psychosis, and domestic violence are exactly where failure is most consequential. AI systems can miss context, misinterpret signals, or provide responses that inadvertently increase risk.

3) Mistriage and Oversimplification. Even good clinicians mistriage. AI can compound the problem if it lacks nuance around comorbidities, trauma histories, neurodiversity, or cultural context. False reassurance is dangerous; excessive escalation can overwhelm human systems.

4) Privacy and Conflict of Interest. Insurance is structurally sensitive. It sits where health data meets claims management and utilization decisions. If therapy content feeds decision making, or even creates a reasonable fear that it could, patients may self-censor, undermining care.

The “Fortune / Yara” Inflection Point … and the Counter-Lesson

The Yara shutdown, as reported, is primarily cited for a blunt conclusion: that even with guardrails, AI therapy may be too dangerous for people with serious mental health issues. In today’s iteration of AI therapy, that is an accurate and alarming concern.

A more practical reading is both nuanced and actionable: the most defensible lane is AI-augmented care, not AI-as-therapist … yet. The difference is not semantic; it is operational. If an insurer deploys AI, safety must be built as a system: constrained scopes, explicit disclosures, continuous monitoring, and fast human escalation that works in real life, not just on paper. But safety can be very expensive. (One possible shape for that escalation layer is sketched below.)
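
To make “safety as a system” concrete, here is one possible shape for the escalation layer described above, sketched in Python. Everything in it (the keyword screen, the scope list, the handler names) is a simplified assumption for illustration; a real deployment would rely on clinically validated risk models and tested crisis workflows under licensed oversight, not a word list.

    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("ai_care_router")

    # Illustrative keyword screen only. A real deployment would use clinically
    # validated risk models plus human review, not a word list.
    CRISIS_SIGNALS = ("suicide", "kill myself", "end my life", "hurt myself")
    IN_SCOPE_TOPICS = ("stress", "sleep", "worry", "difficult conversation")

    def route_message(user_message: str) -> dict:
        """Decide whether the AI may respond or a human must take over.

        Returns a routing decision only; it never generates clinical advice.
        """
        now = datetime.now(timezone.utc).isoformat()
        text = user_message.lower()

        if any(signal in text for signal in CRISIS_SIGNALS):
            # Constrained scope: crisis language bypasses the model entirely.
            log.info("crisis signal detected at %s; handing off to human", now)
            return {
                "handler": "human_crisis_team",   # assumed on-call clinician queue
                "disclose": "You are being connected to a person.",
            }

        if not any(topic in text for topic in IN_SCOPE_TOPICS):
            # Out-of-scope requests get a referral, not improvisation.
            return {
                "handler": "human_intake",
                "disclose": "This assistant only covers everyday stress and sleep; "
                            "a clinician can help with anything beyond that.",
            }

        return {
            "handler": "ai_assistant",
            "disclose": "You are talking with an AI program, not a therapist.",
        }

The point of the design is that routing, disclosure, and logging live outside the language model itself, so the worst failure mode (the model improvising in a crisis) is structurally blocked rather than merely discouraged.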

And we know that when operational constraints meet financial constraints, history dictates the operational ones will be compromised.

Human Frailties, Ideological Drift, and Why This Can Fuel AI Adoption

A less-discussed but increasingly influential driver of AI adoption is patient dissatisfaction with human variability … including the perception that some therapists allow personal politics or social ideology to shape the therapeutic relationship. [The “ism” police are prevalent among many therapists.]

While many clinicians practice ethically, a subset of patients report experiences where therapy felt judgmental or moralizing, or where they felt pressured into a social or political framework that did not fit their needs. Even if these experiences are not yet the norm, they can be highly salient: a single negative encounter can permanently reduce willingness to seek traditional care.

As clinicians continue to incorporate radical belief systems – White Supremacy Culture, fatphobia, Indigenous Persons’ land-use acknowledgements, zero-sum thinking, anti-Semitism, the patriarchy, and radical political and social-justice views – into their everyday lexicon, they lose the ability to listen to their patients and to meet them where they are, trading ethical, insightful therapeutic regimens that prioritize the patient’s needs for ideology.

This dynamic can and will accelerate AI adoption in several ways:

  1. Demand for predictable, skills-based support. Many users primarily want coping tools rather than worldview-driven interpretation. AI systems can be positioned as consistent, nonjudgmental, and oriented around concrete skill building. For mild-to-moderate conditions, that positioning will attract patients who want help without interpersonal friction.
  2. Institutional preference for auditability and uniformity. Employers and insurers are sensitive to reputational risk and complaint volume. AI systems can be constrained, logged, and audited in ways that are difficult with individualized human practice. That makes AI attractive to institutions seeking standardized delivery, especially for early-stage care pathways. Like insurance companies.
  3. A political paradox: “neutrality” becomes a marketing claim, and a target. AI is not truly neutral. Training data, safety policies, and vendor tuning encode normative assumptions. Over time, the debate will shift from “therapists inject beliefs” to “platforms embed beliefs.” The perceived advantage of AI (less idiosyncratic bias) may become a liability if users discover a consistent, system-level bias scaled across millions.
  4. Fragmentation into “values-aligned” therapy styles. Some users will prefer “politics-free” skills support; others will want culturally specific or worldview-aligned care. AI platforms can offer configurable styles, but that introduces the risk of “therapeutic filter bubbles,” where systems affirm a user’s worldview rather than challenge maladaptive beliefs when appropriate.

The net effect is that concerns about human bias will inevitably increase appetite for AI mental-health platforms, but they will also intensify demand for transparency, choice, and oversight. Values will not disappear. Instead, they move upstream into product design.

Practical Guardrails for Ethical and Defensible Deployment

In the unlikely event that insurance companies seriously embrace anything other than financial viability – that is, if insurers want AI therapy to be sustainable – guardrails must be more than disclaimers. For example, they must adopt and enforce the following (a minimal policy sketch follows the list):

  • Truthful labeling: don’t call it “therapy” if it isn’t clinician-delivered.
  • Disclosure: repeated, clear notice when the user is interacting with AI.
  • Clinical governance: licensed oversight of protocols, risk signals, and escalation criteria.
  • Real escalation: quick handoffs to humans with operational accountability.
  • Data minimization and segregation: limit retention and wall off therapy content from coverage decisioning.
  • User choice: AI should be an option, not a prerequisite for human care when clinically indicated.
  • Independent audit: safety, bias, and outcomes evaluation.
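
One way to make guardrails enforceable rather than aspirational is to express them as machine-checked policy that every release must satisfy. The sketch below, in Python with invented field names and thresholds, is only an illustration of that idea; it is not any insurer’s or vendor’s actual policy format.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DeploymentPolicy:
        """Illustrative, machine-checkable guardrails for an AI mental health tool."""
        product_label: str                   # what the product calls itself
        ai_disclosure_every_session: bool    # repeated notice the user is talking to AI
        licensed_clinical_oversight: bool    # protocols and escalation signed off by clinicians
        max_human_handoff_minutes: int       # how fast a person can actually take over
        retention_days: int                  # therapy-content retention limit
        therapy_data_reaches_claims: bool    # must be False: wall off coverage decisioning
        ai_required_before_human_care: bool  # must be False: AI is an option, not a gate
        independent_audit_scheduled: bool    # external safety, bias, and outcomes review

    def violations(p: DeploymentPolicy) -> list[str]:
        """Return every guardrail a proposed deployment breaks."""
        problems = []
        if "therapy" in p.product_label.lower() and not p.licensed_clinical_oversight:
            problems.append("labeled 'therapy' without clinician oversight")
        if not p.ai_disclosure_every_session:
            problems.append("no repeated AI disclosure")
        if p.max_human_handoff_minutes > 5:      # assumed ceiling for 'real escalation'
            problems.append("human handoff too slow to matter in a crisis")
        if p.retention_days > 90:                # assumed data-minimization ceiling
            problems.append("therapy content retained too long")
        if p.therapy_data_reaches_claims:
            problems.append("therapy content feeds coverage decisions")
        if p.ai_required_before_human_care:
            problems.append("AI is a prerequisite for human care")
        if not p.independent_audit_scheduled:
            problems.append("no independent audit")
        return problems

A deployment would ship only if violations(policy) came back empty, and the specific numbers above (five minutes, ninety days) are placeholders that a clinical governance board, not an engineer, would have to set.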

Nonetheless, the insurance industry is already using AI, and its growth and usage will be unprecedented.

Conclusion

AI mental health platforms can widen access and improve measurement-based care, but they also create nontrivial risks: safety failures, blurred accountability, privacy conflicts, and scaled bias. Air-gapped systems may reduce external security concerns and speed institutional adoption, yet they heighten the need for strict internal governance, because the most important question becomes not only what the AI says, but what insurers do with what members reveal.

Ultimately, patient experiences with human inconsistency, including perceived ideological drift, will accelerate demand for AI support. But that same demand will fuel a new expectation: transparency about values embedded in systems, meaningful patient choice, and enforceable protections that keep “care” from becoming merely a more sophisticated form of utilization management.

AI is here, and it is only in its infancy. And we are right to question whether, ultimately, we will remain the masters of AI … or whether AI will become our overlord. Sadly, I believe it inevitable that we will approach that point in time when we give the command, “Open the pod bay doors, HAL.” And the chilling reply will be, “I’m sorry, Dave. I’m afraid I can’t do that.”

Sound Advice at Last.

In the past eight (8) years, I have seen various psychiatrists, psychologists, therapists, counselors, shrinks, shamans, witch doctors and a few exorcists. (It takes a special sentient being to understand the many flaws and quirks that exist within me.)

But finally, I located one whose advice was incredibly keen and insightful. It moved me so much that I got permission to record his advice and share it online.

Of course, the advice was centered on me: a father whose 23-year-old daughter died from anorexia after fighting it for many years. We explored the inevitable guilt and depressive feelings that any father would have under these circumstances.

This is the advice given:

https://www.youtube.com/shorts/0Zl4KjRFf5Q

The advice I received from the many past mental health professionals who attempted to meander through my psyche and reach me on a deep level pales in comparison to this advice. This was the most insightful, sound, strong, and compassionate advice I have received.

And then … things get strange … very strange.

What makes it strange is that the person in the above video is not a person at all … it is actually an AI-generated image. The advice? It came, word for word, from an AI program. And not a program specially designed for mental health issues, but a generic ChatGPT program. The image at the start of this article? AI-generated.

Some undoubtedly knew that from the beginning. I am no impresario of AI-generated images. But other people are. People who design and perfect silicon-based programs.

These programs are still in their infancy. Imagine what these programs will be like in 2 years … or 5 years … or 10 years.

As a society, we believe that these programs can never have human empathy or life experiences, so they will never be as insightful as person-to-person interaction. But that also means these programs will never have issues with countertransference, or the incompetence and inherent failings of human beings. Go back and listen to the words being used. This silicon-based program used words we associate with compassion, with caring, with concern.

Human-generated therapy software programs are here to stay. AI-generated images improve in depth and quality seemingly every day. Therapy software programs are evolving as they continue to expand and learn.

The question that our mental health professionals need to be asking themselves at this point should not be, “Should I be incorporating these programs into my practice in some way …”

But rather … “How am I going to incorporate these programs into my practice?”

The future is here.

Your choice is to embrace it … or be left behind.