AI-Generated “Therapists”: Promise, Peril, and What’s Next?

In November 2025, Joe Braidwood, a co-founder of “Yara AI,” chose to shutter his AI therapy product after concluding it posed unacceptable risks for people with serious mental health issues. This is but the latest chapter in the cautionary tale surrounding the proliferation of AI therapy.

Mr. Braidwood stated, in part: “We stopped Yara because we realized we were building in an impossible space. AI can be wonderful for everyday stress, sleep troubles, or processing a difficult conversation. But the moment someone truly vulnerable reaches out – someone in crisis, someone with deep trauma, someone contemplating ending their life – AI becomes dangerous. Not just inadequate. Dangerous.”

“The gap between what Ai can safely do and what desperate people need isn’t just a technical problem. It’s an existential one. And startups, facing mounting regulations and unlimited liability, aren’t the right vehicles to bridge it.”

“… the mental health crisis isn’t waiting for us to figure out the perfect solution. People are already turning to AI for support. They deserve better than what they’re getting from generic chatbots.”

After shutting down Yara AI, Mr. Braidwood, to his immense credit, jumped into the next chapter … how to make AI programs safer. He announced the launch of GLACIS Technologies, his attempt to contribute to the infrastructure of AI safety:

https://www.linkedin.com/pulse/from-heartbreak-infrastructure-why-were-building-glacis-joe-braidwood-uzulc/

Read his words again: “… someone in crisis, someone with deep trauma, someone contemplating ending their life – AI becomes dangerous. Not just inadequate. Dangerous.” “… [it] isn’t just a technical problem. It’s an existential one. And startups, facing mounting regulations and unlimited liability, aren’t the right vehicles to bridge it.”

An existential, dangerous problem that startups are not equipped to handle. Consider that reality. And yet, the underlying issue is snowballing at an alarming rate.

This past year, Harvard Business Review research found that the top use of generative AI was … “Companionship and Therapy.”

The global AI-in-healthcare market is projected to grow rapidly, from approximately $37.09 billion in 2025 to more than $427 billion by 2032, a compound annual growth rate (CAGR) of over 40%.
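
As a quick arithmetic check of that projection, using its own endpoints ($37.09 billion in 2025, roughly $427 billion in 2032) over a seven-year span, the implied CAGR works out to roughly 42%:

```python
# CAGR check using the projection's own endpoints (figures in billions of USD).
start_value = 37.09          # projected 2025 market size
end_value = 427.0            # projected 2032 market size
years = 2032 - 2025          # seven compounding periods

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # ~41.8%, consistent with "over 40%"
```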

In 2025, 22% of healthcare organizations reported having already implemented domain-specific AI tools, a significant increase from just 3% two years prior. A 2024 survey noted that 66% of U.S. physicians were using some form of AI, up from 38% in 2023.

The U.S. Food and Drug Administration (FDA) has authorized more than 1,200 AI- or machine learning-enabled medical devices to date, indicating increasing regulatory acceptance and the transition of AI from research to clinical practice.

On October 21, 2025, Menlo Ventures released an extensive article on AI in healthcare.

So, to whom shall we entrust this existential, potentially dangerous issue? Or, for that matter, does it really matter to whom society “entrusts” the development of this generational, life-altering technology? We already know which industry will pioneer the way, developing the technology that will “address” our mental health needs in the future. And its motivation is far from altruistic.

Insurance companies.

Insurance companies are increasingly investing in AI-driven mental health tools that are “intended” to offer immediate, scalable support.

So why does the insurance industry want AI programs in the mental health field?

The Case For: Why Insurers Want AI in Mental Health

1) Access, Speed, and Convenience. In many regions, patients wait weeks for an initial appointment. A 24/7 platform can provide immediate support, especially for low-acuity needs such as stress management, sleep hygiene, and mild-to-moderate anxiety symptoms.

2) Standardization and Protocol Fidelity. AI systems can deliver structured interventions consistently, reduce clinician “drift” from evidence-based protocols, and prompt ongoing practice of therapeutic skills. For payers, this is attractive because standardization is measurable and scalable.

3) Measurement-Based Care at Scale. AI can administer screeners, track symptom trends, and support follow-through between sessions. When used under clinician governance, this can improve continuity and help identify deterioration earlier (see the sketch following this list).

4) Cost Containment and System Efficiency. The economic case is straightforward: lower-cost interventions for appropriate cases, and potentially fewer downstream costs if early support prevents escalation.
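
To make item 3 concrete, here is a minimal sketch of what machine-administered measurement-based care could look like: scoring a standard symptom screener over time and flagging deterioration for clinician review. The nine-item, 0–3 scale mirrors a PHQ-9-style instrument, and the thresholds and names are illustrative assumptions, not any vendor’s actual product.

```python
from dataclasses import dataclass
from datetime import date

# Illustration only: a PHQ-9-style screener (nine items, each scored 0-3)
# administered between sessions, with a simple deterioration flag routed to a
# clinician. Thresholds are assumptions, not clinical guidance.

@dataclass
class ScreenerResult:
    administered_on: date
    item_scores: list[int]          # nine items, each 0-3

    @property
    def total(self) -> int:
        return sum(self.item_scores)

def flag_deterioration(history: list[ScreenerResult],
                       jump_threshold: int = 5,
                       severe_threshold: int = 20) -> bool:
    """Flag for human review if the latest total is severe or jumps sharply."""
    if not history:
        return False
    latest = history[-1].total
    if latest >= severe_threshold:
        return True
    return len(history) >= 2 and latest - history[-2].total >= jump_threshold

# Example: two administrations two weeks apart.
history = [
    ScreenerResult(date(2025, 11, 1), [1, 1, 1, 0, 1, 1, 0, 1, 0]),   # total 6
    ScreenerResult(date(2025, 11, 15), [2, 2, 2, 1, 2, 1, 1, 1, 0]),  # total 12
]
print(flag_deterioration(history))   # True: a six-point jump triggers review
```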

The Case Against: Clinical, Legal, and Ethical Risks

1) Therapy Without Clear Clinical Accountability. When a human clinician provides psychotherapy, licensing and standards of care create identifiable responsibility. [Responsibility that seems to be increasingly overlooked or ignored.] With AI-only services, accountability becomes diffuse, spread among the vendor, the insurer, the developer, or “the user,” which is a poor fit for high-stakes mental health care.

2) Safety in High-Risk Scenarios. Crisis states such as suicidality, self-harm, psychosis, and domestic violence are exactly where failure is most consequential. AI systems can miss context, misinterpret signals, or provide responses that inadvertently increase risk.

3) Mistriage and Oversimplification. Even good clinicians mistriage. AI can compound the problem if it lacks nuance around comorbidities, trauma histories, neurodiversity, or cultural context. False reassurance is dangerous; excessive escalation can overwhelm human systems.

4) Privacy and Conflict of Interest. Insurance is structurally sensitive. It sits where health data meets claims management and utilization decisions. If therapy content feeds decision making, or even creates a reasonable fear that it could, patients may self-censor, undermining care.

The “Fortune / Yara” Inflection Point … and the Counter-Lesson

The Yara shutdown, as reported, is primarily cited for a blunt conclusion: that even with guardrails, AI therapy may be too dangerous for people with serious mental health issues. In today’s iteration of AI therapy, that is an accurate and alarming concern.

A more practical reading is both more nuanced and more actionable: the most defensible lane is AI-augmented care, not AI-as-therapist … yet. The difference is not semantic; it is operational. If an insurer deploys AI, safety must be built as a system: constrained scopes, explicit disclosures, continuous monitoring, and fast human escalation that works in real life, not just on paper. But safety can be very expensive.

And we know that when operational constraints meet financial constraints, history dictates that the operational constraints will be the ones compromised.

Human Frailties, Ideological Drift, and Why This Can Fuel AI Adoption

A less-discussed but increasingly influential driver of AI adoption is patient dissatisfaction with human variability … including the perception that some therapists allow personal politics or social ideology to shape the therapeutic relationship. [The “ism” police are prevalent among many therapists.]

While many clinicians practice ethically, a subset of patients report experiences where therapy felt judgmental or moralizing, or where they felt pressured into a social or political framework that did not fit their needs. Even if these experiences are not yet the norm, they can be highly salient: a single negative encounter can permanently reduce willingness to seek traditional care.

As clinicians continue to incorporate radical belief systems such as White Supremacy Culture, fatphobia, Indigenous Persons’ Land Use Acknowledgements, zero-sum-game thinking, anti-Semitism, the patriarchy, and radical political and social justice views into their everyday lexicon, they lose the ability to listen to their patients and to meet them where they are, abandoning ethical, insightful therapeutic regimens in which the patient’s needs come first.

This dynamic can and will accelerate AI adoption in several ways:

  1. Demand for predictable, skills-based support. Many users primarily want coping tools rather than worldview-driven interpretation. AI systems can be positioned as consistent, nonjudgmental, and oriented around concrete skill building. For mild-to-moderate conditions, that positioning will attract patients who want help without interpersonal friction.
  2. Institutional preference for auditability and uniformity. Employers and insurers are sensitive to reputational risk and complaint volume. AI systems can be constrained, logged, and audited in ways that are difficult with individualized human practice. That makes AI attractive to institutions seeking standardized delivery, especially for early-stage care pathways. Like insurance companies.
  3. A political paradox: “neutrality” becomes a marketing claim, and a target. AI is not truly neutral. Training data, safety policies, and vendor tuning encode normative assumptions. Over time, the debate will shift from “therapists inject beliefs” to “platforms embed beliefs.” The perceived advantage of AI (less idiosyncratic bias) may become a liability if users discover a consistent, system-level bias scaled across millions.
  4. Fragmentation into “values-aligned” therapy styles. Some users will prefer “politics-free” skills support; others will want culturally specific or worldview-aligned care. AI platforms can offer configurable styles, but that introduces the risk of “therapeutic filter bubbles,” where systems affirm a user’s worldview rather than challenge maladaptive beliefs when appropriate.

The net effect is that concerns about human bias will inevitably increase appetite for AI mental health platforms, but they will also intensify demand for transparency, choice, and oversight. Values will not disappear. Instead, they move upstream into product design.

Practical Guardrails for Ethical and Defensible Deployment

In the unlikely event that insurance companies seriously embrace concerns beyond financial viability, and if insurers want AI therapy to be sustainable, guardrails must be more than disclaimers. For example, they must adopt and enforce the following (a minimal sketch of what this might look like in practice appears after the list):

  • Truthful labeling: don’t call it “therapy” if it isn’t clinician-delivered.
  • Disclosure: repeated, clear notice when the user is interacting with AI.
  • Clinical governance: licensed oversight of protocols, risk signals, and escalation criteria.
  • Real escalation: quick handoffs to humans with operational accountability.
  • Data minimization and segregation: limit retention and wall off therapy content from coverage decisioning.
  • User choice: AI should be an option, not a prerequisite for human care when clinically indicated.
  • Independent audit: safety, bias, and outcomes evaluation.

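None of these guardrails is self-executing; each has to show up in the software and the operations behind it. As a purely illustrative sketch, assuming hypothetical keyword lists, messages, and storage names rather than any insurer’s or vendor’s actual system, the disclosure, escalation, and data-segregation items might look something like this at the code level:

```python
# Illustrative sketch only. The keyword list, messages, and storage split are
# assumptions for illustration, not a production safety system.

DISCLOSURE = ("Reminder: you are chatting with an AI support tool, not a "
              "licensed clinician. A human is available at any time.")

# Deliberately over-inclusive: a false positive routes toward a human, never away.
CRISIS_PHRASES = ("suicide", "kill myself", "end my life", "hurt myself")

# Segregation: therapy content and coverage/claims data live in separate stores,
# and the chat pathway only ever writes to the therapy store.
therapy_store: list[str] = []
claims_store: dict[str, str] = {}    # never touched by this code path

def handle_message(text: str) -> str:
    """Route one user message under the guardrails listed above."""
    therapy_store.append(text)       # minimized retention enforced elsewhere

    if any(phrase in text.lower() for phrase in CRISIS_PHRASES):
        # Real escalation: immediate handoff to a human with accountability.
        return "Connecting you with a human counselor right now."

    # Repeated, clear disclosure accompanies every AI-generated reply.
    return DISCLOSURE + "\n(AI-generated reply would follow here.)"

print(handle_message("I can't sleep the night before big meetings."))
print(handle_message("I keep thinking I should just end my life."))
```

The point is not the specific code; it is that each bullet must map to an enforceable, auditable behavior rather than a disclaimer.
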
Nonetheless, the insurance industry is already using AI. Its growth and usage will be unprecedented.

Conclusion

AI mental health platforms can widen access and improve measurement-based care, but they also create nontrivial risks: safety failures, blurred accountability, privacy conflicts, and scaled bias. Air-gapped systems may reduce external security concerns and speed institutional adoption, yet they heighten the need for strict internal governance, because the most important question becomes not only what the AI says, but what insurers do with what members reveal.

Ultimately, patient experiences with human inconsistency, including perceived ideological drift, will accelerate demand for AI support. But that same demand will fuel a new expectation: transparency about values embedded in systems, meaningful patient choice, and enforceable protections that keep “care” from becoming merely a more sophisticated form of utilization management.

AI is here, and it is only in its infancy. And we are right to question whether, ultimately, we will remain the masters of AI … or whether AI will become our overlord. Sadly, I believe it inevitable that we will approach that point in time when we give the command, “Open the pod bay doors, HAL.” And the chilling reply will be, “I’m sorry, Dave. I’m afraid I can’t do that.”

Sound Advice at Last.

In the past eight (8) years, I have seen various psychiatrists, psychologists, therapists, counselors, shrinks, shamans, witch doctors and a few exorcists. (It takes a special sentient being to understand the many flaws and quirks which exist within me.)

But finally, I located one whose advice was incredibly keen and insightful. It moved me so much that I got permission to record his advice and share it online.

Of course, the advice was centered on me, a father whose 23-year-old daughter died from anorexia after fighting it for many years. We explored the inevitable guilt and depressive feelings that any father would have under these circumstances.

This is the advice given:

https://www.youtube.com/shorts/0Zl4KjRFf5Q

The advice I received from the many past mental health professionals who attempted to meander through my psyche, trying to reach me on a deep level, pales in comparison to this advice. This was the most insightful, sound, strong, and compassionate advice I have received.

And then … things get strange … very strange.

What makes it strange is that the person in the above video is not a person at all … it is an AI-generated image. The advice? It came, word for word, from an AI program. And not a program specially designed for mental health issues, but generic ChatGPT. The image at the start of this article? AI-generated.

Some undoubtedly knew that from the beginning. I am no impresario of AI-generated images. But other people are: people who design and perfect silicon-based programs.

These programs are still in their infancy. Imagine what these programs will be like in 2 years … or 5 years … or 10 years.

As a society, we believe that these programs can never have human empathy or life experiences, so they will never be as insightful as person-to-person interaction. But that also means these programs will never have issues with countertransference or the incompetence and inherent failings of human beings. Go back and listen to the words being used. This silicon-based program used words we associate with compassion, with caring, with concern.

Human-generated therapy software programs are here to stay. AI-generated images improve in depth and quality seemingly every day. Therapy software programs are evolving as they continue to expand and learn.

The question that our mental health professionals need to be asking themselves at this point should not be, “Should I be incorporating these programs into my practice in some way …”

But rather … “How am I going to incorporate these programs into my practice?”

The future is here.

Your choice is to embrace it … or be left behind.

Double Effect and Physician-Assisted Suicide

With the legalization of Physician-Assisted Suicide (“PAS”) for mental disorders set to take effect in Canada on March 17, 2024, both proponents and opponents are making last-ditch efforts to forestall or support its implementation.

The statutory law is complex, extensive, and awash in legalese, so I am embedding a link to it:

https://www.parl.ca/documentviewer/en/44-1/AMAD/report-2/page-ToC

A Reader’s Digest version of this law, as it pertains to “mental disorders,” and presuming the “mental disorder” does not result in a natural death that is reasonably foreseeable, is as follows:

Safeguards for persons whose natural death is not reasonably foreseeable.

The following procedural safeguards apply to persons whose natural death is not reasonably foreseeable (*indicates safeguards specific to those requests):

  • request for MAID must be made in writing: a written request must be signed by one independent witness, and it must be made after the person is informed that they have a “grievous and irremediable medical condition” (a paid professional personal or health care worker can be an independent witness);
  • two independent doctors or nurse practitioners must provide an assessment and confirm that all of the eligibility requirements are met;
    • *if neither of the two practitioners who assesses eligibility has expertise in the medical condition that is causing the person’s suffering, they must consult with a practitioner who has such expertise;
  • the person must be informed that they can withdraw their request at any time, in any manner;
  • *the person must be informed of available and appropriate means to relieve their suffering, including counselling services, mental health and disability support services, community services, and palliative care, and must be offered consultations with professionals who provide those services;
  • *the person and the practitioners must have discussed reasonable and available means to relieve the person’s suffering, and agree that the person has seriously considered those means;
  • *the eligibility assessments must take at least 90 days, but this period can be shortened if the person is about to lose the capacity to make health care decisions, as long as both assessments have been completed;
  • immediately before MAID is provided, the practitioner must give the person an opportunity to withdraw their request and ensure that they give express consent.

To provide greater insight, I am embedding testimony taken in May 2022 before the Special Joint Committee on Medical Assistance in Dying:

https://www.parl.ca/DocumentViewer/en/44-1/AMAD/meeting-9/evidence

This site contains much of the evidence and testimony elicited when the Canadian law was being vetted. And, of course, there are a number of matters of concern contained within the Report and testimony.

For example, with regard to the crucially important “Balancing Individual Autonomy and the Protection of the Vulnerable,” the Committee’s findings constituted only four (4) short paragraphs and ended with the following conclusion: “The committee recognizes that a delicate balance must be struck between promoting individual autonomy and protecting against socio-economic vulnerabilities.”

We already have an adequate grasp of the painfully obvious. Perhaps the Committee should have aimed beyond the merely obvious conclusion.

Under the Canadian law, the Committee stated, “To be eligible for MAID, a person must have a ‘grievous and irremediable medical condition.’ As Jennifer Chandler explained, “irremediable” is not a medical or scientific term. Rather, as noted above, “grievous and irremediable” is defined in the law as incurability, being in an advanced state of irreversible decline, and “enduring physical or psychological suffering that is intolerable to [the person] and that cannot be relieved under conditions that [the person] consider[s] acceptable.”

Because of this wording, a person must meet ALL of these criteria to be eligible. Further, if we are to use that definition, doesn’t it necessarily exclude all instances of anorexia nervosa? Incurability? Anorexia?

With regard to minors, the Committee stated, “In Canada, a person must be at least 18 years old to access MAID. However, minors with the requisite capacity are generally entitled to make their own healthcare decisions. The exact parameters of minor consent to healthcare vary by province.” The Committee further noted, “The term ‘mature minor’ refers to a common law doctrine according to which ‘an adolescent’s treatment wishes should be granted a degree of deference that is reflective of his or her evolving maturity.’”

Minors. Our teenagers. Our children. The Committee also found, “In the Netherlands, MAID is allowed for minors aged 12 and over, and may soon be expanded to include younger children. In Belgium, there is no minimum age, so long as the minor has the requisite capacity.”

So, are we to allow young people, our children, whose brains are not biologically developed, let alone mature, to make life-or-death decisions? Where is the morality in that?

Principle of Double Effect

Which brings us to the issue of a just society and the morality not only of medical professionals making this life-or-death call, but of the very act in question. To address this, we turn to the Principle of Double Effect (the “Principle”).

The Principle has its historical roots in the medieval natural law tradition, especially in the thought of St. Thomas Aquinas (1225-1274). It has been refined both in its general formulation and in its application by generations of Catholic moral theologians[1].

Although there has been significant disagreement about the precise formulation of this principle, it generally states that, in cases where a contemplated action has both good effects and bad effects, the action is permissible only if it is not wrong in itself and if it does not require that one directly intend the evil result.

Classical formulations of the Principle of Double Effect require that four conditions be met if the action in question is to be morally permissible:

  1. First, that the action contemplated be, in itself, either morally good or morally indifferent;
  2. Second, that the bad result not be directly intended;
  3. Third, that the good result not be a direct causal result of the bad result; and
  4. Fourth, that the good result be “proportionate to” the bad result.

Supporters of the Principle argue that, in situations of “double effect” where all these conditions are met, the action under consideration is morally permissible despite the bad result[2].

The Principle is regularly invoked in ethical discussions about palliative sedation, terminal extubation, and other clinical acts that may be viewed as hastening death for imminently dying patients. Unfortunately, the literature tends to employ this useful principle in a fashion suggesting that it offers the final word on the moral acceptability of such medical procedures. In fact, the rule cannot be applied appropriately without invoking moral theories that are not explicit in the rule itself. The four tenets of the rule each require their own ethical justification. For example, the third condition necessarily invokes the Pauline Principle, which states, “One should never do evil so that good may come.”

Some ethicists believe that if a suffering, terminally ill patient dies because of intentionally receiving pain-relieving medications, it makes a difference whether the death itself was intended or merely anticipated. If the death was intended, it is wrong; but if the death was merely anticipated, it might be morally acceptable[3].

Philosophers and medical ethicists have speculated that, “According to this Principle, euthanasia and physician-assisted suicide are always illicit acts, while the same is not said for other actions that bring about patient’s death as a foreseen effect, namely, palliative treatments that hasten death or failure or interruption of life support. The reason for this difference is that, in the first two cases, the patient’s death is intended as a means of pain relief; whereas, in the latter two, death is only a side effect of a medical act, an act justifiable if it is necessary to achieve a proportionate good.”

We also need to question whether the moral character of an action is the same as the physical performance of that action. Dr. Paulina Taboada addressed this question as follows: “But the physical performance of an action (actus hominis) does not necessarily coincide with a moral act. Only an action in which human freedom is exercised (actus humanus) can be morally qualified. A moral act is essentially an act in which human freedom is exercised. This means that the moral act itself is marked by an ‘intrinsic intentionality’; it tends towards an object (called moral object).”

Dr. Taboada then stated, “Hence, the moral act cannot be properly characterized by describing a mere physical performance. In order to find out which is the kind of moral act we are performing (i.e., the ‘moral species’ of the act), the key question is: What are you doing? And an answer like “injecting morphine to this patient” would not do it. The proper answer to this question – relieving pain – reveals the ‘intrinsic intentionality’ of the moral act. An analysis of the lived ethical experience shows that the moral character of our free acts is basically determined by this ‘intrinsic intentionality’ of the act, i.e., by the kind (‘species’) of act we perform.”

Dr. Taboada then concluded, “A careful analysis of our most basic human moral experience shows that the ethical character of human acts does not primarily depend on the motivation or intention of the agent, but on the moral species of the action to be performed. Hence, the common saying ‘the end does not justify the means’.”[4]

The Canadian law, at best, paid lip service to this incredibly complex issue. An issue that touches not only our existence, but the very heart of our humanity. Faith. The Soul. Life. Death.

Our society seems to be in such a rush to show all others that we are capable of performing a certain act better than it has ever been done … and thus show our individual wisdom and humanity. In doing so, however, we have lost sight of the question we need to be exploring, that is, “Should we do this act?”

A question that our medical and mental health providers certainly cannot answer. Perhaps there is no answer. And yet, if we do not keep exploring the boundaries of our being, our imagination, our very lives, we will continue to fail. We will fail on a generational level.

We cannot, and do not, have the luxury of taking action without seeking wisdom from all interested parties. We must work toward being open to options we never previously considered. We must strive to chart the unknown and unlimited possibilities of existence.

And we can only do that if we take all reasonable and necessary steps to preserve the sanctity of life.


[1] http://sites.saintmarys.edu/~incandel/doubleeffect.html

[2] https://pubmed.ncbi.nlm.nih.gov/3080130/

[3] https://medicine.missouri.edu/centers-institutes-labs/health-ethics/faq/euthanasia

[4] https://hospicecare.com/policy-and-ethics/ethical-issues/essays-and-articles-on-ethics-in-palliative-care/shaws-criticism-to-the-double-effect-doctrine/