The March 25, 2026 jury verdict in Los Angeles against Meta and Google, paired with the $375 million New Mexico verdict against the same companies the day before, marks a structural shift in how courts conceptualize harm arising from social media platforms. These cases do not merely expand liability. They reframe the legal ontology of digital platforms: from neutral intermediaries to potentially defective consumer products.

In this article, we will explore what this means for how we understand eating disorders … and what therapists and clinicians should know.

For eating disorders, conditions already deeply entangled with algorithmic amplification, body image distortion, and compulsive engagement, this shift in platform liability is particularly consequential. The emerging litigation theory may, for the first time, provide a coherent legal pathway for attributing causation and duty in eating disorder-related harm.

Plaintiffs prevailed in the recent Meta/Google cases because they changed the theory of liability. The old framing was, “You allowed harmful content to exist.” Federal law, most notably Section 230, immunized platforms against that theory. Case dismissed.

The new framing is, “You designed a system that predictably causes harm.” This is the doctrinal pivot. The plaintiffs introduced evidence that the platforms knew about the harm (e.g., to teens, body image, eating disorder risk) but continued optimizing for engagement anyway. That evidence supports claims of negligence, recklessness, and malice. It also strengthens the argument that the wrongdoing lies in corporate decision-making, not user content.

Why This Matters Specifically for Eating Disorders

Eating disorder harm fits the “Design, Not Content” model argued in these courtrooms. Eating disorders are not typically triggered by a single post, but by repeated exposure, escalating comparison, and behavioral reinforcement. These are distinctly algorithmic phenomena.

Unlike traditional media, social media platforms can identify users who engage with dieting and body-comparison content and then increase their exposure to it. This supports a plaintiff’s argument that the harm is not incidental; it is systematically intensified. There is also substantial evidence that social comparison leads to body dissatisfaction and that repeated exposure leads to disordered eating behaviors.

This makes it easier to argue that the harm was predictable and foreseeable, and that safer design alternatives were available but disregarded.

The recent verdicts are significant not because they establish medical causation of eating disorders, but because they elevate platform design and algorithmic exposure into the realm of foreseeable mental health risk.

In effect, the verdicts reinforce three propositions that are directly relevant to clinical practice:

  1. Digital environments can function as risk-amplifying exposures, particularly for adolescents;
  2. Algorithmic curation is not neutral, but can intensify engagement with appearance-focused or psychologically harmful content; and
  3. Harm need not arise solely from user intent but may be driven by product design features.

From a standard-of-care perspective, these propositions are likely to influence what constitutes “reasonable” clinical conduct.

Even in the absence of formalized guidelines, foreseeability plays a central role in negligence analysis. As juries begin to recognize social media design as a source of mental health harm, clinicians may be expected to:

  • Screen for social media use with greater specificity (not merely duration, but type of content and engagement patterns);
  • Identify platform-related triggers (e.g., comparison behaviors, exposure to body-ideal content, reinforcement loops);
  • Incorporate digital environment management into treatment planning; and
  • Provide anticipatory guidance to patients and families regarding online risk factors.

Failure to do so may, over time, be framed as a deviation from evolving professional norms, even in the absence of codified standards.

Evolution of Standard of Care Through Litigation Rather Than Consensus

In fields lacking clear clinical standards, the standard of care often evolves through case law, expert testimony, and institutional practice patterns.

The Meta and Google verdicts may accelerate this process by:

  • Providing a judicially recognized framework for linking platform design to mental health harm;
  • Encouraging plaintiffs to incorporate digital exposure into causation narratives; and
  • Pressuring professional organizations to issue more explicit guidance in response.

In this sense, the verdicts may function as de facto catalysts for standard formation even if formal consensus lags behind.

Clinicians and treatment programs that proactively integrate digital-risk assessment may therefore position themselves more favorably relative to an emerging baseline of care.

Implications for Causation Frameworks in Eating Disorders

Historically, eating disorders have been understood through a multifactorial model, incorporating genetic predisposition, temperamental traits, family dynamics, trauma and sociocultural influences. The recent verdicts do not displace this model. However, they may recalibrate the weight assigned to environmental and systemic contributors, particularly those mediated through technology. Importantly, this shift may influence not only clinical practice, but also the narrative frameworks used in litigation and public discourse.

Anticipated Expansion of Social Justice and Structural Etiology Arguments

One of the more complex implications of these developments is the possible expansion of social justice-based etiological frameworks, including arguments that locate eating disorders within broader systems of oppression.

Within certain academic and advocacy contexts, eating disorders have increasingly been linked to:

  • Eurocentric beauty standards,
  • Fatphobia,
  • Structural inequities in healthcare access, and
  • Cultural norms associated with what has been termed “White supremacy culture” (e.g., perfectionism, control, individualism).

The Meta and Google verdicts may indirectly reinforce these perspectives in several ways:

1. Externalization of Harm

By attributing liability to platform design rather than solely to individual behavior, the verdicts support a broader shift toward externalizing causation. This aligns with social justice frameworks that emphasize systemic over individual factors.

2. Validation of Environmental Influence

The recognition of algorithmic amplification as harmful lends credibility to arguments that cultural and media environments actively shape pathology, rather than merely reflecting it.

3. Expansion of Duty Beyond the Individual

If platforms can be held liable for contributing to mental health harm, analogous arguments may be advanced that cultural systems, institutional practices, and dominant norms also bear some responsibility for shaping risk.

As a result, plaintiffs may increasingly incorporate cultural and systemic critiques, expert testimony on media ecology and sociocultural pressure, and arguments linking platform content to broader ideological frameworks into their causation narratives.

Tension Between Clinical Rigor and Expanding Etiological Narratives

While these developments may broaden the scope of inquiry, they also introduce tension. From a clinical and evidentiary standpoint, multifactorial models require specificity and measurable variables; overly diffuse causation theories risk diluting analytical precision.

From a legal standpoint, courts require evidence that is not only plausible, but attributable and proximate. Expansive social frameworks (e.g., “White supremacy culture”) may be more difficult to operationalize in a manner that satisfies evidentiary standards. Accordingly, while social justice perspectives may gain rhetorical and academic traction, their translation into clinical standards or legal causation will likely depend on the development of measurable constructs, empirical validation, and a clear linkage to individual harm.

Increased Eating Disorder Liability

For eating disorder-related claims, liability may no longer depend on identifying specific harmful posts. Instead, plaintiffs can target recommendation algorithms, engagement loops (likes, scrolling, autoplay), and behavioral reinforcement systems. This aligns directly with how eating disorder pathology operates: repetition, reinforcement, and escalation, not isolated exposure.

Historically, eating disorder-related litigation struggled with causation: eating disorders are multifactorial (genetics, trauma, culture), and courts viewed platform influence as too attenuated.

The recent verdicts suggest juries are now willing to accept alternative framings. The Los Angeles case framed harm through addiction mechanics: compulsive use, reinforcement loops, and diminished control. This maps closely onto eating disorder pathology: compulsive restriction, bingeing, or purging; reinforcement through comparison and validation; and escalating behavioral cycles.

Unlike traditional media, social media platforms learn user vulnerabilities and optimize content delivery accordingly. For eating disorder claims, this enables arguments that platforms did not merely expose users to harmful content. They systematically increased exposure based on detected susceptibility.

This is a qualitatively different form of causation, not passive distribution, but active behavioral shaping.

Among potential harm categories, eating disorders are uniquely positioned for litigation success because the harm is highly predictable. There is extensive internal and external research linking social comparison to body dissatisfaction and, in turn, to disordered eating. We now know that social media platforms can track repeated viewing of weight-loss content, thinspiration, and calorie-restriction narratives. This creates a potential evidentiary record of foreseeable harm combined with continued amplification.

Courts are especially receptive to claims involving harm to minors and failures to implement protective measures. Eating disorder onset often occurs during adolescence, aligning directly with peak social media usage and peak psychological vulnerability.

Long-Term Structural Changes

As a result of these cases, we may see the emergence of a “digital duty of care,” particularly for minors. Social media platforms may be held to standards similar to those in product safety law and pharmaceutical risk disclosure. Courts may formalize liability tied to predictive amplification of harm. And we may see legislation addressing youth-specific design standards, limits on engagement optimization, and/or mandatory transparency for algorithmic systems.

We may also see evolving clinical implications for eating disorders. Eating disorders may increasingly be viewed not only as psychiatric conditions, but as environmentally induced or exacerbated disorders linked to platform design.

Clinicians should begin to document social media exposure patterns and incorporate platform use into diagnostic frameworks. This could strengthen litigation evidence and insurance coverage arguments.

In addition, eating disorders may be reframed as partially technology-mediated disorders, paralleling the reframing of lung cancer (tobacco) and opioid addiction (pharmaceutical design and distribution).

The Meta and Google verdicts do not merely increase litigation risk; they signal a paradigm shift in how harm from digital systems is understood and adjudicated. For eating disorders, the implications are profound:

  • A viable legal theory now exists
  • Causation barriers are weakening
  • Platform design is becoming justiciable
  • Large-scale settlement frameworks are increasingly likely

Most importantly, these developments may redefine eating disorders not only as clinical phenomena, but as foreseeable outcomes of engineered environments optimized for engagement at the expense of psychological safety.

If this trajectory holds, the next phase of litigation will not ask whether platforms contributed to eating disorders, but to what extent, and at what cost.

AI-Generated “Therapists”: Promise, Peril, and What’s Next?

In November 2025, Joe Braidwood, a co-founder of “Yara Ai,” chose to shutter his AI therapy product after concluding it posed unacceptable risks for people with serious mental health issues. This is but the latest chapter in the cautionary tale of AI therapy’s proliferation.

Mr. Braidwood stated in part: “We stopped Yara because we realized we were building in an impossible space. AI can be wonderful for everyday stress, sleep troubles, or processing a difficult conversation. But the moment someone truly vulnerable reaches out – someone in crisis, someone with deep trauma, someone contemplating ending their life – AI becomes dangerous. Not just inadequate. Dangerous.”

“The gap between what AI can safely do and what desperate people need isn’t just a technical problem. It’s an existential one. And startups, facing mounting regulations and unlimited liability, aren’t the right vehicles to bridge it.”

“… the mental health crisis isn’t waiting for us to figure out the perfect solution. People are already turning to AI for support. They deserve better than what they’re getting from generic chatbots.”

After Mr. Braidwood terminated Yara Ai, to his immense credit he jumped into the next chapter … how to make AI programs safer. He announced the launch of GLACIS Technologies, an attempt to contribute to the infrastructure of AI safety:

https://www.linkedin.com/pulse/from-heartbreak-infrastructure-why-were-building-glacis-joe-braidwood-uzulc/

Read his words again: “… someone in crisis, someone with deep trauma, someone contemplating ending their life – AI becomes dangerous. Not just inadequate. Dangerous.” “… [it] isn’t just a technical problem. It’s an existential one. And startups, facing mounting regulations and unlimited liability, aren’t the right vehicles to bridge it.”

An existential, dangerous problem that startups are not equipped to handle. Consider that reality. And yet, the underlying issue is snowballing at an alarming rate.

This past year, Harvard Business Review research found that the top use of generative AI was … “Companionship and Therapy.”

The global AI in healthcare market is projected to grow rapidly, from approximately $37.09 billion in 2025 to over $427 billion by 2032, a compound annual growth rate (CAGR) of over 40%.
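As a quick sanity check on that projection (a back-of-the-envelope calculation, assuming only the 2025 and 2032 endpoints cited above), the implied annual growth rate over the seven-year span is:

  CAGR = (427 / 37.09)^(1/7) − 1 ≈ 0.42

That is, roughly 42% per year, consistent with the “over 40%” figure.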

In 2025, 22% of healthcare organizations reported having already implemented domain-specific AI tools, a significant increase from just 3% two years prior. A 2024 survey noted that 66% of U.S. physicians were using some form of AI, up from 38% in 2023.

The U.S. Food and Drug Administration (FDA) has authorized over 1,200 AI- or machine learning-enabled medical devices to date, indicating increasing regulatory acceptance and the transition of AI from research to clinical practice.

On October 21, 2025, Menlo Ventures released an extensive article on AI in healthcare.

So, to whom shall we entrust this existential, potentially dangerous issue? Or, for that matter, does it really matter to whom society “entrusts” the development of this generational, life-altering technology? We already know which industry will pioneer the way, developing the technology that will “address” our mental health needs in the future. And its motivation is far from altruistic.

Insurance companies.

Insurance companies are already increasingly investing in AI-driven mental health tools that are “intended” to offer immediate, scalable support.

So why does the insurance industry want AI programs in the mental health field?

The Case For: Why Insurers Want AI in Mental Health

1) Access, Speed, and Convenience. In many regions, patients wait weeks for an initial appointment. A 24/7 platform can provide immediate support, especially for low-acuity needs such as stress management, sleep hygiene, and mild-to-moderate anxiety symptoms.

2) Standardization and Protocol Fidelity. AI systems can deliver structured interventions consistently, reduce clinician “drift” from evidence-based protocols, and prompt ongoing practice of therapeutic skills. For payers, this is attractive because standardization is measurable and scalable.

3) Measurement-Based Care at Scale. AI can administer screeners, track symptom trends, and support follow-through between sessions. When used under clinician governance, this can improve continuity and help identify deterioration earlier.

4) Cost Containment and System Efficiency. The economic case is straightforward: lower-cost interventions for appropriate cases, and potentially fewer downstream costs if early support prevents escalation.

The Case Against: Clinical, Legal, and Ethical Risks

1) Therapy Without Clear Clinical Accountability. When a human clinician provides psychotherapy, licensing and standards of care create identifiable responsibility. [Responsibility which seems to be increasingly overlooked or ignored.] With AI-only services, accountability becomes diffuse (vendor, insurer, developer, or “the user”), which is a poor fit for high-stakes mental health care.

2) Safety in High-Risk Scenarios. Crisis states such as suicidality, self-harm, psychosis, and domestic violence are exactly where failure is most consequential. AI systems can miss context, misinterpret signals, or provide responses that inadvertently increase risk.

3) Mistriage and Oversimplification. Even good clinicians mistriage. AI can compound the problem if it lacks nuance around comorbidities, trauma histories, neurodiversity, or cultural context. False reassurance is dangerous; excessive escalation can overwhelm human systems.

4) Privacy and Conflict of Interest. Insurance is structurally sensitive. It sits where health data meets claims management and utilization decisions. If therapy content feeds decision-making, or even creates a reasonable fear that it could, patients may self-censor, undermining care.

The “Fortune / Yara” Inflection Point … and the Counter-Lesson

The Yara shutdown, as reported, is primarily cited for a blunt conclusion: that even with guardrails, AI therapy may be too dangerous for people with serious mental health issues. In today’s iteration of AI therapy, that is an accurate and alarming concern.

A more practical reading is more nuanced and more actionable: the most defensible lane is AI-augmented care, not AI-as-therapist … yet. The difference is not semantic; it is operational. If an insurer deploys AI, safety must be built as a system: constrained scopes, explicit disclosures, continuous monitoring, and fast human escalation that works in real life, not just on paper. But safety can be very expensive.

And we know that when operational constraints meet financial constraints, history dictates the operational constraints will be compromised.

Human Frailties, Ideological Drift, and Why This Can Fuel AI Adoption

A less discussed but increasingly influential driver of AI adoption is patient dissatisfaction with human variability … including the perception that some therapists allow personal politics or social ideology to shape the therapeutic relationship. [The “ism” police are prevalent among many therapists.]

While many clinicians practice ethically, a subset of patients report experiences where therapy felt judgmental or moralizing, or where they felt pressured into a social or political framework that did not fit their needs. Even if these experiences are not yet the norm, they can be highly salient: a single negative encounter can permanently reduce willingness to seek traditional care.

As clinicians continue to incorporate radical belief systems like White Supremacy Culture, fatphobia, Indigenous Persons’ land use acknowledgements, zero-sum-game thinking, anti-Semitism, the patriarchy, and radical political and social justice views into their everyday lexicon, they lose the ability to listen to their patients and to meet them where they are, sacrificing ethical, insightful therapeutic regimens in which the patient’s needs come first.

This dynamic can and will accelerate AI adoption in several ways:

  1. Demand for predictable, skills-based support. Many users primarily want coping tools rather than worldview-driven interpretation. AI systems can be positioned as consistent, nonjudgmental, and oriented around concrete skill building. For mild-to-moderate conditions, that positioning will attract patients who want help without interpersonal friction.
  2. Institutional preference for auditability and uniformity. Employers and insurers are sensitive to reputational risk and complaint volume. AI systems can be constrained, logged, and audited in ways that are difficult with individualized human practice. That makes AI attractive to institutions seeking standardized delivery, especially for early-stage care pathways. Like insurance companies.
  3. A political paradox: “neutrality” becomes a marketing claim, and a target. AI is not truly neutral. Training data, safety policies, and vendor tuning encode normative assumptions. Over time, the debate will shift from “therapists inject beliefs” to “platforms embed beliefs.” The perceived advantage of AI (less idiosyncratic bias) may become a liability if users discover a consistent, system-level bias scaled across millions.
  4. Fragmentation into “values-aligned” therapy styles. Some users will prefer “politics-free” skills support; others will want culturally specific or worldview-aligned care. AI platforms can offer configurable styles, but that introduces the risk of “therapeutic filter bubbles,” where systems affirm a user’s worldview rather than challenge maladaptive beliefs when appropriate.

The net effect is that concerns about human bias will inevitably increase the appetite for AI mental-health platforms, but they will also intensify demand for transparency, choice, and oversight. Values will not disappear. Instead, they will move upstream, into product design.

Practical Guardrails for Ethical and Defensible Deployment

In the unlikely event that insurance companies seriously embrace concerns beyond financial viability, and if insurers want AI therapy to be sustainable, guardrails must be more than disclaimers. For example, they must adopt and enforce:

  • Truthful labeling: don’t call it “therapy” if it isn’t clinician-delivered.
  • Disclosure: repeated, clear notice when the user is interacting with AI.
  • Clinical governance: licensed oversight of protocols, risk signals, and escalation criteria.
  • Real escalation: quick handoffs to humans with operational accountability.
  • Data minimization and segregation: limit retention and wall off therapy content from coverage decisioning.
  • User choice: AI should be an option, not a prerequisite for human care when clinically indicated.
  • Independent audit: safety, bias, and outcomes evaluation.

Nonetheless, the insurance industry is already using AI. Its growth and usage will be unprecedented.

Conclusion

AI mental health platforms can widen access and improve measurement-based care, but they also create nontrivial risks: safety failures, blurred accountability, privacy conflicts, and scaled bias. Air-gapped systems may reduce external security concerns and speed institutional adoption, yet they heighten the need for strict internal governance, because the most important question becomes not only what the AI says, but what insurers do with what members reveal.

Ultimately, patient experiences with human inconsistency, including perceived ideological drift, will accelerate demand for AI support. But that same demand will fuel a new expectation: transparency about the values embedded in systems, meaningful patient choice, and enforceable protections that keep “care” from becoming merely a more sophisticated form of utilization management.

AI is here, and it is only in its infancy. We are right to ask whether we will ultimately remain the masters of AI … or whether AI will become our overlord. Sadly, I believe it inevitable that we will approach that point in time when we give the command, “Open the pod bay doors, HAL.” And the chilling reply will be, “I’m sorry, Dave. I’m afraid I can’t do that.”