HOW TO FIGHT DENIAL OF YOUR HEALTH INSURANCE CLAIM

An insurance company denying your legitimate, desperately needed health insurance claim has become all too common, an ordinary way of life … and a large profit center for those insurance companies.

Finally, one attorney, Brian Hufford, has dedicated his practice to addressing this widespread problem. But first, let’s look at the alarming statistics.

In 2023, insurers on the HealthCare.gov marketplace denied an average of 19% of in-network claims and 37% of out-of-network claims. Denial rates varied widely by insurer, ranging from as low as 1% to over 50%.

Surprisingly, the most common reason for denial isn’t related to medical necessity at all. A full 34% of denials fall under the nebulous category of “Other”—an unspecified catch-all that gives insurers maximum flexibility and patients minimum clarity. When these vague denials are appealed, they’re overturned approximately 55% of the time, suggesting that the majority have no solid justification.

Administrative issues account for another 18% of denials. These include coding errors, missing information, or duplicate claims—technical issues having nothing to do with whether the care was appropriate or covered under the policy. These denials have the highest overturn rate at 78%, as they’re often simple misunderstandings or clerical errors that can be easily corrected.

Claims categorized as “service not covered” make up 16% of denials. While these have a lower overturn rate of about 35%, successful appeals often demonstrate that the service actually does fall under covered benefits when policy language is properly interpreted or when the medical necessity is clearly established.

Prior authorization issues cause 9% of denials, typically because the patient received care without first obtaining the insurer’s permission. These have a 65% overturn rate when appealed, particularly when the care was urgently needed or when the provider can demonstrate that they attempted to secure authorization.

Perhaps most concerning are the “not medically necessary” denials, which represent 6% of cases. These denials essentially second-guess your doctor’s judgment about what care you need. Yet when patients and their doctors challenge these determinations, they succeed approximately 70% of the time—an alarming discrepancy that raises questions about how these decisions are made in the first place.
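
Taken together, these figures imply that a large share of all denials would not survive a challenge. As a rough illustration, the Python sketch below combines the category shares and overturn rates quoted above; it assumes, purely for the sake of arithmetic, that the overturn rates seen among appealed claims would hold for every denial in each category.

```python
# Illustrative only: shares and overturn rates are the figures quoted above,
# and we assume appealed-claim overturn rates generalize to all denials.
categories = {
    # name: (share of all denials, overturn rate on appeal)
    "Other":                   (0.34, 0.55),
    "Administrative":          (0.18, 0.78),
    "Service not covered":     (0.16, 0.35),
    "Prior authorization":     (0.09, 0.65),
    "Not medically necessary": (0.06, 0.70),
}

covered = sum(share for share, _ in categories.values())                # 0.83
overturned = sum(share * rate for share, rate in categories.values())

print(f"Share of all denials in these five categories: {covered:.0%}")
print(f"Expected overturns if every denial were appealed: {overturned:.0%} of all denials,")
print(f"or {overturned / covered:.0%} of denials within these categories")
```

Under that assumption, roughly 48% of all denials, and about 58% of those in the five named categories, would be reversed if every one were appealed.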

Despite these high denial rates, fewer than 1% of denied claims are ever appealed by consumers. A survey found that 85% of patients never file a formal appeal, often due to a lack of awareness of their appeal rights or the complexity of the process.

When consumers and providers do appeal, they have a strong chance of success. According to a recent KFF survey, patients who took the time to appeal their denials experienced a 44% success rate with initial internal appeals—meaning nearly half of all challenges succeeded in the first round. For those whose internal appeals were rejected and who proceeded to external review, an additional 27% succeeded at that level.
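
Those two stages compound. The sketch below works the arithmetic, assuming for illustration that everyone whose internal appeal fails goes on to external review:

```python
internal_success = 0.44   # success rate of initial internal appeals (KFF)
external_success = 0.27   # success rate at external review, among those who lost internally

# Assumption (ours): every rejected internal appellant proceeds to external review.
overall = internal_success + (1 - internal_success) * external_success
print(f"Cumulative success across both stages: {overall:.0%}")  # ~59%
```

On that assumption, nearly six in ten appellants who see the process through ultimately prevail.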

When healthcare providers manage the appeal process, over 54% of initially denied claims are ultimately paid after multiple rounds of review. Some sources suggest that up to 80% of appeals can be successful when pursued effectively.

In summary: although claim denials are common, and although patients and providers who navigate the appeals process often succeed in getting denials reversed, the vast majority of denials go unchallenged.

The 2023 KFF Survey of Consumer Experiences with Health Insurance found that 58% of insured adults said they have experienced a problem using their health insurance, including denied claims. Four in ten (39%) of those who reported having trouble paying medical bills said that denied claims contributed to their problem.

Each denial costs medical practices, on average, approximately $43 to process, creating over $19 billion in administrative waste annually across the healthcare system. Small practices often spend more than 12 hours weekly wrestling with insurance companies over denied claims.
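
Dividing those two quoted figures gives a rough sense of the denial volume they imply; the resulting count is our back-of-envelope inference, not a published statistic.

```python
cost_per_denial = 43            # dollars per denial, as quoted above
annual_waste = 19_000_000_000   # dollars of administrative waste, as quoted above

implied_denials = annual_waste / cost_per_denial
print(f"Implied denials processed per year: {implied_denials:,.0f}")  # ~442 million
```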

By making the process difficult and opaque, insurers ensure most people simply give up and pay out of pocket or, worse, forgo necessary medical care altogether. The financial result is billions in unpaid claims that boost insurance company profits while shifting costs to patients.

And at least one man, one attorney, has had enough. Brian Hufford was one of the lead attorneys in the Wit v. UBH case, still pending in California.

Briefly, David Wit, along with other insureds, brought a class action lawsuit challenging United Behavioral Health’s (UBH) use of flawed, financially motivated internal guidelines to deny coverage for mental health and substance abuse treatment, rather than applying generally accepted standards of care. After a class action trial, the district court found that UBH violated the terms of its health insurance policies and breached its fiduciary duties under ERISA, ruling that UBH’s internal guidelines were defective and more restrictive than generally accepted standards of care. The Court of Appeals reversed that decision on the benefit claims and dismissed those class claims, but remanded the case to the district court to determine whether the fiduciary duty findings should stand. On reconsideration, the district court again found that UBH breached its fiduciary duties. The case is ongoing.

Although that case is still active, Brian left his firm. After a career founding and running health insurance dispute practices at private firms, representing patients and clinicians against insurance companies, he opened his own practice on July 1, 2025, to focus on public policy and advocacy.

His primary work is to expand help for people appealing health insurance denials. As the statistics show, this is a service that is wholly lacking in our current system. To that end, Brian is working with law schools to provide pro bono opportunities for law students, who assist with health insurance appeals under his supervision. He is coordinating this effort through the People’s Action nonprofit, which is pursuing its Care Over Cost campaign, and Brian serves as legal advisor to its National Appeals Team. Brian is also working as Senior Legal Advisor to Claimable, Inc., a start-up that is using AI to systematize health insurance appeals (www.getclaimable.com/).

If you know people who have been subjected to denials and need help with appeals, encourage them to contact Brian (his help is pro bono through the law school project). Depending on the number of patients who reach out, he may also connect people with Claimable for assistance.

Brian’s website, linked below, has a form patients can fill out. The form is automatically forwarded to Brian, who will then follow up. You can contact Brian with any questions you may have.

This crucial resource for our families could very well mean the difference between you or your loved one getting the necessary care and suffering at the hands of an unjust system.

Embrace a better future.

For more information, go to:

About Us

LIVE OR DIE … Ai DECIDES

Your 18-year-old daughter, who is struggling with severe anorexia, desperately needs a higher level of care. Biologically, her organs are failing. You make a claim with your health insurance company. And you receive a denial.

You quickly research and discover that an Ai program utilized by the insurance company made the decision to deny the claim, and with it, the decision to deny saving your daughter’s life.

Welcome to the world in which we live. Where Ai programs may be making life and death decisions about your loved ones. That is the very harsh reality. So, let’s explore that reality.

First, what is “artificial intelligence?” The term itself is so vague as to be mystifying. What makes it artificial? The fact that human beings invented it? That it is silicon-based instead of carbon-based? Is the programmed intelligence, which is designed to learn at a rate far faster than humans can possibly comprehend, deemed artificial because it lacks a sentient existence?

Is Ai artificial because whereas it may “learn,” it does not experience the subtle nuances and life experiences which make us all unique? Does Ai have a soul? For that matter, do we?

Regardless, with Ai still at an early stage of development, and with its interaction with humans deepening, we must find ways to build guardrails so that Ai is never in a position to singularly make life and death decisions. These are decisions health insurance companies routinely make when deciding whether to pay, or not pay, for life-saving surgeries or treatment. Or is it already too late?

Imagine, if you will, an Ai program being utilized, without human interaction, to review and decide a claim, or an appeal of a claim, for a higher level of care, for necessary treatment, or for a life-saving procedure. An Ai program with no human experiences, no ethics, no soul, no subtlety, no morality. To leave our very existence in the hands of a machine, a machine that cannot love, cannot experience sorrow, or joy, or happiness, or despair. And yet … that is happening. Today.

In 2020, UnitedHealth Group division Optum acquired naviHealth and its algorithm for predicting care, called nH Predict, which UnitedHealth uses and contracts out to other insurers, including Humana. Multiple industry sources estimate that Optum paid at least $1.1 billion; when debt and related financial structuring are considered, the purchase price may have been as high as $2.5 billion. When asked by the Guardian, a spokesperson for UnitedHealth Group denied that the algorithm is used to make coverage decisions. [Like when UBH denied it ran its guidelines through its accounting and finance departments?]

UnitedHealth, Humana, and Cigna are facing class action lawsuits alleging that the insurers unethically relied upon Ai-driven algorithms to deny lifesaving care.

One of the lawsuits alleges that Cigna denied more than 300,000 claims in a two-month period, with its reviewing physicians spending an average of approximately 1.2 seconds on each claim. Such a pace, the lawsuit alleges, is possible only with the aid of algorithms.
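
To put that pace in perspective, here is the raw arithmetic, using only the figures alleged in the lawsuit:

```python
claims = 300_000          # denials alleged over roughly two months
seconds_per_claim = 1.2   # average physician review time alleged

total_hours = claims * seconds_per_claim / 3600
print(f"Total physician review time: {total_hours:.0f} hours")  # 100 hours
```

In other words, the allegation amounts to 300,000 coverage decisions receiving a combined one hundred hours of physician attention.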

The lawsuit against UnitedHealth also alleged that nH Predict had a 90% error rate, meaning nine out of ten denials were reversed upon appeal, but that vanishingly few patients (about 0.2%) appeal their denied claims, leading them to pay bills out of pocket or forgo necessary treatment.
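
The combination of a high reversal rate and a vanishingly small appeal rate is what makes such a practice pay. A sketch, using a hypothetical pool of one million denials together with the rates alleged in the lawsuit:

```python
denials = 1_000_000    # hypothetical pool, for illustration only
appeal_rate = 0.002    # ~0.2% of denied claims are appealed (alleged)
reversal_rate = 0.90   # 90% of appealed denials are reversed (alleged)

appealed = denials * appeal_rate
corrected = appealed * reversal_rate
print(f"Appealed: {appealed:,.0f}")                                     # 2,000
print(f"Reversed on appeal: {corrected:,.0f}")                          # 1,800
print(f"Denials that are never challenged: {denials - appealed:,.0f}")  # 998,000
```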

Appealing denied claims is big business. The US Centers for Medicare and Medicaid Services estimate that when insureds appeal initial denials, administrative costs for insurance providers exceed $7.2 billion annually.

According to a United States Senate report issued in October 2024, UnitedHealthcare, CVS, and Humana, the three largest providers of Medicare Advantage, together provide almost 60% of all Medicare Advantage coverage, yet reject prior authorization claims at higher rates through the use of technology and automation. That report can be found here:

To support the implementation of Ai, health insurance companies argue that Ai programs streamline claims processing, more effectively flag fraud, and promise greater speed, efficiency, and cost savings. They claim that by automating routine claims, Ai frees up human reviewers to focus on complex or borderline cases that require medical judgment and nuance. (For that matter, don’t all claims require medical judgment?)

Despite its alleged advantages in claims processing, Ai has faced fierce criticism, especially when its role extends to denying coverage or appeals for essential care. Ai is not immune to flaws: its decisions depend on data quality and programming, both of which can perpetuate mistakes or systemic biases. Garbage in, garbage out.

Many Ai systems operate opaquely, leaving patients, providers, and even insurers unsure how specific decisions are made. This undermines trust and impedes meaningful appeals.

Numerous lawsuits allege that Ai tools prioritize cost-saving over medical necessity. In some cases, Ai has overridden physician recommendations, resulting in denials of rehabilitation, mental health services, or life-saving treatments.

There is a widespread perception—and often a harsh reality—that health insurers prioritize profits above the needs of their insureds. Ai tools, by automating denials or aggressively limiting coverage, can exacerbate this distrust, especially when decisions feel impersonal or unjust.

Critics argue that Ai systems are often deployed as “rubber stamps,” with little or no meaningful physician review—contravening legal and ethical obligations.

Meanwhile, states like California have moved to ban Ai-only coverage denials, signaling a wave of regulatory intervention.

As for those health insurance companies that utilize Ai alone to decide claims or appeals, the major issues are these:

Risk of Profit-Driven Bias: Ai tools influenced by financial priorities may embed cost-saving incentives that override medical necessity, echoing problems revealed in the Wit v. UBH case.

Lack of Clinical Nuance: Ai lacks the ability to fully understand complex medical contexts or patient histories that human clinicians evaluate.

Transparency and Accountability: Patients have a right to clear explanations and meaningful appeals, which Ai-alone systems often fail to provide.

But that is where we are. Ai is being utilized by insurance companies to decide claims and appeals. The insurance companies may deny this fact, but it is a reality, especially since the widespread use of Ai in denying claims and appeals results in much greater profits for these companies.

To counter this reality, the future must be shaped by the following:

Stronger Regulatory Frameworks

States and potentially federal regulators are developing rules to ensure Ai complements—not replaces—human medical judgment. Requirements for physician involvement, transparency, and appeal rights are expected to expand.

Increased Legal Scrutiny

As lawsuits proceed, courts will clarify the legal boundaries of Ai’s role in coverage decisions, particularly under ERISA, Medicare Advantage rules, and consumer protection laws.

Pressure for Transparency and Explainability

Insurers may face mounting demands to disclose how Ai tools function, how decisions are made, and how patients can challenge automated denials.

Smarter, More Ethical Ai Development

Future Ai systems may incorporate safeguards to avoid wrongful denials, improve alignment with medical standards, and enhance explainability.

Ai’s exploding involvement in, or interference with, our lives will only increase. That is inevitable.

There is the potential that Ai can make health insurance claims processing faster, fairer, and more efficient—but only if deployed responsibly. It must address not only human fallibility but also the systemic distrust stemming from the reality that insurers prioritize profits over patients. Lessons from Wit v. UBH remind us that financial influence over clinical decisions can have devastating consequences, a cautionary tale for Ai implementation.

As courts, lawmakers, and the public demand accountability, the health insurance industry faces a pivotal choice: embrace Ai as a tool to support—not supplant—human expertise, or risk eroding trust and facing costly legal consequences.

The future of Ai in health insurance is not just a technological issue—it is a legal, ethical, and societal issue. Right now, the life of your loved one may very well depend on a machine. On Ai. A lifeless, soulless computer program devoid of all emotion, mercy and humanity.

That is our reality right now. Allow yourself to contemplate that reality and perhaps yes, be afraid. For our future depends on wisdom far greater than humanity has ever demonstrated. Our health depends on it. Our very lives depend on it.