Artificial intelligence is now part of everyday healthcare—from diagnostic tools to scheduling software. But as AI begins to influence clinical decisions, a critical insurance question emerges: Does a typical malpractice policy protect providers when an AI-assisted decision leads to harm?
For brokers working with allied health professionals and small practices, the answer depends on how well existing coverage aligns with modern risk. As AI adoption grows, brokers must begin asking tougher questions about liability, documentation, and policy language that was never written with automation in mind.
Malpractice Coverage in an Automated Landscape
Most malpractice policies are written broadly to cover acts of negligence in the delivery of professional care. That typically includes clinical decisions, even when aided by external tools such as AI software. However, AI complicates that picture in a few ways.
First, errors involving AI often stem from over-reliance or improper use. When a diagnostic algorithm suggests a course of action and the provider follows it without question, courts still hold the provider accountable. Liability is not transferred to the software. Second, software errors—such as bugs, miscalculations, or flawed default settings—may fall into a gray area between clinical judgment and product malfunction. If a claim arises from a purely technical failure, will the malpractice policy respond?
The answer isn’t always clear. While many policies will defend providers whose actions are consistent with the standard of care, some claims involving automation or software can expose limitations in coverage—particularly if the tool in question was not vetted or regulated.
Common Gaps in Coverage
Policy language often predates the AI era. That means exclusions or vague definitions may leave practices exposed. Key areas of concern include:
• Exclusions for harm caused by non-FDA-approved medical devices or technology
• Lack of clarity on whether software failures are covered under professional liability
• Absence of language addressing automated decision-making, telehealth platforms, or third-party AI integrations
Brokers should not assume coverage is comprehensive simply because AI isn’t mentioned. The absence of such language may instead signal an outdated policy framework that hasn’t accounted for emerging risks.
How to Evaluate Policy Fit in an AI Environment
Brokers should proactively review malpractice policies with AI-specific risk in mind. This includes confirming that “professional services” is defined broadly enough to include the use of AI in diagnostic or support roles, and checking whether any exclusions could be interpreted to apply to technology-assisted decisions.
It’s also important to coordinate malpractice coverage with cyber and errors and omissions (E&O) policies. If an AI tool leads to a data breach, a misdiagnosis, or a system failure, a claim may fall between policies if responsibilities are not clearly assigned. In some cases, AI-related harm might require a response from both a malpractice and a cyber policy, particularly when connected devices or third-party platforms are involved.
Here are three essential actions brokers should take when reviewing client policies:
1. Examine definitions of covered services and clarify whether AI-driven activities fall within scope.
2. Identify any exclusions tied to technology, automation, or cyber events that could impact claims.
3. Coordinate with carriers to determine how AI-related incidents would be classified and whether additional endorsements are needed.