By: Rebecca L. Rakoski, Esquire
Patrick D. Isbill, Esquire
The warp speed at which the medical profession is attempting to integrate artificial intelligence (AI) into the diagnosis and treatment of patients feels eerily similar to the excitement that preceded the internet and, later, social media. It is a conscious leap forward whose blinding advantages are evident on the surface, with little to no thought given to what lies just a few steps ahead. There is no doubt that the benefits AI can confer on the human condition are difficult to overstate. But with implications for patient safety, fault liability, and even patent originality and creativity, those next steps are beginning to resemble the classic cautionary phrase: sewing the parachute in free fall.
And just what kind of intelligence are doctors relying on? Many legal scholars and thought leaders advance the argument that AI will serve as an aid by reducing the number of decisions that need to be made. Others, in seemingly equal numbers, deride that notion and argue that AI should ideally augment decision making in its entirety rather than segmenting those decisions into discrete parts.
Traditional views of the law will need to evolve with the times, and fast. As any experienced attorney knows, this is undeniably a tall order. The law moves slowly, of course, especially when the subject matter is new. Technology typically rushes ahead with understandable, yet unrestrained, enthusiasm, while regulations and guardrails are hurriedly added much later in its evolutionary development. Lawmakers, and to a large extent the public, should view AI through the prism of fact versus science fiction. In other words, how it is actually making decisions, not strictly what decisions it can potentially make.
Basic Liability Triangle
Any discussion of legal liability for AI in medicine must first acknowledge how expansive the topic is and how underdeveloped the case law remains at this point. At its essence, though, liability for the sake of discussion commonly begins with three (3) components: the doctor, the software developer, and, perhaps somewhat surprisingly, litigant AI.
First, a doctor is tasked with practicing, and more to the point is licensed and certified to practice, medicine. Discussions usually center on how AI can either partially replace that decision making by outright synthesizing a diagnosis, or augment a diagnosis by supplying part of an answer never considered in the first place. This, of course, potentially raises both legal and ethical issues: the duty of care under negligence as it pertains to the use of AI, fault for relying on it, and liability for any resulting injury. While AI may be a tool doctors use, a doctor’s license is still the authority by which he/she can legally use that instrument.
Next, when it comes to software developers and manufacturers, an obvious but equally weighty argument can be made for liability over algorithmic inaccuracy, that is, how the AI was taught to think. Fault could lie with the software company that supplied the knowledge or information, perhaps sourced from a third-party provider, on which a programmer and/or designer relied to create the system. As with a defective product, the law could shield a company from the misuse or unintended use of AI, but not from foreseeable defects in the algorithm that led to injury through a breach of the duty of care. In addition, a host of defenses could counter any such claims, such as assumption of risk, modification, and injury unrelated to the product.
Issues such as third-party exploitation immediately come to mind as well. For example, a third-party criminal actor who intentionally manipulates an AI medical application, leading to serious harm, or even unintentional misuse by a medical professional, could support arguments for reduced liability on behalf of the developer or none at all. More generally, there is the ever-present threat of a digital breach or data incident that penetrates cybersecurity defenses and corrupts data, known or unknown, exposing a company to any one or all of the following: strict liability, medical negligence, products liability, damages over reckless safety protocols, violations of regulatory security policies and procedures, criminal prosecution, and so forth.
Lastly, litigant AI presents a tenuous legal challenge because it is where liability actually rests, yet how that liability can be defined remains unclear. Under their medical licenses, doctors make the call on the medical course of action, but what about how those procedures are performed? Technique, after all, may fall within the province of the AI. Should the doctor be liable for making the decision as a surrogate? The developer/manufacturer for coding or training the AI to use a particular technique or line of thought? Or is a comparative negligence issue set to emerge? And should there be a candid examination of the ethical implications of a non-licensed AI practicing medicine?
AI is by definition the entity taking the action, i.e., the actor, and the law will have to wrestle with the concept of treating it that way. If the doctor holds the decision-making authority to use it, and the software developer decides the means by which those orders are carried out, then by logical extension there is an argument for third-party liability on the part of the AI itself, which is tasked with the further, independent decision of putting those two concepts together in executing the requested action. In light of the concept of litigant AI, it is no understatement to suggest that the degree of AI’s use in medicine could have significant implications for informed patient consent and could well redefine the fundamental tenets of tort law liability when it comes to professional services.
Legal Conundrum
To use AI or not to use AI is not so much a question as it is already the answer. The liability attached to using AI is well debated in many legal circles, but what about a medical professional who decides not to use AI as part of a diagnosis or course of treatment? Two intersecting professional constructs instantly become clear: negligence (legal) and an ethical duty of care and competence (medical). Doctors must not only be competent to provide treatment; they must also keep up to date with the latest research and medical guidelines or risk malpractice.
Thus, being required to act in the best interests of their patients, doctors must likewise afford their patients accurate and timely information about their diagnosis and options for treatment. AI may well have to be included in that equation, by definition no less, or a medical professional may arguably risk violating the oath to act in the patient’s best interest.
Proposed Metrics for AI Liability
From a legal standpoint, domestic regulatory agencies may not be taking definitive action (yet) but are certainly taking notice. The European Union (EU) has, in predictable fashion, put its foot forward on the issue, with the European Parliament adopting the Artificial Intelligence Act (AI Act) on March 13, 2024. According to news accounts, the AI Act could set a standard of sorts for how technology like AI is applied in business and affects the lives of everyday people. Likewise, the European Commission issued two (2) proposed directives in September 2022, i.e., the AI Liability Directive (AILD) and the Product Liability Directive (PLD), that examined the intersection of liability and AI. The Commission has since withdrawn the AILD proposal, but the PLD was adopted in 2024 after a major overhaul and must be transposed into national law by late 2026.
Domestically, the Federal Trade Commission (FTC) issued guidance for businesses in April 2020 on “Using Artificial Intelligence and Algorithms.” It stressed then that “while the sophistication of AI and machine learning technology is new, automated decision-making is not.” Accordingly, and in unequivocal terms, the business guide set forth that “[t]he FTC’s law enforcement actions, studies, and guidance emphasize that the use of AI tools should be transparent, explainable, fair, and empirically sound, while fostering accountability.”
The last element, i.e., accountability, is of particular note given the foregoing discussion of third-party exploitation of AI through lapses in cybersecurity and data integrity practices. The FTC warned operators of an AI algorithm to “ask questions before you use the algorithm,” “protect your algorithm from unauthorized use,” and “consider your accountability mechanism.” All are valuable points leading back to the earlier mandate for the responsible use of AI, potentially reducing, among other things, errors and injury to the patients the algorithm was meant to serve.
With case law still at an early stage and legislators trying to catch up in the United States and worldwide, contracting around liability may be the most realistic near-term solution. Parties could, for example, write into the contract an exoneration from civil liability, set both a liability ceiling and floor, apportion damages based on factors like error or misconduct, define the duty of care and attendant penalties for disregarding those professional standards (recklessly or otherwise negligently), preset degrees of fault based on acts and omissions, and so forth. In that vein, pooling risk under AI liability insurance may also be an avenue not only to mitigate liability but to spread out the costs as well.
For the healthcare industry specifically, a host of issues and legal questions arises when it comes to AI technology. For example, patient notices should at least consider mentioning AI, and the law should likewise examine at least the parameters of patient consent to the use of AI in treatment. More to the point, no one should overlook or misconstrue the benefits to be gained; raising diagnostic accuracy and predicting medical events are tremendous advances for health and welfare. But in focusing on the end, we must not disregard the means we employ to get there.
Conclusion
Legal recourse is, after all, sometimes the only way to achieve meaningful reform of certain policies, secure higher ethical standards governing the use and development of emerging technologies like AI, compensate a party injured through negligence in the use of such technology, and challenge established but nevertheless unjust procedures. As the functionality of AI grows, one can easily see ethical questions arising in the area of medicine and professional conduct, redefining the meaning of a doctor’s duty of care while remaining moored to the traditional principle of doing no harm.
That is no doubt a high bar when it comes to setting boundaries, but one that can be met with a proactive mindset toward the rising legal challenges of digital technologies like AI. Because ready or not, AI is here to stay. Noble intentions must not eclipse the pitfalls on the road ahead, where carefully thought-out legal guardrails put in place beforehand could head off consequences stemming from liability, unintended or otherwise.
Reprinted with permission from the September 26, 2025, issue of the New York Law Journal. Further duplication without permission is prohibited. All rights reserved. © 2025 ALM Media Properties, LLC.
This article does not constitute legal advice or create an attorney-client relationship. The information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.
