A 'doctor' with no pulse
MEDICAL SCIENCE



By Chinmay Chaudhuri

US state pilots autonomous AI prescribing, challenging the ethics, legality and human touch of the medical profession

New Delhi: The practice of medicine may have just crossed a threshold from which there is no easy return. In January, the US state of Utah unveiled a partnership that sounds like science fiction but is now firmly rooted in policy: an AI system designed to prescribe medication without any physician's involvement. Not to assist, not to advise, but to replace, according to an article published online in The Journal of the American Medical Association (JAMA).

The technology, developed by Doctronic, promises to replicate the clinical reasoning of a licensed doctor. It conducts what is described as a “comprehensive medical assessment”, mimicking the judgment calls that traditionally require years of medical training.

Initially deployed for prescription renewals, the system is expected to expand rapidly, with the capacity to prescribe nearly 200 drugs — from antidepressants and statins to hormones and blood thinners.

For a healthcare system strained by physician shortages and rising costs, the proposition is as seductive as it is unsettling. The white coat, long a symbol of human judgment and accountability, may soon have a digital counterpart: one that never tires, never sleeps, and never sees a patient.

Promise of Precision, Risk of Absence

The allure of AI prescribing lies in its efficiency. Medication adherence remains one of the most persistent challenges in US public health. Studies show that an estimated 4% to 31% of patients never fill their first prescription at the pharmacy, and among those who do start a medication, nearly 20% fail to return for a refill when one is needed. The financial toll is staggering: non-optimized medication use costs the United States an estimated $528.4 billion annually.

AI, its proponents argue, could close this gap. By streamlining prescription renewals and reducing human error, it may ensure that patients receive timely, consistent care, say the authors of the article. It could also free physicians to focus on complex diagnoses and the nuanced human interactions that machines cannot replicate, they highlight.

Yet, the very efficiency that makes AI attractive also raises profound concerns. Medicine is not merely a transactional exchange of symptoms and prescriptions. It is a “relationship” through which subtle cues, unspoken anxieties, and incidental findings often shape outcomes. Remove the physician, and those moments of discovery may vanish. A machine may renew a prescription flawlessly, but it may not notice the tremor in a patient’s hand or the hesitation in their voice.


Legal Grey Zone

What makes Utah’s move particularly consequential is not just the technology itself, but the legal vacuum into which it steps. For decades, US law has been unequivocal: prescription drugs must be dispensed under the supervision of a licensed practitioner. AI, by definition, does not meet that standard.

Utah has effectively sidestepped this requirement through a regulatory “sandbox”, temporarily waiving state laws to allow experimental technologies to operate. The federal implications, however, remain unresolved. Under existing statutes, an AI prescriber qualifies as a medical device and should fall under the oversight of the Food and Drug Administration (FDA).

Yet, the company behind the system has not sought FDA approval, arguing that the agency does not regulate the practice of medicine. While technically true, this interpretation overlooks a critical boundary: the FDA does regulate the tools used in medical care, especially those that directly influence treatment decisions. By removing the physician altogether, AI prescribing does not merely assist medical practice; it redefines it.

In the current political climate, federal intervention appears unlikely. A broader push to encourage AI innovation has led to the rollback of regulatory safeguards, creating an environment where technological advancement may outpace legal accountability.

Point of No Return

History offers a cautionary tale. Once a technology becomes embedded in everyday life, regulating it becomes exponentially more difficult. This is the essence of the Collingridge dilemma: early intervention is hampered by uncertainty, while delayed action is constrained by entrenchment.

AI prescribing now stands at that precarious intersection. Its early results, largely produced by company-affiliated studies, claim diagnostic alignment with physicians in 81% of cases and treatment agreement in 99.2%. But these findings remain unverified by independent, peer-reviewed research, the article cautions.

If adoption accelerates before rigorous safety standards are established, the healthcare system may find itself locked into a model that is difficult to reverse. Legal remedies, such as malpractice or product liability claims, offer only limited recourse. Patients harmed by AI decisions would face complex legal hurdles, from proving fault in opaque algorithms to establishing standards of care for machines.

What is unfolding is not merely a technological upgrade; it is a philosophical shift in how society defines care, responsibility, and trust. The question is no longer whether AI can act like a doctor. It is whether we are prepared to accept a doctor that is not human.

For now, the answer is being written not in clinics or courtrooms, but in code and in the quiet decisions of policymakers willing to let the experiment begin.