
New Bill Would Let FDA-Approved AI Handle Diagnoses and Prescriptions—Could Algorithms Become Your First-Line Doctor?

What happens when your next prescription is written by an algorithm instead of a human doctor? That’s not sci-fi anymore—it’s the premise of a new bill introduced by U.S. Congressman David Schweikert that would authorize FDA-cleared AI systems to diagnose routine conditions, manage prescriptions, and normalize data from wearables like smartwatches in day-to-day care. Depending on your perspective, that sounds like a revolution, a risk, or both.

In this deep dive, we’ll unpack what the bill says, why it’s surfacing now, the guardrails it promises, the pitfalls critics fear, and how it could reshape healthcare for patients, clinicians, and businesses. If AI really does become a frontline provider for routine medicine, what rights will you have? What’s the path to safety? And who’s accountable if something goes wrong?

Let’s dig in.

The Bill at a Glance: What’s Actually on the Table

According to the announcement from Rep. David Schweikert’s office, the proposed legislation would greenlight AI systems—specifically FDA-approved large language models (LLMs) and other machine learning tools—to perform a suite of routine medical tasks now handled by clinicians. That includes diagnosing common conditions, issuing prescriptions within defined risk thresholds, and ingesting real-time data from consumer wearables and medical-grade devices to inform care decisions. The stated goals: reduce costs, relieve physician shortages, and deliver timely care at scale.

Key provisions described in the release include:

  • Authorization for FDA-cleared AI models to diagnose and prescribe for defined, lower-risk use cases
  • Requirements for explainable AI outputs and clear audit trails
  • Mandatory human oversight or escalation for high-risk cases
  • Normalization and integration of wearable device data for continuous monitoring
  • Economic arguments around accelerating telemedicine, enabling personalized treatment plans, and delivering savings, especially in underserved areas

Source: Schweikert.house.gov – Bill announcement (Feb. 18, 2025)

Supporters point to accelerating AI performance in pattern recognition—areas like radiology and dermatology—while critics question accountability, bias, and the limits of “explainability” when lives are at stake.

Why Now? The Tech Is Ready(ish), and the System Is Strained

A few converging forces make this proposal feel timely—even inevitable.

  • Physician shortages and burnout: The U.S. faces a growing shortfall of doctors, especially in primary care and rural communities. The AAMC projects a significant gap over the coming decade, which compounds wait times and care delays. See: AAMC physician shortage projections.
  • Maturing AI performance: In imaging-heavy specialties, AI has reached—or in some studies exceeded—specialist-level performance for defined tasks. For example:
    – Dermatology classification with deep learning achieved dermatologist-level performance in research settings (Nature, 2017).
    – AI systems have reduced false positives/negatives in breast cancer screening in retrospective analyses (Nature, 2020).
  • Ubiquity of wearables: Smartwatches and fitness trackers are mainstream, creating oceans of continuous biometric data. While not all wearables are medical-grade, their potential for early warnings is huge. Background: Pew Research on wearables adoption.
  • Computing power and tooling: Industrial-scale infrastructure from players like NVIDIA is supercharging model training and inference for healthcare workloads (NVIDIA Healthcare; NVIDIA Clara).
  • Regulatory scaffolding: The FDA’s Digital Health Center of Excellence, guidance around Software as a Medical Device (SaMD), and ongoing work on AI/ML-enabled devices lay a foundation for safe deployment (FDA SaMD; AI/ML-enabled medical devices). ONC’s HTI-1 rule adds algorithmic transparency requirements for certified EHR tech (ONC HTI-1). And the NIST AI Risk Management Framework gives implementers a shared vocabulary for trustworthy AI.

Put simply: We have a care access crisis and a maturing AI toolkit. The bill tries to connect those dots—fast.

Could an AI Legally Diagnose and Prescribe? How That Would Work

In the U.S., diagnosing and prescribing are typically the domain of licensed clinicians, governed by state scope-of-practice laws and federal rules for controlled substances. For AI to shoulder parts of that role, two things need to align:

1) FDA classification as a medical device

  • Clinical AI tools that provide diagnostic or treatment recommendations generally fall under SaMD. They need to pass through FDA pathways like 510(k), De Novo, or PMA, with evidence of safety and effectiveness for specific indications.
  • AI systems can be “locked” (fixed function) or “adaptive” (learning after deployment). The FDA is exploring frameworks like predetermined change control plans for AI/ML devices to update safely without full re-clearance each time (FDA AI/ML devices).

2) Legal authorization to act on output

  • Even if an AI is FDA-cleared, existing practice typically treats it as decision support—clinicians remain the prescribing authority. This bill would carve out a new pathway so AI could directly trigger prescriptions for specified, lower-risk scenarios, with mandatory human oversight for high-risk or ambiguous cases.
  • E-prescribing systems would still interface with pharmacy standards (e.g., NCPDP SCRIPT), payer checks, and, where relevant, prescription drug monitoring programs. Context: NCPDP SCRIPT standard; DEA on e-prescribing of controlled substances.

The upshot: The FDA would vet the tool; the statute would define when the tool’s output can trigger care without a human gatekeeper—and when it must not.
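To make that split concrete, here is a minimal sketch of the kind of gatekeeping logic such a statute would define. Everything in it is a hypothetical illustration, not language from the bill: the indication list, the confidence floor, and all field names are invented for the example.

```python
from dataclasses import dataclass

# Hypothetical gating logic: the AI's output may trigger care directly only
# when the indication is inside a defined lower-risk scope AND the model is
# confident; everything else escalates to a human clinician.
# All names and thresholds below are invented for illustration.

LOW_RISK_INDICATIONS = {"uncomplicated_uti", "seasonal_allergies", "stable_hypertension_refill"}
CONFIDENCE_FLOOR = 0.90  # a real statute/FDA label would define this, not us

@dataclass
class AiRecommendation:
    indication: str       # the condition the model believes it is treating
    confidence: float     # the model's calibrated probability for that call
    is_controlled_substance: bool

def route(rec: AiRecommendation) -> str:
    """Return 'auto' if the AI may act without a human gatekeeper, else 'escalate'."""
    if rec.is_controlled_substance:
        return "escalate"  # hard stop: controlled substances stay with humans
    if rec.indication not in LOW_RISK_INDICATIONS:
        return "escalate"  # outside the defined lower-risk scope
    if rec.confidence < CONFIDENCE_FLOOR:
        return "escalate"  # ambiguous cases go to a clinician
    return "auto"
```

The point of the sketch is that “when the tool may act” is ordinary, auditable policy code layered on top of the model—the statute names the scope, the FDA vets the tool, and the deployment enforces the boundary.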

What Safety Guardrails Are Promised

Per the announcement, the bill calls for safety-by-design. In practical terms, that looks like:

  • Explainable outputs: Models must show their work—e.g., evidence citations, uncertainty estimates, and rationale pathways—so humans can audit and challenge decisions. Background: DARPA XAI.
  • Risk tiers and escalation: Routine, lower-risk diagnoses and refills might be automated; anything flagged as high-risk, rare, or ambiguous must escalate to a clinician.
  • Audit trails and logging: Every AI encounter, recommendation, override, and data source must be traceable for accountability and continuous improvement.
  • Wearable data normalization: Incoming signals from consumer wearables and medical devices should be harmonized, likely via standards such as HL7 FHIR, to reduce noise and false alerts (HL7 FHIR overview).
  • Post-market surveillance: Ongoing performance monitoring is crucial to catch model drift and emerging failure modes, ideally using real-world evidence frameworks (FDA on real-world evidence).
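The “wearable data normalization” guardrail is easiest to see in code. Below is a minimal, hypothetical sketch that maps two invented vendor payload shapes onto a FHIR-style heart-rate Observation (LOINC 8867-4, UCUM unit "/min"); a real integration would use a full FHIR library, validation, and many more fields.

```python
# Hypothetical wearable-data normalization: two vendors report heart rate in
# different shapes; both are mapped onto one minimal FHIR-style Observation
# so downstream logic sees a single schema. Vendor payload shapes are invented.

def normalize_heart_rate(vendor: str, payload: dict) -> dict:
    if vendor == "vendor_a":            # e.g. {"hr_bpm": 72, "ts": "2025-02-18T10:00:00Z"}
        bpm, ts = payload["hr_bpm"], payload["ts"]
    elif vendor == "vendor_b":          # e.g. {"heartRate": {"value": 72}, "time": "..."}
        bpm, ts = payload["heartRate"]["value"], payload["time"]
    else:
        raise ValueError(f"unknown vendor: {vendor}")
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org", "code": "8867-4",
                             "display": "Heart rate"}]},
        "effectiveDateTime": ts,
        "valueQuantity": {"value": float(bpm), "unit": "beats/min",
                          "system": "http://unitsofmeasure.org", "code": "/min"},
    }
```

Once every feed lands in one schema, the downstream risk tiers, audit trails, and drift monitoring all have a single surface to work against—which is the whole argument for normalization.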

These concepts align with global best practices like the WHO’s guidance for AI in health regulation (WHO regulatory considerations) and the NIST AI RMF.

The Biggest Questions: Accountability, Bias, and Privacy

Here’s where the rubber meets the road.

  • Who’s liable when AI gets it wrong?
    – Product liability: Manufacturers could face claims if devices are defectively designed, labeled, or tested.
    – Malpractice: Health systems or supervising clinicians could share responsibility if oversight is inadequate or workflows are unsafe.
    – Regulators and insurers will need clearer fault lines. Expect hybrid models of accountability rather than a single “who pays” answer.
  • Bias and equity
    – Models can underperform on populations underrepresented in training data, compounding disparities. Transparency about datasets, subgroup performance, and external validation is essential.
    – Look to frameworks like ONC’s decision support transparency requirements and WHO guidance to raise the bar on fairness and explainability (ONC HTI-1; WHO ethics & governance).
  • Privacy and cybersecurity
    – AI in clinical care must operate under HIPAA and state privacy rules, with robust safeguards for data in motion and at rest (HHS HIPAA).
    – Wearables may fall outside HIPAA if data isn’t held by a covered entity. That creates gray zones for consent and secondary use—an area ripe for abuse without strong controls and FTC oversight (FTC: Keep AI claims in check).
    – Healthcare is a top target for cyberattacks. Programs like HHS 405(d) outline best practices for defending clinical systems (HHS 405(d) cybersecurity practices).

The Case for AI as First-Line for Routine Care

Done right, there are compelling advantages:

  • Faster access, fewer bottlenecks: 24/7 triage and refills for routine issues reduce wait times and free clinicians for complex cases.
  • Continuous monitoring: Wearables and home devices can catch deteriorations early—hypertension spikes, irregular rhythms, COPD exacerbations—prompting timely interventions.
  • Burnout relief: Automating documentation, prior auth prep, and protocol-driven care can lift cognitive load and restore face time for clinician-patient relationships.
  • Cost efficiency: Automation at the edge (home, mobile) can reduce unnecessary ER visits, readmissions, and specialty referrals.
  • Equity upside: If deployed thoughtfully—e.g., language support, low-bandwidth modes, community partnerships—AI can expand access in rural and underserved settings.

The Case Against: Real Risks and Failure Modes

Skeptics aren’t wrong. Pitfalls include:

  • Misdiagnosis and overconfidence: LLMs can generate fluent errors (“hallucinations”). Even high-performing models have blind spots; false reassurance can be dangerous.
  • Poor calibration: Models may be overconfident on out-of-distribution cases or rare diseases. Without robust uncertainty signaling, automation bias creeps in.
  • Alert fatigue: Noisy wearable feeds can overwhelm systems and patients, leading to ignored warnings.
  • Model drift and versioning chaos: Updating models without tight change control can break safety in subtle ways—hence the FDA’s focus on AI/ML device governance.
  • Security and adversarial inputs: Malicious or simply odd inputs can produce unsafe outputs. See: NIST on adversarial ML.
  • Fragmented integration: If AI decisions don’t round-trip cleanly through the EHR, pharmacy systems, and payer checks, errors multiply.
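One of these failure modes, alert fatigue, has a well-known class of mitigations. The sketch below shows one of them—requiring several consecutive out-of-range readings before raising an alert, so a single noisy wearable sample doesn’t page anyone. The thresholds and window size are invented for illustration, not clinical guidance.

```python
from collections import deque

# A minimal debounce against alert fatigue: only alert when the last N
# readings are ALL out of range. One noisy spike never fires an alert.
# Thresholds and window size are illustrative, not clinical guidance.

class DebouncedAlert:
    def __init__(self, low: float, high: float, consecutive: int = 3):
        self.low, self.high = low, high
        self.window = deque(maxlen=consecutive)  # rolling abnormality flags

    def ingest(self, value: float) -> bool:
        """Feed one reading; return True only when the whole window is abnormal."""
        self.window.append(value < self.low or value > self.high)
        return len(self.window) == self.window.maxlen and all(self.window)

# Systolic blood pressure, say: flag only after three consecutive high readings.
monitor = DebouncedAlert(low=90, high=140, consecutive=3)
```

Real systems layer on more (trend analysis, per-patient baselines, suppression windows), but the design principle is the same: the cost of a false alarm is paid in attention, so the pipeline—not the patient—should absorb the noise.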

In short: AI can expand access and safety—or amplify failure. Guardrails aren’t optional.

What This Means for Key Stakeholders

  • Patients
    – Expect more AI touchpoints for triage, refills, and monitoring. You should have clear disclosure, opt-out options where feasible, and an easy path to a human second opinion.
    – You’ll want transparency: What data is used? How accurate is the tool for people like you? What happens if it’s wrong?
  • Clinicians
    – Roles may shift from primary diagnostician to supervisor of AI-driven workflows for routine care. That raises training needs around oversight, bias detection, prompt engineering, and escalation.
    – Documentation and billing will evolve as CMS and payers define reimbursement for AI-assisted services (e.g., remote physiologic monitoring). Reference: AMA on RPM codes.
  • Health systems and startups
    – Build a compliance stack: SaMD classification strategy, clinical validation, human-factors testing, auditability, and incident response.
    – Interoperability is non-negotiable: HL7 FHIR APIs, patient access under the 21st Century Cures Act, and TEFCA connections for network exchange (Cures Act info blocking; TEFCA).
  • Payers
    – If outcomes improve and costs drop, expect rapid coverage shifts and value-based incentives for AI-enabled pathways. Conversely, unvalidated tools may face denials.
  • Regulators
    – The volume and complexity of AI submissions will surge. Expect more sandboxes, harmonized standards, and cross-agency coordination (FDA, ONC, FTC, DEA, HHS OCR).

How to Evaluate an “AI Doctor” (Before You Trust It)

Use this practical checklist:

  • Regulatory status: Is it explicitly cleared/approved by the FDA for the claimed indication and population?
  • Evidence quality: Peer-reviewed clinical validation? External validation across diverse subgroups?
  • Transparency: Does it provide rationale, citations, and uncertainty scores you can understand?
  • Safety rails: Clear escalation triggers to a human? Hard stops for high-risk meds and diagnoses?
  • Data governance: HIPAA-compliant (where applicable), encryption, access controls, and a plain-language privacy policy.
  • Drift monitoring: Who watches performance over time? Are updates documented and versioned?
  • Accountability: Named clinical sponsor? Incident reporting process? Patient recourse?
  • Interoperability: FHIR-native integration with your EHR, pharmacy, and payer workflows.
  • Scope clarity: Does it clearly say what it can’t do?

Bonus: Look for “model cards” or equivalent transparency reports describing training data, limitations, and known risks (Model Cards concept).
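To show what that transparency report might contain, here is a hypothetical model-card-style record sketched as a plain dictionary, plus a tiny checklist helper. The tool name, numbers, and field names are all invented; they follow the spirit of the Model Cards concept cited above, not any real product or required schema.

```python
# A hypothetical "model card"-style transparency record for an AI diagnostic
# tool. Every value below is invented for illustration.

model_card = {
    "name": "ExampleDerm-Triage",            # hypothetical tool name
    "intended_use": "Triage of common skin lesions in adults; not for melanoma rule-out",
    "regulatory_status": "Illustrative only; a real card would cite the FDA clearance",
    "training_data": "De-identified dermoscopy images; sources and date ranges listed here",
    "subgroup_performance": {                # invented numbers for illustration
        "overall_sensitivity": 0.91,
        "skin_type_V_VI_sensitivity": 0.84,  # surface gaps like this explicitly
    },
    "known_limitations": ["rare lesion types", "poor lighting", "pediatric skin"],
    "escalation_policy": "Any malignancy-risk flag routes to a dermatologist",
    "version": "1.2.0",
}

def missing_fields(card: dict) -> list:
    """Checklist helper: report which transparency fields a card leaves empty."""
    required = ["intended_use", "regulatory_status", "training_data",
                "subgroup_performance", "known_limitations", "escalation_policy"]
    return [f for f in required if not card.get(f)]
```

If a vendor can’t populate fields like these, that absence is itself a checklist answer.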

What Care Could Be Safely Automated—And What Shouldn’t

Likely candidates for early automation

  • Dermatology triage for common lesions (with rapid escalation for malignancy risk)
  • Imaging pre-reads and prioritization (radiology, mammography) with final human sign-off
  • Protocol-based medication refills and dose titration for stable chronic diseases
  • 24/7 symptom triage and routing, including after-hours coverage
  • Documentation assistants and prior authorization summarization
  • Remote monitoring alerts for hypertension, diabetes, heart failure, COPD

Use cases that demand human judgment

  • Complex, multisystem differentials (e.g., atypical chest pain, sepsis)
  • Oncology diagnosis and treatment planning
  • Anesthesia and perioperative decision-making
  • Pediatrics with rare disease presentations
  • Mental health crises and nuanced therapeutic relationships
  • Initiating or managing high-risk and controlled substances

Automation should be a force multiplier—not a replacement—where stakes are highest.

The Legislative Path: What Has to Happen Next

This is a proposal, not a law. Expect:

  • Committee hearings, markups, and potential revisions, especially around liability and patient rights
  • Negotiations with medical boards, state governments (scope-of-practice), and professional societies
  • Input from patient advocates on consent, explainability, and opt-out mechanisms
  • Intense lobbying from health tech, payers, and provider groups
  • Implementation details delegated to agencies (FDA, ONC, HHS) if it passes

Also watch the states. Even if federal law sets a baseline, state medical boards and privacy statutes can shape what’s allowed in practice.

How Health Tech and Provider Organizations Can Prepare Now

  • Design for regulation from day one: Map your product to SaMD classifications, validation needs, and post-market monitoring.
  • Build explainability and human-in-the-loop into the core product, not as a patch.
  • Adopt common risk frameworks and quality systems (NIST AI RMF, ISO, GxP where applicable).
  • Invest in real-world evidence generation and prospective studies with diverse populations.
  • Harden security; rehearse incident response; plan for adversarial testing.
  • Integrate cleanly via FHIR; ensure audit logs persist in the clinical record.
  • Engage clinicians as co-designers; train for oversight and escalation.
  • Avoid hype. The FTC is watching AI marketing claims (FTC guidance).

The Economic Angle: Where Savings Might Show Up

  • Reduced labor time for documentation, triage, and routine management
  • Fewer avoidable ED visits through earlier intervention
  • Lower readmissions via continuous monitoring and timely nudges
  • Better formulary alignment and medication adherence
  • Scalable after-hours coverage without staffing spikes

Savings aren’t automatic. They require tight integration, robust guardrails, and clinician buy-in. But the potential is real if the system avoids simply shifting work to patients or creating new administrative burdens.

Bottom Line: A Bold Bet on Automation—with Human Failsafes

The Schweikert bill frames a future where AI is a frontline provider for routine care: diagnosing common conditions, refilling meds, and watching over your vitals from your wrist. That could open access, save money, and reduce burnout. It could also go sideways—fast—without rigorous evidence, clear accountability, and real transparency.

The smart path forward isn’t “AI replaces doctors.” It’s “AI handles the repeatable and the routine, while humans handle the complex, the uncertain, and the deeply human.” If lawmakers can encode that balance—with teeth—this could be one of the most consequential healthcare reforms of the decade.

Frequently Asked Questions

Q: Does this bill mean AI will replace my doctor? – Not wholesale. The bill targets routine, lower-risk tasks with mandated human oversight for complex or high-risk cases. Think “AI as frontline for simple stuff; humans for the rest.”

Q: Is it safe to let AI diagnose and prescribe? – Safety depends on tight FDA regulation, strong evidence, and clear escalation rules. Many AI tools already assist clinicians safely in narrow domains (e.g., imaging). Autonomous actions require extra scrutiny.

Q: Who’s liable if the AI makes a mistake? – Expect a mix: product liability for manufacturers, malpractice exposure for providers or health systems overseeing workflows, and payer/regulatory implications. The bill will likely need clarity here.

Q: Can AI prescribe controlled substances? – Even with new authority, federal and state rules around controlled substances (e.g., DEA EPCS requirements) would still apply. High-risk meds would likely require human oversight or be excluded. See: DEA EPCS FAQ.

Q: What about my data privacy? – Clinical deployments are typically covered by HIPAA, but wearable data may not be unless routed through covered entities. Read privacy policies and opt out of data sharing you don’t want. More: HHS HIPAA.

Q: How accurate is AI today? – It varies by task. In imaging, AI can match or exceed specialists for well-defined problems in research settings. In open-ended diagnosis and complex care, performance is far less predictable and demands human oversight. Examples: Dermatology AI, Breast cancer screening AI.

Q: Do I have to use AI if this passes? – The details aren’t final, but patient choice and clear disclosure are core ethical principles. Many expect opt-outs and easy access to human clinicians.

Q: Which wearables “count”? – Medical-grade devices with validated accuracy will carry more weight. Consumer devices can be useful signals but may need confirmation. The bill’s “normalization” language points to harmonizing data, not blindly trusting every step count.

Q: How will my clinician’s job change? – Expect more oversight of AI-driven workflows, plus relief from routine burdens (documentation, refills). New skills will include interpreting AI outputs, spotting bias, and knowing when to escalate.

Q: When could this take effect? – It’s early. A bill must move through committees, votes, and implementation guidance. Even then, only FDA-cleared tools would qualify, and organizations would need to integrate them safely.

Clear Takeaway

AI is poised to become your first touchpoint for routine healthcare—triaging symptoms, managing refills, and watching your vitals in the background—if Congress codifies a rigorous, safety-first framework. The prize is faster, more affordable care. The price of getting it wrong is patient harm and broken trust. The win is not replacing doctors, but giving them superpowers—and giving patients safe, timely care wherever they are.

Discover more at InnoVirtuoso.com

I would love some feedback on my writing, so if you have any, please don’t hesitate to leave a comment here or on any platform that’s convenient for you.

For more on tech and other topics, explore InnoVirtuoso.com anytime. Subscribe to my newsletter and join our growing community—we’ll create something magical together. I promise, it’ll never be boring! 


Thank you all—wishing you an amazing day ahead!
