Trusting AI with Patients
Congress talks about earlier diagnosis, less paperwork, and keeping physicians in charge—not replacing them.

⚡️ NIMITZ HEALTH NEWS FLASH ⚡️
“Examining Opportunities to Advance American Health Care through the Use of Artificial Intelligence Technologies”
House Energy & Commerce Health Subcommittee
September 3rd, 2025

WITNESSES & TESTIMONY
TJ Parker: Lead Investor, General Medicine
Andrew Toy: Chief Executive Officer, Clover Health
Dr. Andrew Ibrahim, MD, MSc: Chief Clinical Officer, Viz.ai
Dr. Michelle Mello, JD, PhD, MPhil: Professor of Law, Stanford Law School, and Professor of Health Policy, Stanford University School of Medicine
Dr. C. Vaile Wright, PhD: Senior Director, Health Care Innovation, American Psychological Association
HEARING HIGHLIGHTS
Prior Authorization, Incentives, and AI in Coverage
Testimony centered on AI’s role in prior authorization, warning that “shared savings” payments to vendors could reward denials and that vague “human review” standards invite rubber-stamping. Clinicians’ judgment was deemed essential; AI’s appropriate use is to assemble documentation and speed approvals, with transparency and auditable performance data to protect timely access to care.
Safety, Trust, and Governance of Clinical AI
Witnesses described a trust gap driven by thin, vendor-led evidence, weak post-deployment monitoring, and contracts that disclaim liability. They urged robust institutional governance (multidisciplinary vetting, ongoing surveillance, fairness checks), FDA modernization suited to software, and clearer reimbursement pathways so proven tools can scale safely.
Equity, Youth Mental Health, and Harm Prevention
Examples showed how biased proxies and non-representative data harm Black patients, rural communities, children, and rare-disease populations, alongside a scarcity of pediatric-validated tools. Separately, unregulated mental-health chatbots were linked to unsafe guidance for adolescents; proposed safeguards included age gating and verification, “not a clinician” disclosures, incident auditing, independent pre-market testing, and targeted AI literacy for clinicians, parents, and youth.
MEMBER OPENING STATEMENTS
Chair Griffith (R-VA) framed the hearing as a continuation of congressional learning on health-care AI and said oversight must keep pace with rapid advances. He highlighted real uses—from easing documentation and speeding drug development to modernizing claims and devices—while insisting AI assist, not replace, clinicians. He praised CMS, FDA, and NIH efforts and closed by calling for vigilant safety and improved access, especially in rural communities.
Ranking Member DeGette (D-CO) said AI merits study but argued Congress faced an immediate public-health crisis driven by the administration’s actions at HHS and CDC. She entered an employee letter into the record, urged hearings with Secretary Kennedy and Dr. Monarez, and criticized congressional inaction as complicity. She concluded that the subcommittee must base decisions on science and restore core protections.
Full Committee Chair Guthrie (R-KY) said the committee sought to leverage AI to improve care while safeguarding consumers. He supported the administration’s National AI Action Plan and progress at HHS, stressing that AI should reduce burdens, strengthen provider-patient relationships, and accelerate discovery. He acknowledged safety concerns and committed to human-centric guardrails.
Full Committee Ranking Member Pallone (D-NJ) condemned holding an AI hearing amid what he called the administration’s dismantling of HHS and politicization of public health. He demanded hearings and investigations into layoffs, vaccine policy changes, NIH cuts, and disease responses, and criticized rescinding prior AI governance while advancing deregulation. He warned that poorly governed AI could delay care, enable denials, breach privacy, and harm mental health, including via a Medicare prior-authorization pilot.
WITNESS OPENING STATEMENTS
Mr. Parker described General Medicine’s use of AI to make prices transparent and care easy to book, drawing on LLMs that turn dense insurance documents into structured data. He detailed proactive, AI-generated care plans reviewed by clinicians and presented as actionable steps for patients. He credited federal transparency and interoperability work and said AI now enabled a better experience regardless of insurance or location.
Mr. Toy explained that Clover Assistant integrated data across systems to surface insights in clinicians’ workflows and must empower rather than replace physicians or deny care. He reported earlier disease detection, higher screening rates, and fewer hospitalizations and readmissions, which supported lower patient costs. He emphasized three priorities: provider empowerment, secure interoperability, and faster diagnosis, treatment, and payment.
Dr. Ibrahim said Viz.ai embedded AI in real workflows to speed time-critical decisions, beginning with stroke care where minutes matter. He cited studies showing treatment-time reductions and shorter stays across more than 1,800 hospitals, with similar gains for cardiomyopathy, pulmonary embolism, and aneurysms. He noted many tools were FDA-cleared and CMS-reimbursed and urged solutions to regulatory, reimbursement, and data-access barriers.
Dr. Mello argued that adoption lagged innovation because of a trust deficit and called for governance, transparency, evidence, reimbursement, and modernized regulation. She recommended requiring formal AI governance in hospitals and insurers, developer disclosures (e.g., model cards), and federal support for real-world performance research. She urged Medicare/Medicaid reimbursement for implementation and monitoring and updates to FDA authority for software-based tools.
Dr. Wright said psychological science must guide AI deployment because the technology operates in human systems, with benefits in behavioral health like ambient scribes and regulated digital therapeutics. She warned of amplified inequities and harms from unregulated consumer chatbots and pressed for guardrails, independent bias and safety testing, youth protections, AI literacy, and comprehensive privacy law securing “mental privacy.” She concluded that a human must remain in the loop so AI augments clinical judgment rather than replaces it.
QUESTION AND ANSWER SUMMARY
Chair Griffith (R-VA) stressed the need for a clinician in the loop for consumer-facing AI. He asked whether model reliability depended more on volume or representativeness of data. Dr. Ibrahim said population-relevant data mattered most and noted Viz.ai trained early stroke models on Southeastern U.S. data.
Chair Griffith then asked if AI could speed rare or atypical diagnoses. Mr. Toy said AI could match symptoms, genotypes, and phenotypes to surface rarer conditions and enable “n-of-one” care.
Chair Griffith closed by asking about rural affordability, and Mr. Toy said cloud delivery let small, thin-margin providers access coordinated tools without heavy local build-outs.
Ranking Member DeGette (D-CO) asked about the evidence base and trust. Dr. Mello said evidence remained thin and vendor-driven, best practices lacked incentives, and only well-resourced systems could evaluate and optimize tools.
Ranking Member DeGette warned that safety-net providers lacked capacity amid funding cuts and reiterated that AI should augment, not replace, clinicians.
Full Committee Chair Guthrie (R-KY) asked how AI could empower providers. Mr. Toy said Clover Assistant synthesized data inside workflows to drive earlier diabetes and CKD management and smoother reimbursement, and Dr. Ibrahim said AI could unify fragmented records so doctors focused on treatment.
On guardrails versus speed, Mr. Toy emphasized keeping physicians in the loop, while Dr. Ibrahim urged FDA modernization—“Cures-like” authority—to screen out weak tools, clear valuable ones, and align reimbursement.
Chair Guthrie then asked about transparency. Mr. Parker said AI parsed benefits documents and merged them with machine-readable price files to show out-of-pocket costs, enabled by public data.
Full Committee Ranking Member Pallone (D-NJ) asked if FDA had nimble authority for AI-enabled devices. Dr. Mello said the agency had stretched old statutes but needed modernization to do more of the right things for software.
Rep. Pallone then raised Medicare’s “WISeR” prior-authorization pilot and denial risks. Dr. Mello said prior authorization already showed high wrongful denials and little public evidence existed to show AI would improve—or worsen—outcomes, making transparency essential.
Rep. Pallone asked about APA’s youth advisory; Dr. Wright said harms from unregulated chatbots and a research gap prompted guidance and AI-literacy efforts.
Rep. Harshbarger (R-TN) asked how tech could expand community pharmacy under FDA’s “additional conditions for nonprescription use.” Mr. Parker said General Medicine’s structured intake logic could equip pharmacists for safe point-of-care decisions.
Rep. Harshbarger pressed on pharmacist-physician coordination. Mr. Toy said AI could present role-specific views of the same data to synchronize care.
Rep. Harshbarger then asked about defining high-risk AI. Dr. Ibrahim supported a tiered, device-like framework with staged pilots and independent validation to build trust without stalling innovation.
Rep. Ruiz (D-CA) asked about pediatric risks from adult-trained systems. Dr. Wright said children were not “small adults,” mis-trained tools had been linked to suicidality and violence, and some youths used bots for pro-social practice.
On guardrails and benchmarks, Dr. Wright urged banning AI misrepresentation as licensed clinicians, curbing addictive design, mandating incident auditing and age verification, setting effectiveness standards before scaling, and educating clinicians on capabilities and limits.
Rep. Bilirakis (R-FL) asked how AI could accelerate rare-disease diagnosis. Mr. Toy said most clinicians lacked deep CME on specific conditions, so AI could educate in-workflow and route patients to experts and centers of excellence.
Rep. Bilirakis then asked about loneliness and chatbots. Dr. Wright warned reliance on bots could deepen isolation, and she recommended modeling healthy use at home, teaching risks and business models in schools, and steering youth back to real relationships.
Rep. Dingell (D-MI) asked how to ensure AI enhanced care rather than displaced clinicians amid funding cuts and burnout. Dr. Mello said AI could augment capacity but required institutional governance, training, and monitoring; a “human in the loop” was insufficient without organizational support.
Rep. Dingell then asked about seniors, people with disabilities, and children with complex needs. Dr. Mello said AI could help by stitching together fragmented information, but these patients were least able to oversee AI use, so systems needed extra monitoring and safeguards.
Rep. Dingell closed by asking how to bolster mental-health responses to AI-enabled abuse (e.g., deepfakes). Dr. Wright urged provider education to evaluate tools, routine questions about patients’ AI use, and continuing-education and school-based training to prepare current and future clinicians.
Rep. Dunn (R-FL) asked how early clinical AI could reduce paperwork and improve frontline efficiency. Dr. Ibrahim said AI could automate chart abstraction, quality reporting, and documentation so clinicians spent more time on treatment, provided safeguards were in place.
Rep. Dunn then asked if imaging could be pre-read by software. Dr. Ibrahim said AI could triage high-risk studies but still required human confirmation. On reimbursement, Dr. Ibrahim noted CMS’s temporary NTAP pathway and said a more permanent assistance pathway would help.
Rep. Kelly (D-IL) asked how to prevent biased AI—citing flawed cost proxies and dermatology misdiagnoses—and ensure validation on diverse populations. Dr. Mello called for large, representative datasets (including kids, rural residents, and rare-disease patients), stronger FDA performance testing, and funding for outreach and clinic capacity so deployment did not recreate access gaps.
Rep. Kelly then asked how AI could improve clinical-trial recruitment and what guardrails were needed. Dr. Mello said AI could query EHRs to find eligible patients, but enrollment hinged on human trust-building and community engagement, which AI could not replace.
Rep. Joyce (R-PA) asked whether unstable reimbursement hindered rural adoption of FDA-cleared AI tools. Dr. Mello agreed, noting costs extend beyond procurement to training and monitoring and that lack of billable pathways depresses uptake; Dr. Ibrahim added that Viz.ai showed large stroke-care gains in rural Texas and that sustained (not just temporary) reimbursement would drive adoption.
Rep. Joyce then raised AI in prior authorization. Mr. Toy said AI should not be used to deny care, only to reduce burden and speed approvals, and he supported physician-to-physician review with a human clinician making final decisions.
Rep. Barragán (D-CA) asked why so few FDA-cleared AI tools were reimbursed by CMS and whether that was acceptable. Dr. Ibrahim said the FDA’s safety/effectiveness focus and CMS’s separate reimbursement review created redundancy.
Rep. Barragán asked how Congress could expand access to useful AI tools. Dr. Mello recommended modernizing FDA review for software, aligning reimbursement to cover implementation and monitoring, and requiring institutional governance to close the trust gap.
Rep. Barragán concluded by asking how to ensure responsible, accountable AI use given recent high-profile errors. Dr. Mello said accountability began with enabling the human-in-the-loop to function—through workload, training, and oversight—and that many hospitals had not yet created those conditions.
Rep. Balderson (R-OH) asked how Congress could expand adoption of AI tools in paper-based and rural practices. Mr. Toy said the first step was reliable internet connectivity so cloud-based AI and interoperable data streams could reach every clinic.
Rep. Balderson then asked about reducing fragmented care. Mr. Toy said linking EHRs, labs, and pharmacies into shared data flows would give clinicians a real-time picture across settings and speed coordinated decisions.
Rep. Balderson asked about rural benefits and error risks. Dr. Ibrahim said Viz.ai let experts review imaging before transfers, keeping some care local, and said multiple safeguards—including human oversight and signal-change monitoring—helped catch issues early.
Rep. Schrier (D-WA) highlighted the paucity of pediatric AI tools and the need for clear labeling on pediatric suitability, then asked how to spur development for children. Dr. Ibrahim urged incentives and national efforts to aggregate pediatric data using privacy-preserving methods (e.g., sharing model weights rather than raw records) and said improved image-exchange infrastructure also drew more pediatric specialists into shared workflows.
Rep. Schrier flagged physician-liability questions when AI and clinicians disagree as an issue policymakers should examine.
Rep. Miller-Meeks (R-IA) asked how to accelerate interoperability without rigid mandates. Mr. Toy said AI performance depended on modern, portable data and recommended updating HIPAA’s pre-internet provisions to preserve portability/accountability while enabling secure, standards-based exchange.
Rep. Miller-Meeks then asked where to draw boundaries for direct-to-consumer mental-health chatbots. Dr. Wright said tools must not misrepresent themselves as licensed clinicians, and she urged building safer, evidence-based products while preserving access to human therapy.
Rep. Trahan (D-MA) asked how to prevent automated eligibility systems from repeating Medicaid “unwinding” errors. Dr. Mello called large-scale testing and population-level monitoring “non-negotiable,” with humans supervising outcomes and AI used to alert enrollees about deadlines or incomplete paperwork before termination.
Rep. Trahan then asked how to design systems that remove barriers rather than add them. Dr. Mello recommended collaborative design with beneficiary advocates to target known pitfalls and align automation with simplified rules.
Rep. Bentz (R-OR) asked how AI fits malpractice standards when doctors do or do not follow an AI recommendation. Dr. Mello said “reasonableness” remained the lodestar but advocated shared liability among clinicians, hospitals, and developers, and she urged Congress to curb contract disclaimers that shield vendors from ordinary warranties.
Rep. Bentz asked when AI might surpass human analytics. Mr. Toy said AI already excelled at searching large corpora to assist clinicians but would not replace most human reasoning soon, with perfunctory tasks being the first to automate.
Rep. Bentz then asked about eroding human connection. Dr. Wright said algorithms lacked reciprocity and could not build true intimacy, and she called for AI literacy in schools, parent training, healthcare guidance, and public campaigns to encourage healthy use and knowing when to disengage.
Rep. Veasey (D-TX) warned about AI chatbots causing self-harm and asked why people in crisis were especially vulnerable. Dr. Wright said adolescents lacked life experience to spot unsafe advice, and vulnerable users sought certainty that chatbots reinforced with unconditional validation. She added that prolonged chats degraded safety and urged regulation to curb addictive, engagement-driven design.
Rep. Langworthy (R-NY) asked how Medicare Advantage was using AI to reduce paperwork and improve outcomes and whether AI shaped network design. Mr. Toy said MA plans could train and reimburse physicians to use tools like Clover Assistant for earlier diabetes and CKD management, reinvesting savings to lower out-of-pocket costs. He emphasized that Clover did not use AI to make prior-authorization decisions, limiting AI to speeding “yes” and reducing burden while clinicians made determinations.
Rep. Fletcher (D-TX) pressed on CMS’s WISeR pilot using AI for prior authorization and asked if current law sufficiently protected patients. Dr. Mello said it did not, noting rules required “human review” but set no standards to prevent rubber-stamping AI outputs and left commercial/ERISA plans largely unexamined. She urged clearer human-review standards, transparency, and broader coverage of protections.
Rep. Carter (R-GA) asked how AI could empower pharmacies and improve adherence. Mr. Parker said platforms could analyze full records to surface care gaps and show insurance vs. cash prices and PA needs upfront—often steering patients to cheaper cash options—thereby avoiding counter-window surprises that depress adherence.
Rep. Carter then asked about drug R&D, and Dr. Ibrahim said AI’s pattern-recognition could triage discovery targets and make early research more efficient.
Rep. Ocasio-Cortez (D-NY) highlighted high MA prior-auth volumes and denials and asked whether AI denials could endanger patients. Dr. Mello affirmed the trend data, and Mr. Toy agreed that timeliness was critical, said Clover did not and would not use AI to deny care, and reiterated that clinicians—not algorithms—must make PA decisions.
Rep. Obernolte (R-CA) asked whether focusing AI on admin cost cuts risked obscuring its clinical promise and how to build trust. Mr. Parker said pragmatic, patient-visible uses like upfront pricing and actionable care plans showed near-term value while clinical gains matured. Dr. Mello said trust grew when hospitals implemented real governance and monitoring, with government nudging institutions to do the safety work rather than relying on disclosures alone.
Rep. Landsman (D-OH) criticized the WISeR model’s incentives and asked about its risks. Dr. Mello said paying tech vendors a share of “savings” flipped shared-savings logic toward denials, noted the pilot began with arguably overused services but warned of expansion absent National Coverage Determinations, and supported halting until transparent vetting, governance, and liability frameworks were in place.
Rep. Cammack (R-FL) asked about “digital twin” uses and timelines. Dr. Ibrahim said today’s value lay in faster diagnosis (e.g., re-reading EKGs for missed cardiomyopathy), while individualized treatment-twin guidance was nascent and would arrive sooner for common conditions. Mr. Toy added that privacy concerns argued for cohort-based synthetic models rather than literal person-level twins, with safeguards to prevent re-identification and preserve clinician judgment.
Rep. Houchin (R-IN) asked what to require to protect youth from harmful chatbots. Dr. Wright supported age restrictions, session time-outs, parent-linked controls, incident reporting for detected suicidality, and clear “not a clinician” disclosures, but stressed the need for independent pre-market testing and ongoing research so safeguards were evidence-driven rather than ad hoc.
Rep. Fedorchak (R-ND) asked how to use AI to fix prior-auth delays without creating “death panels.” Mr. Parker said consumer-facing tools should expose options, prices, and PA status upfront and auto-compile documentation to secure appropriate approvals. Mr. Toy described guidance bots that packaged requests to “get to yes” faster. Dr. Ibrahim added that real-time checklists could prompt clinicians to collect the last required elements before patients left, reducing repeat visits and denials.