New York Wants to Ban the AI That Outscores Doctors
S7263 would ban AI from answering medical and legal questions, protecting billable hours while the people who can’t afford doctors lose their only option.
TL;DR
New York’s S7263 would make AI companies liable for chatbots answering questions across 13+ licensed professions, while over 900,000 New Yorkers lack insurance and 92% of low-income legal problems go unaddressed.
On the NEJM Image Challenge, a multimodal diagnostic test taken by over 60,000 physicians, every large language model tested significantly outperformed the doctors. Every single one. At p < 0.001. In neuroradiology, Claude 3.5 scored 80.4%. First-year radiology fellows scored 71.4%. Junior faculty scored 51.8%.
New York State’s Internet and Technology Committee voted 6-0 to advance the bill restricting such chatbots.
What S7263 Actually Does
Senate Bill S7263, sponsored by Sen. Kristen Gonzalez (D-Queens, Working Families Party), who chairs the committee that approved it, would hold AI companies civilly liable for chatbots that provide “substantive responses” in 13+ licensed professions: medicine, law, dentistry, nursing, psychology, social work, engineering, architecture, pharmacy, optometry, podiatry, physical therapy, and veterinary medicine. The bill reached the Senate floor calendar on February 26, 2026. A full chamber vote is imminent.
Ask a chatbot whether your landlord can legally withhold your security deposit? Unauthorized practice of law. Ask whether your symptoms match strep or mono? Unauthorized practice of medicine. Ask whether a wall in your renovation is load-bearing? Unauthorized practice of engineering.
The liability trap makes it worse. The bill explicitly states that proprietors “may not waive or disclaim this liability by notifying consumers that they are interacting with a non-human chatbot system.” The biggest disclaimer in the world won’t save you. You’re liable. The targets: OpenAI, Anthropic, xAI, Google, any company deploying a chatbot in New York.
Gonzalez frames the bill as public protection. But her own description of the broader legislative package says something different: it tackles “the urgent need to protect the workforce from their companies’ use of AI.” She says public safety. Her press release says workforce protection. Pick one.
The Evidence the Bill Ignores
The bill’s justification cites the American Psychological Association’s warning that chatbot therapists “could drive vulnerable people to harm themselves or others.” One hypothetical warning to the FTC.
The peer-reviewed data goes the other direction. A meta-analysis of 83 studies found no statistically significant difference between AI and non-expert physicians in overall diagnostic accuracy. On the NEJM Image Challenge, LLMs didn’t just match doctors. They blew past them. In neuroradiology, Claude 3.5 outperformed junior faculty by nearly 30 percentage points. The gap between 80.4% and 51.8% is the difference between a correct diagnosis and months of suffering through misdiagnosis.
Reason noted that Gonzalez “conveniently ignores studies that have found that companion chatbot use is associated with substantial reductions in anxiety, depression, and loneliness.”
The professionals themselves are already on board. The AMA’s 2026 survey found 81% of physicians now use AI professionally, more than double the 38% rate in 2023. AMA VP of Strategic Partnerships John Whyte said “AI has quickly become part of everyday medical practice.” The AMA created a Center for Digital Health and AI. Their position is that AI is a tool for physicians. They are not calling for bans on patient-facing AI.
Over 250,000 Americans die per year from medical errors, the third leading cause of death in the United States, according to Johns Hopkins researchers. The current system is killing people. The professionals doing the work want AI. But S7263 takes it away from their patients.
Who This Really Protects
Mario Cilento, New York State President of the AFL-CIO, supported the package: “AI is not a replacement for human judgment or jobs.”
Note the last word.
Co-sponsor Sen. Michelle Hinchey of the Working Families Party introduced the Workforce Stabilization Act alongside S7263. That companion bill would require employers to apply for permission to incorporate AI into their businesses and pay a “worker displacement surcharge.” Permission. To use a tool.
Three of the bill’s four sponsors represent the Working Families Party. The working families they claim to serve can’t afford the professionals this bill protects.
McKinsey estimates roughly $1 trillion of the $4 trillion the US spends on healthcare annually goes to administrative and clerical work. One dollar out of every four, on paperwork. AI is already eating into that waste. S7263 says: keep the waste.
Taylor Barkley, director of public policy at the Abundance Institute, called the ban “shortsighted at best and protectionist at worst.” Billable hours are what this bill effectively protects. Incumbents using “safety” to block technology that is measurably safer than the human alternative.
The People Who Get Locked Out
National health spending per person rose from about $2,900 in 1990 to $14,570 in 2023, roughly a fivefold increase. The licensed professions S7263 shields are the ones whose costs have become inaccessible to working families. AI was starting to push back.
The pitch from California and New York: entrepreneurs gone, billionaires gone, AI restricted, legal and medical information gatekept. But don’t worry, your health and legal access are totally safe with the same government that made both expensive, slow, and inaccessible. Absolute clown show.
Over 900,000 New Yorkers lack health insurance. For them, a chatbot isn’t replacing a doctor visit. It’s the only thing close to a doctor they can afford to ask.
On the legal side, 92% of civil legal problems faced by low-income Americans receive inadequate or no legal help, according to the Legal Services Corporation. 74% of low-income households face at least one civil legal problem per year. LawDroid launched LawAnswers AI nationwide in September 2025 specifically to fill this gap. Under S7263, deploying an AI legal aid tool in New York would be a lawsuit waiting to happen.
They’re not calling their lawyer when they have a legal question. They’re Googling it, or they’re going without.
In January 2026, the New York Times published “Stop Worrying, and Let A.I. Help Save Your Life.” When even the Times is saying AI in medicine is a net positive, a blanket liability bill is fighting the evidence.
AI isn’t replacing your doctor if you have a doctor. It’s the first doctor for millions who don’t.
The Regulatory Pile-On
S7263 doesn’t exist in isolation. New York already enacted the RAISE Act in early 2026, a broader regulatory framework targeting “frontier models” with penalties starting at 5% of compute costs, rising to 15% for repeat violations. Bloomberg Law called it “the blueprint for AI regulation to come.” S7263 stacks on top.
California went through this with SB 1047. Newsom vetoed it. New York is running the same play, harder.
The legitimate concern is real: AI can hallucinate medical information. A chatbot that gives a wrong drug interaction is dangerous. But the answer isn’t blanket prohibition. It’s regulating the application: accuracy benchmarks, clear disclaimers, reporting mechanisms for harmful outputs. The FDA has already cleared hundreds of AI medical devices through an application-level regulatory process that works. Regulators fought remote medical advice as “unauthorized practice” for years before COVID forced adaptation. The results were overwhelmingly positive. S7263 is re-fighting a battle that was already lost.
Machine learning pioneer Andrew Ng warned that laws like these “could have a huge impact on whether entrepreneurs and startups are allowed to keep on innovating.” You could build a carefully validated diagnostic tool that outperforms every doctor in the room and still face civil liability in New York.
What You Can Do
New York is making healthcare inaccessible for a generation. Now Sen. Kristen Gonzalez wants to gatekeep the one tool that was giving it back.
S7263 is on the New York Senate floor calendar. The full chamber vote hasn’t happened yet. The Assembly companion bill, A6545, hasn’t cleared committee. There are still chokepoints.
The bill creates a private right of action. Once it passes, every AI company serving New Yorkers faces a litigation threat for answering a medical question. That’s the mechanism that makes this sticky, and that’s why killing it is urgent.
If you live in New York, find your state senator and tell them to vote no on S7263. Over 900,000 uninsured New Yorkers. Ninety-two percent of low-income legal problems going unaddressed. The guilds want to keep it that way. Kill this bill before it has teeth.
Related Links
- NY State Senate Bill 2025-S7263 (NY Senate)
- New York's RAISE Act Is the Blueprint for AI Regulation to Come (Bloomberg Law)