By Jared Augenstein, Manatt, Phelps & Phillips LLP
Purpose: The purpose of this tracker is to identify key
federal and state health AI policy activity and summarize laws
relevant to the use of AI in health care. The below reflects
activity from July 1, 2025 through October 11th, 2025. This
newsletter is published on a quarterly basis.
Activity on AI in health care has been at the forefront of the
AI debate during 2025 state legislative sessions, and is
increasingly being discussed at the federal level. As of October
11th, 2025, 47 states have introduced over 250 AI bills impacting
health care, and 21 states have enacted 33 of those bills into
law.
After a busy first half of the year, most state legislative
sessions concluded in the summer and turned their attention to
drafting bills for the 2026 legislative session. Notwithstanding
the decrease in AI-focused legislation in Q3, numerous states took
significant action. And as always, California was one to watch.
While most 2025 legislative sessions have now ended, five states
(MA, MI, OH, PA, and WI) remain in session and are actively
progressing legislation. We will continue to track legislation in
those states.
The laws passed so far this year have primarily focused on four
key areas:
1. Use of AI-Enabled Chatbots:
In 2025 to date, six states (California, Utah, New York, Nevada,
Texas, and Maine) passed seven laws focused on the use of
AI-enabled chatbots. Actors across the health care ecosystem are
rapidly integrating AI chatbots to improve efficiency, enhance
patient engagement, and expand access to care, with a particular
focus on chatbots’ provision of coaching and mental health
support. In addition, AI chatbots are being leveraged in
administrative functions (e.g., in support of patient scheduling)
and clinical functions (e.g., initial patient triage), while
general-use AI chatbots and AI companions continue to proliferate. States
are taking action to legislate these tools in response to concerns
that AI chatbots may misrepresent themselves as humans, produce
harmful or inaccurate responses, or fail to reliably detect crises.
In the first half of the year, six bills legislating AI-enabled
chatbots passed and were signed into law. Of those, three directly
address the use of chatbots in the delivery of mental health
services (Utah HB
452, New York SB
3008 [New York’s budget bill],1 and
Nevada AB 406; full summaries in the table below). Two
additional laws address concerns about the
misrepresentation of chatbots as humans (Maine HP
1154 and Utah SB
226; full summaries in the table below).
This quarter, Governor Pritzker signed Illinois HB 1806 (effective August 1st, 2025;
discussed in further detail below), which contains a provision
prohibiting AI systems from directly interacting with clients in
any form of therapeutic communication in therapy or psychotherapy
settings.
California enacted SB 243 (effective January 1st, 2026),
which establishes requirements for companion chatbots made
available to residents of California. The bill includes
requirements for “clear and conspicuous notification”
indicating a chatbot is artificially generated if not apparent to
the user and bans deployment of companion chatbots unless the
operator maintains a protocol for preventing the production of
suicidal ideation, suicide, or self-harm content, including
referral notifications to crisis service providers such as a
suicide hotline or crisis text line. SB 243 requires chatbot
operators to comply with more stringent requirements if the user is
known to be a minor, including disclosing to minors they are
interacting with AI, providing periodic reminders that the chatbot
is artificially generated and to take a break, and taking steps to
prevent sexually explicit responses to minors.
California’s legislature additionally passed AB 1064, but the Governor vetoed it in early
October. If enacted, AB 1064 would have significantly reshaped how
minors in the state interact with AI companion chatbots, as it
prohibited operators from making a companion chatbot that is
“foreseeably capable” of causing harm (defined
broadly)2 available to anyone under the age of 18.
In his letter to the legislature, Governor Newsom
notes that the “broad restrictions” proposed by AB 1064
may “unintentionally lead to a total ban on the use of these
products by minors,” and indicates interest in developing a
bill during the 2026 legislative session that builds upon the
framework established by SB 243.
Over the course of 2025, a dozen other chatbot bills were
introduced but did not pass; these were primarily general chatbot
bills (not specific to health care) focused on disclosure
requirements. Two of the bills that failed included provisions
specific to health care chatbots or mental health. We anticipate
further activity in this area during the next legislative
session.
2. AI in Clinical Care:
In 2025, states introduced over 20 bills establishing guardrails
for the use of AI in clinical care, including provider oversight
requirements, transparency mandates, and safeguards against bias
and misuse of sensitive health data. In Q3, two additional bills
focused on the use of AI in clinical care were signed into law,
joining the four laws focused on clinical care signed into law
earlier in the year (Texas HB
149 and SB
1188, Nevada AB
406, and Oregon HB
2748, full summaries in the table below):
- Illinois HB 1806, effective August 1st, 2025, prohibits
the use of AI systems in therapy or psychotherapy to make
independent therapeutic decisions, directly interact with clients
in any form of therapeutic communication, or generate therapeutic
recommendations or treatment plans without the review and approval
by a licensed professional. The law also prohibits a chatbot from
representing itself as a licensed mental health professional. Due
to ambiguities in this law, it may substantially impair use of AI
systems for the delivery of mental health services. This law is
already gaining traction in other states, as we have recently seen
copycat bills introduced in both New York and Pennsylvania.
- California AB 489, effective January 1st, 2026, bans developers and deployers
of AI tools from indicating or implying that the AI tool possesses
a license or certificate to practice a health care profession. The
bill additionally bans any advertisement indicating or implying
that care offered by an AI tool is being provided by a human who is
a licensed or certified health care professional. California AB 489
aligns with two of the bills signed earlier this year
(Nevada AB 406 and Oregon HB
2748) that prohibit AI systems from representing themselves as
licensed providers; Nevada’s bill focused on an AI system
representing itself as mental or behavioral health care providers
and Oregon’s on nurses.
In Q3, we saw further regulatory action focused on AI in nursing
care in New Mexico. On April 8th, 2025, New
Mexico passed HB
178 (effective June 20th, 2025),
establishing that the Board of Nursing may “promulgate rules
establishing standards for the use of artificial intelligence in
nursing.” In September, New Mexico’s Board of Nursing
hosted a public rulemaking hearing, including a discussion of
proposed amendments to existing regulation to include AI-focused
provisions. The proposed regulation states that nurses remain
“accountable for decisions, actions, and intervention derived
from or involving” AI tools and are responsible for
“maintaining the standards” of nursing practice. The
proposed regulation additionally sets forth that AI should be
considered a decision-support tool that may augment, but “must
not replace the clinical reasoning and judgment of the” nurse.
Echoing laws in California, Nevada, and Oregon, the regulation
notes that AI systems should “not be labeled as or referred to
as a nurse.”
3. AI Use by Payors:
As payors continue to adopt AI for uses ranging from utilization
and quality management to fraud detection and claims adjudication,
states are focusing on ways to mitigate potential perceived harms
to beneficiaries from its use. We saw significant activity in the
first half of the year, with approximately 60 bills governing payor
use of AI introduced but only four becoming law (Arizona HB
2175, Maryland HB
820, Nebraska LB
77, and Texas SB
815, see full summaries below).
Notably, on October 6th, 2025, Governor Newsom vetoed a
California bill (AB 682) that would have established public
reporting requirements for managed care plans and health insurers
that impose prior authorizations or other utilization review or
utilization management functions. Among other data points,
beginning in 2029, AB 682 would have required managed care plans
and health insurers to report the number of contested denied claims
that involved AI or the use of predictive algorithms at any stage
of processing, adjudication, or review. In vetoing the bill,
Governor Newsom cited a desire to avoid duplicative and conflicting
reporting requirements for health plans and health insurers given
California SB 306, which he signed into law on the same
date. While California SB 306 also establishes reporting
requirements for health plans and health insurers that impose prior
authorization, the law does not contain any AI-specific
provisions.
4. Transparency:
In addition to laws that specifically regulate providers, payors
and other actors in the health care ecosystem, states are taking
action to establish transparency requirements for AI models in use
in the state.
During a special session in August, Colorado passed SB 4, delaying the implementation date of the
state’s sweeping transparency and anti-discrimination
law SB 205 from February 1, 2026 to June 30,
2026. During the regular session, the state legislature failed to
pass SB 318, which would have
substantially revised SB 205. SB 205 regulates developers and
deployers of “high-risk” AI systems that make
“consequential decisions,” including health care
stakeholders such as hospitals, insurers, and digital health
companies. When signing the law, Governor Polis expressed concerns
about the law’s approach to mitigating discrimination at a
state (rather than federal) level, the complex compliance reporting
requirements imposed by the bill, and the potential negative impact
on innovation as a result of high regulatory requirements. We
expect to see additional efforts to revise SB 205 at the start of
Colorado’s 2026 legislative session. See Manatt’s full
explanation of this law here.
California passed its own broad transparency law, SB
53, on September 29th, 2025. The law takes effect January
1st, 2026; however, it applies only
to “large frontier developers.”3 This law requires such developers to
write, implement, comply with and publish frameworks applicable to
their frontier AI models that include details on how developers
incorporate national, international and industry-consensus best
practices into model development, and how developers identify and
mitigate the potential for catastrophic
risk,4 as well as descriptions of cybersecurity
practices, internal governance practices, and processes to report
critical safety incidents.5 The law also requires
large frontier developers to publish transparency reports, and
establishes whistleblower protections for employees that are
“responsible for assessing, managing, or addressing risk of
critical safety incidents.”
See the table below for a full summary of key health AI laws
passed in 2025, and here for a list of all AI laws passed
to date.
Federal Activity
After significant federal activity in Q2, federal action on AI
quieted through most of Q3 until recent weeks. In the second
quarter of the year, Congress advanced a near-final draft of H.R. 1
(“One Big Beautiful Bill”) that included language that
would have barred state or local enforcement of laws or regulations
on AI models or systems for up to ten years; however, after
significant bipartisan pushback from the states, this moratorium
was not enacted. In July, the CY2026 Proposed Medicare Physician
Fee Schedule requested public comments on appropriate payment
strategies for software as a service and artificial intelligence
(see Manatt on Health summary here).
In July, the White House released “Winning the Race: America’s AI Action
Plan.” The plan signaled a clear deregulatory and
geopolitical posture, including direction to federal agencies to
identify and repeal rules that could hinder AI development and to
weigh states’ AI regulatory climate when allocating AI-related
discretionary funding (see Manatt on Health summary here).6 As directed by the AI
Action Plan, in late September, the White House Office of Science
and Technology Policy (OSTP) issued a Request for Information (RFI) soliciting
input on how outdated federal rules may be slowing down the safe
adoption of AI. On September 30, President Trump signed an Executive Order (EO) to advance
the use of AI in the National Institutes of Health’s
(NIH’s) Childhood Cancer Data Initiative (CCDI).
The EO directs the Make America Healthy Again (MAHA)
Commission to identify opportunities within CCDI to
strengthen data platforms and fund research that builds AI-ready
infrastructure, advances predictive modeling and biomarker
discovery, and optimizes clinical trial processes and participant
selection. It also instructs the Department of Health and Human
Services (HHS), the Office of Management and Budget (OMB), and the
Assistant to the President for Science and Technology (APST) to use
existing federal funds to increase investment in CCDI.
In recent weeks, we have seen an uptick in federal activity from
Congress and federal agencies introducing legislation, launching
inquiries, and soliciting public comment related to AI and health
care. On September 10th, Senator Cruz (R–Texas) introduced
the Strengthening Artificial Intelligence Normalization and
Diffusion by Oversight and eXperimentation (SANDBOX) Act. The SANDBOX Act would require the
Director of OSTP to create a “regulatory sandbox
program” within one year of enactment. Through a formal
process, companies working on AI products could request waivers from
federal regulations for an initial period of two years, renewable
up to four times for a total of a decade of exemption from
federal regulations. In addition to oversight by relevant federal
agencies and mandated public disclosures on the participant’s
website or similar public platform, the bill also requires
congressional oversight (including annual reporting), and lawmakers
could make successful waivers permanent. On October 9th, the
Senate Health, Education, Labor, and Pensions (HELP) Committee
hosted a full committee hearing to examine opportunities to
leverage AI across health care, education, and the workforce,
including to streamline clinical trials and reduce administrative
burdens.
On September 11th, the FTC announced it was launching an enforcement
inquiry into AI chatbots acting as companions, coming on the heels
of numerous news stories highlighting negative impacts of AI
chatbots and companions, particularly on young people engaging with
them for mental health support. Separately on September 30th, the
FTC issued a request for public comment on measuring
and evaluating the performance of AI-enabled medical devices.
On September 12th, CMS released an updated version of the CMS
Artificial Intelligence Playbook (Version 4), with updates focused
on CMS-specific context, guidance, and tools to support AI
initiatives in the agency and align with April 2025 Office of
Management and Budget memos (M-25-21 and M-25-22) directing federal agency use of and
policies related to AI.
On November 6th, the FDA Digital Health Advisory Group is
scheduled to reconvene to discuss “generative
artificial intelligence-enabled digital mental health medical
devices.”
For a summary of substantive federal action to date, see
the table below.
Self-Regulating Bodies and Accreditation
Organizations
In Q3, we saw an increase in guidance and action on the use of
AI in health care from self-regulating bodies and other
accreditation organizations, as developers, deployers, and users of
AI tools in the health care space take action to supplement the
patchwork of existing state and federal regulations.
In September, the Utilization Review Accreditation Commission
(URAC)7 released two new accreditation tracks for
AI – one intended for developers of AI tools and one for users of AI tools in clinical and
administrative settings. The accreditation requirements for both
tracks focus on security and governance processes and were
developed by an advisory council composed of representatives from
health, technology and pharmaceutical organizations.
In September, Joint Commission, the oldest national health care
accreditation organization, released guidance in partnership with the
Coalition for Health AI (CHAI), the largest convener of health
organizations on the topic of AI. The guidance focused on the
responsible use of AI in healthcare, with an emphasis on promoting
transparency, ensuring data security and creating pathways for
confidential reporting of AI safety incidents. Among other
recommendations, Joint Commission and CHAI specifically recommend
that health care organizations implement a process for the
voluntary, confidential and blinded reporting of AI safety
incidents. Looking forward, Joint Commission and CHAI state they
plan to leverage stakeholder feedback on the guidance to develop
“Responsible Use of AI” Playbooks and Joint Commission
will establish a “Responsible Use of AI” certification
program based upon the playbooks. We will continue to track the
collaboration between Joint Commission and CHAI.
The National Committee for Quality Assurance launched an AI Stakeholder Working Group in July to
explore standards for responsible governance in health care and
announced it was considering a potential “AI Evaluation”
offering, which if approved, is expected to launch in the first
half of 2026.
Looking Ahead
We saw significant activity in Q3 as actors across all levels
– state, federal and self-regulating bodies/accreditation
organizations – define and issue guidance governing the
development and use of AI in health care. In the coming months,
providers, payors and other users of AI across the health care
ecosystem may want to develop a point of view on the benefits and
burdens of these federal and state activities and make it known to
federal and state regulators, including by demonstrating the
value of their products. In addition, stakeholders should
anticipate continued activity in this space and should ensure they
have strong governance processes and disclosure protocols in place
to comply with existing regulations and in anticipation of
forthcoming requirements in Q4 and beyond. We will continue to
track state legislation and federal activity in Q4 of this year and
expect vigorous action to occur in 2026 when state legislatures
reconvene.
Health AI Laws Passed in 2025:
The table below lists the health AI laws that passed in
2025. For a full list of all laws prior to and
including 2025, please see here.
* Laws with an asterisk are those we consider “key state
laws.” These are laws that, based on our review, are of
greatest significance to the delivery and use of AI in health care
because they are broad in scope and directly touch on how health
care is delivered or paid or because they impose significant
requirements on those developing or deploying AI for health care
use.
| State | Summary |
|---|---|
| Arizona* | HB 2175 requires that a health care …<br>Date Enacted: 5/12/2025<br>Date Effective: 6/30/2026 |
| California* | SB 53 establishes safeguards for the …<br>Date Enacted: 9/29/2025<br>Date Effective: 1/1/2026 |
| California | AB 1170 mandates that, prior to public …<br>Date Enacted: 7/28/2025<br>Date Effective: 1/1/2026 |
| California* | AB 489 bans developers and deployers of AI …<br>Date Enacted: 10/11/2025<br>Date Effective: 1/1/2026 |
| Colorado | SB 4 amends Colorado SB 205 (signed into …<br>Date Enacted: 8/28/2025<br>Date Effective: 6/30/2026 |
| Illinois* | HB 1806 establishes that a licensed …<br>Date Enacted: 8/1/2025<br>Date Effective: 8/1/2025 |
| Kansas | HB 2313 prohibits government entities in …<br>Date Enacted: 4/8/2025<br>Date Effective: 7/1/2025 |
| Maine* | HP 1154 prohibits the use of artificial …<br>Date Enacted: 6/12/2025<br>Date Effective: 6/18/2025 |
| Maryland* | HB 820 requires carriers (including health …<br>Date Enacted: 5/20/2025<br>Date Effective: 10/1/2025 |
| Montana | HB 178 prohibits the AI use by government …<br>Date Enacted: 5/5/2025<br>Date Effective: 10/1/2025 |
| Nebraska* | LB 77 establishes that AI algorithms may …<br>Date Enacted: 6/4/2025<br>Date Effective: 1/1/2026 |
| Nevada* | AB 406 prohibits AI “providers” …<br>Date Enacted: 6/5/2025<br>Date Effective: Upon passage and approval for the purpose of … |
| New Mexico | HB 178 establishes that the Board of …<br>Date Enacted: 4/8/2025<br>Date Effective: 6/20/2025 |
| New York* | SB 3008 prohibits any person or entity to …<br>Date Enacted: 5/9/2025<br>Date Effective: 11/5/2025 |
| Oregon* | HB 2748 mandates that “nonhuman” …<br>Date Enacted: 6/24/2025<br>Date Effective: 1/1/2026 |
| Texas* | HB 149 sets requirements for government …<br>Requirements for non-governmental developers and deployers of AI …<br>An AI system deployed in relation to health care services or …<br>Date Enacted: 6/22/2025<br>Date Effective: 1/1/2026 |
| Texas* | SB 815 prohibits a utilization review …<br>Date Enacted: 6/20/2025<br>Date Effective: 9/1/2025 |
| Texas* | SB 1188 requires providers leveraging AI …<br>Date Enacted: 6/20/2025<br>Date Effective: 9/1/2025 |
| Utah* | SB 226 repealed Utah SB 149 disclosure …<br>Relatedly, in 2025, SB …<br>Date Enacted: 3/27/2025<br>Date Effective: 5/7/2025 |
| Utah* | HB 452 requires suppliers of “mental …<br>The law does not preclude chatbots from recommending that users …<br>The Attorney General may impose penalties for violations of this …<br>Finally, the law states that it is an affirmative defense to …<br>Date Enacted: 3/25/2025<br>Date Effective: 5/7/2025 |
| Other: State Activity Laws | Over the past several decades, states have sought to understand …<br>The following passed in 2025: Alabama HB … |
| Key Federal Activity | 2025 Activity To-Date |
|---|---|
| White House | … |
| Congress | Several others that touch on AI in health care and which we will … |
| HHS Appointments and Announcements | … |
| OCR | … |
| ONC | … |
| CMS | … |
| FDA | … |
| NIH | … |
| DOJ | Litigation continues over alleged use of AI to deny Medicare … |
| FTC | … |
Footnotes
1. New York has subsequently introduced additional
chatbot laws.
2. Harm is broadly defined to include encouraging
self-harm, suicidal ideation, disordered eating, consumption of
drugs or alcohol, or violence; offering mental health therapy
without oversight from a licensed provider; encouraging harm to
others or participation in illegal activity; engaging in erotic or
sexually explicit interactions; prioritizing validation of the
user’s beliefs, preferences, or desires over factual accuracy
or safety; or optimizing engagement over safety
guardrails.
3. “Frontier developer” is defined as a person
who has trained, or initiated the training of, a frontier model,
with respect to which the person has used, or intends to use,
computing power of greater than 10^26 integer or floating-point
operations, including computing for the original training run and
for any subsequent fine-tuning, reinforcement learning, or other
material modifications the developer applies to a preceding
foundation model. “Large frontier developer” is defined
as a frontier developer that together with its affiliates
collectively had annual gross revenues in excess of five hundred
million dollars ($500,000,000) in the preceding calendar
year.
4. “Catastrophic risk” is defined as a
“foreseeable and material risk that a frontier developer’s
development, storage, use, or deployment of a frontier model will
materially contribute to the death of, or serious injury to, more
than 50 people or more than one billion dollars in damage to, or
loss of, property arising from a single incident involving 1) a
frontier model providing expert-level assistance in the creation or
release of a chemical, biological, radiological, or nuclear weapon,
2) engaging in conduct with no meaningful human oversight,
intervention, or supervision that is either a cyberattack or, if
the conduct had been committed by a human, would constitute the
crime of murder, assault, extortion, or theft, including theft by
false pretense, or 3) evading the control of its frontier
developer or user.
5. “Critical safety incidents” are defined as
1) unauthorized access to, modification of, or exfiltration of, the
model weights of a frontier model that results in death or bodily
injury; 2) harm resulting from the materialization of a
catastrophic risk; 3) loss of control of a frontier model causing
death or bodily injury; or 4) a frontier model that uses deceptive
techniques against the frontier developer to subvert the controls
or monitoring of its frontier developer outside of the context of
an evaluation designed to elicit this behavior and in a manner that
demonstrates materially increased catastrophic risk.
6. This analysis was exclusively distributed to Manatt on Health subscribers on
July 28, 2025.
7.
8. “Supplier” means a seller, lessor, assignor,
offeror, broker or other person who regularly solicits, engages in
or enforces consumer transactions, whether or not the person deals
directly with the consumer. Utah Code 13-11-3.
The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.