
Manatt Health: Health AI Policy Tracker – New Technology




Purpose: The purpose of this tracker is to identify key
federal and state health AI policy activity and summarize laws
relevant to the use of AI in health care. The below reflects
activity from July 1, 2025 through October 11, 2025. This
newsletter is published on a quarterly basis.

Activity on AI in health care has been at the forefront of the
AI debate during 2025 state legislative sessions and is
increasingly being discussed at the federal level: as of October
11, 2025, 47 states have introduced over 250 AI bills impacting
health care, and 21 states have enacted 33 of those bills into
law.

After a busy first half of the year, most state legislative
sessions concluded in the summer and turned their attention to
drafting bills for the 2026 legislative session. Notwithstanding
the decrease in AI-focused legislation in Q3, numerous states took
significant action. And as always, California was one to watch.

While most 2025 legislative sessions have now ended, five states
(MA, MI, OH, PA, and WI) remain in session and are actively
progressing legislation. We will continue to track legislation in
those states.

So far this year, the laws that passed have focused primarily on
four key areas:

1. Use of AI-Enabled Chatbots:

In 2025 to date, six states (California, Utah, New York, Nevada,
Texas, and Maine) passed seven laws focused on the use of
AI-enabled chatbots. Actors across the health care ecosystem are
rapidly integrating AI chatbots to improve efficiency, enhance
patient engagement, and expand access to care, with a particular
focus on chatbots’ provision of coaching and mental health
support. In addition, AI chatbots are being leveraged in
administrative functions (e.g., in support of patient scheduling)
and clinical functions (e.g., initial patient triage), and
general-use AI chatbots and AI companions are proliferating. States
are taking action to legislate these tools in response to concerns
that AI chatbots may misrepresent themselves as humans, produce
harmful or inaccurate responses, or not reliably detect crises.

In the first half of the year, six bills regulating AI-enabled
chatbots were passed and signed into law. Of those, three directly
address the use of chatbots in the delivery of mental health
services (Utah HB 452, New York SB 3008 [New York's budget bill],
and Nevada AB 406; full summaries in the table below). Two
additional laws address concerns about misrepresentation of
chatbots as humans (Maine HP 1154 and Utah SB 226; full summaries
in the table below).

This quarter, Governor Pritzker signed Illinois HB 1806 (effective August 1, 2025;
discussed in further detail below), which contains a provision
prohibiting AI systems from directly interacting with clients in
any form of therapeutic communication in therapy or psychotherapy
settings.

California enacted SB 243 (effective January 1, 2026),
which establishes requirements for companion chatbots made
available to residents of California. The bill includes
requirements for “clear and conspicuous notification”
indicating a chatbot is artificially generated if not apparent to
the user and bans deployment of companion chatbots unless the
operator maintains a protocol for preventing the production of
suicidal ideation, suicide, or self-harm content, including
referral notifications to crisis service providers such as a
suicide hotline or crisis text line. SB 243 requires chatbot
operators to comply with more stringent requirements if the user is
known to be a minor, including disclosing to minors they are
interacting with AI, providing periodic reminders that the chatbot
is artificially generated and to take a break, and taking steps to
prevent sexually explicit responses to minors.

California’s legislature additionally passed AB 1064, but the Governor vetoed it in early
October. If enacted, AB 1064 would have significantly reshaped how
minors in the state interact with AI companion chatbots, as it
prohibited operators from making a companion chatbot that is
“foreseeably capable” of causing harm (defined
broadly) available to anyone under the age of 18.
In his letter to the legislature, Governor Newsom
notes that the “broad restrictions” proposed by AB 1064
may “unintentionally lead to a total ban on the use of these
products by minors,” and indicates interest in developing a
bill during the 2026 legislative session that builds upon the
framework established by SB 243. 

Over the course of 2025, a dozen other chatbot bills were
introduced but did not pass. These were primarily general chatbot
bills (not specific to health care) focused on disclosure
requirements. Two of the bills that did not pass included
provisions specific to health care chatbots or to mental health.
We anticipate further activity in this area during the next
legislative session.

2. AI in Clinical Care:

In 2025, states introduced over 20 bills establishing guardrails
for the use of AI in clinical care, including provider oversight
requirements, transparency mandates, and safeguards against bias
and misuse of sensitive health data. In Q3, two additional bills
focused on the use of AI in clinical care were signed into law,
joining the four laws focused on clinical care signed into law
earlier in the year (Texas HB
149 and SB
1188, Nevada AB
406, and Oregon HB
2748, full summaries in the table below):

  • Illinois HB 1806, effective August 1, 2025, prohibits
    the use of AI systems in therapy or psychotherapy to make
    independent therapeutic decisions, directly interact with clients
    in any form of therapeutic communication, or generate therapeutic
    recommendations or treatment plans without review and approval
    by a licensed professional. The law also prohibits a chatbot from
    representing itself as a licensed mental health professional. Due
    to ambiguities in this law, it may substantially impair use of AI
    systems for the delivery of mental health services. This law is
    already gaining traction in other states, as we have recently seen
    copycat bills introduced in both New York and Pennsylvania.

  • California AB
    489, effective January 1, 2026, bans developers and deployers
    of AI tools from indicating or implying that the AI tool possesses
    a license or certificate to practice a health care profession. The
    bill additionally bans any advertisement indicating or implying
    that care offered by an AI tool is being provided by a human who is
    a licensed or certified health care professional. California AB 489
    aligns with two of the bills signed earlier this year
    (Nevada AB 406 and Oregon HB
    2748) that prohibit AI systems from representing themselves as
    licensed providers; Nevada's bill focuses on AI systems
    representing themselves as mental or behavioral health care
    providers and Oregon's on nurses.

In Q3, we saw further regulatory action focused on AI in nursing
care in New Mexico. On April 8, 2025, New Mexico passed HB
178 (effective June 20, 2025),
establishing that the Board of Nursing may “promulgate rules
establishing standards for the use of artificial intelligence in
nursing.” In September, New Mexico’s Board of Nursing
hosted a public rulemaking hearing, including a discussion of
proposed amendments to existing regulation to include AI-focused
provisions. The proposed regulation states that nurses remain
“accountable for decisions, actions, and intervention derived
from or involving” AI tools and are responsible for
“maintaining the standards” of nursing practice. The
proposed regulation additionally sets forth that AI should be
considered a decision-support tool that may augment, but “must
not replace the clinical reasoning and judgment of the” nurse.
Echoing laws in California, Nevada, and Oregon, the regulation
notes that AI systems should “not be labeled as or referred to
as a nurse.”

3. AI Use by Payors:

As payors continue to adopt AI for uses ranging from utilization
and quality management to fraud detection and claims adjudication,
states are focusing on ways to mitigate perceived potential harms
to beneficiaries. We saw significant activity in the first half of
the year, with approximately 60 bills governing payor use of AI
introduced, but only four became law (Arizona HB
2175, Maryland HB
820, Nebraska LB
77, and Texas SB
815; see full summaries below).

Notably, on October 6, 2025, Governor Newsom vetoed a
California bill (AB 682) that would have established public
reporting requirements for managed care plans and health insurers
that impose prior authorization or perform other utilization review
or utilization management functions. Among other data points,
beginning in 2029, AB 682 would have required managed care plans
and health insurers to report the number of contested denied claims
that involved AI or the use of predictive algorithms at any stage
of processing, adjudication, or review. In vetoing the bill,
Governor Newsom cited a desire to avoid duplicative and conflicting
reporting requirements for health plans and health insurers given
California SB 306, which he signed into law on the same
date. While California SB 306 also establishes reporting
requirements for health plans and health insurers that impose prior
authorization, the law does not contain any AI-specific
provisions.

4. Transparency:

In addition to laws that specifically regulate providers, payors
and other actors in the health care ecosystem, states are taking
action to establish transparency requirements for AI models in use
in the state.

During a special session in August, Colorado passed SB 4, delaying the implementation date of the
state’s sweeping transparency and anti-discrimination
law SB 205 from February 1, 2026 to June 30,
2026. The state legislature previously failed to pass SB
318 during the regular session, which would have
substantially revised SB 205. SB 205 regulates developers and
deployers of “high-risk” AI systems that make
“consequential decisions,” including health care
stakeholders such as hospitals, insurers, and digital health
companies. When signing the law, Governor Polis expressed concerns
about the law’s approach to mitigating discrimination at a
state (rather than federal) level, the complex compliance reporting
requirements imposed by the bill, and the potential negative impact
on innovation as a result of high regulatory requirements. We
expect to see additional efforts to revise SB 205 at the start of
Colorado’s 2026 legislative session. See Manatt’s full
explanation of this law here.

California passed its own broad transparency law, SB
53, on September 29, 2025. Effective January 1, 2026, the
law applies only to “large frontier
developers.” This law requires such developers to
write, implement, comply with and publish frameworks applicable to
their frontier AI models that include details on how developers
incorporate national, international and industry-consensus best
practices into model development, and how developers identify and
mitigate the potential for catastrophic
risk, as well as descriptions of cybersecurity
practices, internal governance practices, and processes to report
critical safety incidents. The law also requires
large frontier developers to publish transparency reports, and
establishes whistleblower protections for employees that are
“responsible for assessing, managing, or addressing risk of
critical safety incidents.”

See the table below for a full summary of key health AI laws
passed in 2025 and here for a list of all AI laws passed
to date.

Federal Activity

After significant federal activity in Q2, federal action on AI
quieted through most of Q3 until recent weeks. In the second
quarter of the year, Congress advanced a near-final draft of H.R. 1
(“One Big Beautiful Bill”) that included language that
would have barred state or local enforcement of laws or regulations
on AI models or systems for up to ten years; however, after
significant bipartisan pushback from the states, this moratorium
was not enacted. In July, the CY2026 Proposed Medicare Physician
Fee Schedule requested public comments on appropriate payment
strategies for software as a service and artificial intelligence
(see Manatt on Health summary here).

Also in July, the White House released “Winning the Race: America’s AI Action
Plan.” The plan signaled a clear deregulatory and
geopolitical posture, including direction to federal agencies to
identify and repeal rules that could hinder AI development and to
weigh states’ AI regulatory climate when allocating AI-related
discretionary funding (see Manatt on Health summary here). As directed by the AI
Action Plan, in late September, the White House Office of Science
and Technology Policy (OSTP) issued a Request for Information (RFI) soliciting
input on how outdated federal rules may be slowing down the safe
adoption of AI. On September 30, President Trump signed an Executive Order (EO) to advance
the use of AI in the National Institutes of Health’s
(NIH’s) Childhood Cancer Data Initiative (CCDI).
The EO directs the Make America Healthy Again (MAHA)
Commission to identify opportunities within CCDI to
strengthen data platforms and fund research that builds AI-ready
infrastructure, advances predictive modeling and biomarker
discovery, and optimizes clinical trial processes and participant
selection. It also instructs the Department of Health and Human
Services (HHS), the Office of Management and Budget (OMB), and the
Assistant to the President for Science and Technology (APST) to use
existing federal funds to increase investment in CCDI. 

In recent weeks, we have seen an uptick in federal activity from
Congress and federal agencies introducing legislation, launching
inquiries, and soliciting public comment related to AI and health
care. On September 10, Senator Cruz (R–Texas) introduced
the Strengthening Artificial Intelligence Normalization and
Diffusion by Oversight and eXperimentation (SANDBOX) Act. The SANDBOX Act would require the
director of OSTP to create a “regulatory sandbox
program” within one year of enactment. Through a formal
process, companies working on AI products could request waivers from
federal regulations for an initial period of two years, renewable
up to four times for a total of one decade of exemption from
federal regulations. In addition to oversight by relevant federal
agencies and mandated public disclosures on the participant’s
website or a similar public platform, the bill requires
congressional oversight (including annual reporting), and lawmakers
could make successful waivers permanent. On October 9, the
Senate Health, Education, Labor, and Pensions (HELP) Committee
hosted a full committee hearing to examine opportunities to
leverage AI across health care, education, and the workforce,
including to streamline clinical trials and reduce administrative
burdens.

On September 11, the FTC announced it was launching an enforcement
inquiry into AI chatbots acting as companions, coming on the heels
of numerous news stories highlighting negative impacts of AI
chatbots and companions, particularly on young people engaging with
them for mental health support. Separately, on September 30, the
FTC issued a request for public comment on measuring
and evaluating the performance of AI-enabled medical devices.

On September 12, CMS released an updated version of the CMS
Artificial Intelligence Playbook (Version 4), with updates focused
on CMS-specific context, guidance, and tools to support AI
initiatives in the agency and align to April 2025 Office of
Management and Budget memos (M-25-21 and M-25-22) directing federal agency use of and
policies related to AI.

On November 6, the FDA Digital Health Advisory Group is
scheduled to reconvene to discuss “generative
artificial intelligence-enabled digital mental health medical
devices.”

For a summary of substantive federal action to date, see
the table below.

Self-Regulating Bodies and Accreditation
Organizations

In Q3, we saw an increase in guidance and action on the use of
AI in health care from self-regulating bodies and other
accreditation organizations, as developers, deployers, and users of
AI tools in the health care space take action to supplement the
patchwork of existing state and federal regulations.

In September, the Utilization Review Accreditation Commission
(URAC) released two new accreditation tracks for
AI – one intended for developers of AI tools and one for users of AI tools in clinical and
administrative settings. The accreditation requirements for both
tracks focus on security and governance processes and were
developed by an advisory council composed of representatives from
health, technology and pharmaceutical organizations.

In September, Joint Commission, the oldest national health care
accreditation organization, released guidance in partnership with the
Coalition for Health AI (CHAI), the largest convener of health
organizations on the topic of AI. The guidance focused on the
responsible use of AI in healthcare, with an emphasis on promoting
transparency, ensuring data security and creating pathways for
confidential reporting of AI safety incidents. Among other
recommendations, Joint Commission and CHAI specifically recommend
that health care organizations implement a process for the
voluntary, confidential and blinded reporting of AI safety
incidents. Looking forward, Joint Commission and CHAI state they
plan to leverage stakeholder feedback on the guidance to develop
“Responsible Use of AI” Playbooks and Joint Commission
will establish a “Responsible Use of AI” certification
program based upon the playbooks. We will continue to track the
collaboration between Joint Commission and CHAI.

The National Committee for Quality Assurance launched an AI Stakeholder Working Group in July to
explore standards for responsible governance in health care and
announced it was considering a potential “AI Evaluation”
offering, which if approved, is expected to launch in the first
half of 2026.

Looking Ahead

We saw significant activity in Q3 as actors across all levels
– state, federal and self-regulating bodies/accreditation
organizations – define and issue guidance governing the
development and use of AI in health care. In the coming months,
providers, payors, and other users of AI across the health care
ecosystem may want to develop a point of view on the benefits and
burdens of these federal and state activities and make it known to
federal and state regulators, including by demonstrating the
value of their products. In addition, stakeholders should
anticipate continued activity in this space and should ensure they
have strong governance processes and disclosure protocols in place
to comply with existing regulations and in anticipation of
forthcoming requirements in Q4 and beyond. We will continue to
track state legislation and federal activity in Q4 of this year and
expect vigorous action to occur in 2026 when state legislatures
reconvene.

Health AI Laws Passed in 2025:

The table below summarizes the health AI laws passed in
2025. For a full list of all laws prior to and
including 2025, please see here.

* Laws with an asterisk are those we consider “key state
laws.” These are laws that, based on our review, are of
greatest significance to the delivery and use of AI in health care
because they are broad in scope and directly touch on how health
care is delivered or paid or because they impose significant
requirements on those developing or deploying AI for health care
use.

State

Summary

Arizona*

HB 2175 requires that a health care
provider individually, exercising independent medical judgment,
review claims and prior authorization requests prior to an insurer
denying a claim or prior authorization. The law bans the sole use
of any other source to deny a claim or prior authorization.


Date Enacted: 5/12/2025


Date Effective: 6/30/2026

California*

SB 53 establishes safeguards for the
development of frontier AI models (defined as a foundation model
that was trained using a quantity of computing power greater than
10^26 integer or floating-point operations). Sets requirements on
“large frontier developers” (defined as a person who has trained,
or initiated the training of, a frontier model and that, together
with its affiliates, has annual revenues of at least $500 million
in the preceding calendar year). Requires large frontier developers to
write, implement, comply with, and publish a frontier AI framework
applicable to their models; this framework must include details on:
how the developer incorporates national and international standards
and industry-consensus best practices; how the developer defines
and assesses thresholds used to identify and assess whether a
frontier model has capabilities that could pose a catastrophic
risk; mitigations to address the potential for catastrophic
risks; revisiting and updating the frontier AI framework, including
criteria that triggers updates and how the developer determines if
frontier models are modified enough to require disclosures;
cybersecurity practices; processes to report critical safety
incidents; internal governance practices and assessment; and
management of catastrophic risk resulting from the internal use of
its frontier models. Requires annual updates to framework. Requires
large developers to publicly publish transparency reports,
including summaries of assessments of catastrophic risks from the
frontier model, prior to or concurrently with deploying a new or
substantially modified frontier model. Requires developers to
regularly send a summary of any assessment of catastrophic risk or
dangerous capabilities resulting from internal use of its
frontier models to the Office of Emergency Services. Requires
the Department of Technology to issue an annual report with
recommendations on needed updates to definitions and thresholds.
Establishes a state-led initiative, CalCompute, to support the
development and deployment of AI that is safe, equitable, and
sustainable. Establishes whistleblower protections for covered
employees, defined as employees “responsible for assessing,
managing, or addressing risk of critical safety
incidents.”


Date Enacted: 9/29/2025


Date Effective: 1/1/2026

California

AB 1170 mandates that, prior to public
release, developers of AI tools publish documentation on their
websites detailing the training data used in the development of the
system or service. This documentation must include information on
the datasets employed and their sources/owners; number of data
points in the dataset; a description of how the datasets further
the purpose of the AI system; timeframe during which data was
collected; a description of the types of data points; whether data
includes copyrighted content, personal information, or aggregate
consumer information; whether the datasets were purchased or
licensed by the developer; an explanation of any modifications made
to the datasets by the developer along with the purpose of those
modifications; and a statement indicating whether synthetic data
was used during development. Provides exemptions for generative AI
systems or services with the following purposes: 1) ensuring
security and integrity; 2) operation of aircraft in national
airspace; or 3) systems developed for national security, military,
or defense purposes made available only to a federal entity.


Date Enacted: 7/28/2025


Date Effective: 1/1/2026

California*

AB 489 bans developers and deployers of AI
systems, programs, devices, or technologies from using
“specified terms, letters, or phrases to indicate or imply the
possession of a license or certificate to practice a health care
profession” without actually having obtained the appropriate
license or certificate for that practice or program. Bans use of
terms, letters, and phrases in advertising of AI systems that
“indicates or implies” that the care offered by the AI
technology is being provided by a human who is a licensed or
certified health care professional.


Date Enacted: 10/11/2025


Date Effective: 1/1/2026

Colorado

SB 4 amends Colorado SB 205 (signed into
law in 2024) to delay the original effective date from February 1,
2026 to June 30, 2026.


Date Enacted: 8/28/2025


Date Effective: 6/30/2026

Illinois*

HB 1806 establishes that a licensed
professional (defined as individuals licensed to provide therapy or
psychotherapy services in the state) may use AI systems “only
to the extent the use meets the definition of permitted use of
artificial intelligence systems” (permitted use of AI systems
is defined as “use of artificial intelligence tools or systems
by a licensed professional to assist in providing administrative
support or supplementary support where the licensed professional
maintains full responsibility for all interactions, outputs, and
data use associated with the system”). Prohibits licensed
professionals from using AI tools for supplementary support unless
the patient or their legal representative is informed of the use of
AI and its specific purpose, and the patient or their legal
representative provides consent for the use of AI. Prohibits
licensed professionals from allowing an AI system to do any of the
following: “(1) make independent therapeutic decisions; (2)
directly interact with clients in any form of therapeutic
communication; (3) generate therapeutic recommendations or
treatment plans without review and approval by the licensed
professional; or (4) detect emotions or mental states.” Sets
exceptions for religious counseling, peer support, and
self-help/educational resources that are publicly available and do
not purport to offer therapy or psychotherapy services.


Date Enacted: 8/1/2025


Date Effective: 8/1/2025

Kansas

HB 2313 prohibits government entities in
Kansas from installing or using any AI “platform[s] of
concern” on state electronic devices owned or issued to an
employee by a state agency. Platforms of concern include DeepSeek
and any AI models controlled directly or indirectly by China
(including Hong Kong but excluding Taiwan), Cuba, Iran, North
Korea, Russia, or Venezuela.


Date Enacted: 4/8/2025


Date Effective: 7/1/2025

Maine*

HP 1154 prohibits the use of artificial
intelligence chatbots or similar technologies in trade and commerce
in a manner that may mislead or deceive consumers into believing
they are interacting with a human being, unless the consumer is
clearly and conspicuously notified that they are not engaging with
a human being.


Date Enacted: 6/12/2025


Date Effective: 6/18/2025

Maryland* 

HB 820 requires carriers (including health
insurers, dental benefit plans, pharmacy benefit managers that
provide utilization review, and any health benefit plans subject to
regulation by the state) to ensure that any AI tool used for
utilization review bases decisions on medical/clinical history,
individual circumstances, and clinical information; does not solely
leverage group datasets to make decisions; does not “replace
the role of a health care provider in the determination
process”; does not result in discrimination; is open for
inspection/audit; does not directly or indirectly cause harm; and
does not use patient data beyond its intended use. The law mandates
that AI tools may not “deny, delay or modify health care
services.”


Date Enacted: 5/20/2025


Date Effective: 10/1/2025

Montana

HB 178 prohibits the use of AI by government
entities to “classify a person or group based on behavior,
socioeconomic status, or personal characteristics resulting in
unlawful discrimination.” Requires government entities to
provide disclosures on any published material generated by AI that
has not been reviewed by a human.


Date Enacted: 5/5/2025


Date Effective: 10/1/2025

Nebraska*

LB 77 establishes that AI algorithms may
not be the “sole basis” of a “utilization review
agent’s” (defined as any person or entity that performs
utilization review) decision to “deny, delay, or modify health
care services” based in whole or in part on medical necessity.
The law requires utilization review agents to disclose use of AI in
the utilization review process to each health care provider in its
network, to each enrollee, and on its public website.


Date Enacted: 6/4/2025


Date Effective: 1/1/2026

Nevada* 

AB 406 prohibits AI “providers”
from “explicitly or implicitly” indicating that an AI
system is capable of providing or is providing professional mental
or behavioral health care. Prohibits providers of mental and
behavioral health care from using or providing AI systems in
connection to the direct provision of care to patients. Sets forth
that providers may use AI tools to support administrative tasks
provided that the provider 1) ensures that use complies with
all applicable federal and state laws governing patient privacy and
security of EHRs, health-related information, and other data,
including HIPAA, and 2) reviews the accuracy of any report, data, or
information compiled, summarized, analyzed, or generated by AI
systems. The law requires the state agency to develop public
education material focusing on, amongst other topics, best
practices for AI use by individuals seeking mental or behavioral
health care or experiencing a mental or behavioral health event.
Additionally, the law prohibits all public schools (including
charter schools or university schools) from using AI to
“perform the functions and duties of a school counselor,
school psychologist, or school social worker” as related to
student mental health. 


Date Enacted: 6/5/2025


Date Effective: Upon passage and approval for the purpose of
adopting any regulations and performing any other necessary
preparatory administrative tasks to carry out provisions of this
act. 7/1/2025 for all other purposes.

New Mexico

HB 178 establishes that the Board of
Nursing shall “promulgate rules establishing standards for the
use of artificial intelligence in nursing.”


Date Enacted: 4/8/2025


Date Effective: 6/20/2025

New York* 

SB 3008 prohibits any person or entity from
operating or providing an “AI companion” to someone in New
York unless the model contains a protocol to take reasonable efforts
to detect and address suicidal ideation or expressions of self-harm
expressed by the user. Requires protocols to, at a minimum: (1)
detect user expressions of suicidal ideation or self-harm, and (2)
refer users to crisis service providers (e.g., suicide prevention
and behavioral health crisis hotlines) or other appropriate crisis
services, when suicidal ideations or thoughts of self-harm are
detected. Requires that AI companion operators provide a
“clear and conspicuous” notification—either
verbally or in writing—that the user is not communicating
with a human; that notification must occur at the beginning of any
AI companion interaction, and at least every three hours after for
continuous interactions. Sets forth that the Attorney General has
oversight authority and can impose penalties of $15,000/day on an
operator that violates the law.


Date Enacted: 5/9/2025


Date Effective: 11/5/2025

Oregon*

HB 2748 mandates that “nonhuman”
entities, including AI tools, may not use the title of nurse or
similar titles, including advanced practice registered nurse,
certified registered nurse anesthetist, clinical nurse specialist,
nurse practitioner, medication aide, certified medication aide,
nursing aide, nursing assistant, or certified nursing
assistant.


Date Enacted: 6/24/2025


Date Effective: 1/1/2026

Texas*

HB 149 sets requirements for government
agency and non-governmental use of AI. Requirements for government
agencies include: mandating that government agencies using AI
systems that interact with consumers clearly and conspicuously
disclose to each consumer, before or at the time of interaction,
that the consumer is interacting with an AI system; prohibiting
government entities from using AI systems that produce social
scoring, or from developing or deploying an AI system that uses
biometric identifiers to uniquely identify individuals if that use
infringes on constitutional rights; and establishing an AI
Regulatory Sandbox Program and the "Texas Artificial
Intelligence Council." 


Requirements for non-governmental developers and deployers of AI
include: prohibiting deployers from deploying AI systems that aim
to “incite or encourage” a user to commit self-harm, harm
another person, or engage in criminal activity, and prohibiting
development or deployment of AI systems that discriminate.


An AI system deployed in relation to health care services or
treatments must be disclosed by the provider to the recipient of
health services or their personal representative on the date of
service, except in emergencies, when the provider shall disclose as
soon as reasonably possible.


Date Enacted: 6/22/2025


Date Effective: 1/1/2026

Texas*

SB 815 prohibits a utilization review
agent’s use of an automated decision system (defined as an
algorithm or AI that makes, recommends, or suggests certain
determinations) to “make, wholly or partly, an adverse
determination.” Adverse determinations are defined as
determinations that services are not medically necessary or
appropriate, or are experimental or investigational. Sets forth
that the use of algorithms, AI, or automated decision systems for
administrative support or fraud detection is allowable. Empowers
the Commissioner of Insurance to audit and inspect the use of such
tools.


Date Enacted: 6/20/2025


Date Effective: 9/1/2025

Texas*

SB 1188 requires providers leveraging AI
for diagnostic or other purposes to “review all information
created with artificial intelligence in a manner that is consistent
with medical records standards developed by the Texas Medical
Board.” In addition, a provider using AI for diagnostic
purposes must disclose the use of the technology to their
patients.


Date Enacted: 6/20/2025


Date Effective: 9/1/2025

Utah*

SB 226 repealed Utah SB 149's disclosure
provisions and replaced them with similar disclosure requirements
that apply in narrower scenarios. As with SB 149, the
law requires “regulated occupations” to prominently
disclose that they are using computer-driven responses before they
begin using generative AI for any oral or electronic messaging with
an end user. However, this disclosure is only required when the
generative AI is “high-risk,” which is defined as (a) the
collection of personal information, including health, financial or
biometric data and (b) the provision of personalized
recommendations that could be relied upon to make significant
personal determinations, including medical, legal, financial, or
mental health advice or services.


Relatedly, in 2025, SB
332 passed, which extended the repeal date of SB 149 to
July 1, 2027.


Date Enacted: 3/27/2025


Date Effective: 5/7/2025

Utah*

HB 452 requires suppliers of "mental
health chatbots" to clearly and conspicuously disclose that
the chatbot is AI technology and not a human at the beginning of
any interaction, before the user accesses features of the chatbot,
and any time the user asks or otherwise prompts the chatbot about
whether AI is being used. Prohibits "suppliers" of mental
health chatbots from:


  • Selling or sharing individually identifiable health information
    or user input with any third party, except if that information is
    (a) requested by a health care provider with a user’s consent;
    (b) provided to a health plan of a Utah user upon a user’s
    request; or (c) shared by the supplier to ensure the effective
    functionality of the tool, provided that the supplier and the
    recipient of such information comply with HIPAA regulations (as if
    the supplier were a covered entity and the other entity were a
    business associate).

  • Advertising a specific product or service during the
    conversation unless the chatbot clearly and conspicuously
    identifies the advertisement as an advertisement and clearly and
    conspicuously discloses any sponsorships, business affiliations, or
    agreements that the supplier has with third parties to promote the
    product or service. The law also prohibits any targeted
    advertisement based on the user’s input.


The law does not preclude chatbots from recommending that users
seek counseling, therapy or other assistance, as necessary.


The Attorney General may impose penalties for violations of this
law.


Finally, the law provides an affirmative defense to liability if
the supplier demonstrates that it maintained documentation
describing the development and implementation of the AI model in
compliance with the law and maintained a policy meeting a long list
of requirements, including ensuring that a licensed mental health
therapist was involved in the development and review process and
that procedures prioritize user mental health and safety over
engagement metrics or profit. For the affirmative defense to be
available, the policy must be filed with the Division of Consumer
Protection. 


Date Enacted: 3/25/2025


Date Effective: 5/7/2025

Other: State Activity Laws

Over the past several years, states have sought to understand
AI technology before regulating it. For example, states have
created councils to study AI and/or created AI-policy positions
within government in charge of establishing AI governance and
policy. States have additionally tracked use of AI technology
within state agencies. These bills reflect states’ interest in
the potential role of AI across industries, and potentially in
health care.


The following passed in 2025: Alabama HB
365, Arkansas HB 1958, California AB
979, Delaware HJR
7, Georgia SR
391, Hawaii SB
742, Kentucky SB
4, Maryland SB
705, Maryland HB
956, Mississippi SB
2426, Montana HJR
4, New York SB
822, Oregon HB
3936, Rhode Island SR
8, Texas HB 149 (certain provisions), Texas HB
3512, Texas SB
1964, and West Virginia HB
3187.

Key Federal Activity

2025 Activity To-Date

White House

  • The Trump Administration released "Winning the Race: America's AI Action
    Plan," which declares U.S. global dominance in AI a
    national imperative and outlines a comprehensive roadmap based on
    three key pillars: innovation, infrastructure, and international
    diplomacy.

  • On September 30, President Trump signed an EO to advance the use of AI in the
    NIH’s Childhood Cancer Data Initiative. The EO
    directs the MAHA Commission to identify opportunities
    within CCDI to strengthen data platforms and fund research that
    builds AI-ready infrastructure, advances predictive modeling and
    biomarker discovery, and optimizes clinical trial processes and
    participant selection, and instructs HHS, OMB, and the APST to use
    existing federal funds to increase investment in CCDI.

  • White House Office of Science and Technology Policy Request for Information (RFI) on
    regulatory reform for artificial intelligence.

  • Trump Executive Order revoked Biden Administration’s
    AI Executive Order (Jan 2025).

  • Policies on federal AI use and
    procurement (April 2025).

  • Public comment on AI Action Plan (concluded
    mid-March 2025).

  • OMB issued a memo focused on government adoption of AI
    services.

  • General de-regulatory approach and emphasis from administration
    on using AI to identify instances of fraud, waste, and abuse.

Congress

  • In October 2025, the Senate HELP Committee held a hearing to examine opportunities to
    leverage AI across health care, education, and the workforce,
    including to streamline clinical trials and reduce administrative
    burdens.

  • Initial drafts of H.R. 1 included a 10-year moratorium on state
    regulation of AI. After much debate, this provision was struck
    from the final law on July 4, 2025.

  • Bills to:
    • Establish an AI SANDBOX program
      (September 2025).

    • Establish the National AI Research
      Resource initiative (March 2025).

    • Allow AI and machine learning to
      prescribe medication (January 2025).

    • Allow Medicare payment pathway for
      AI-enabled devices (April 2025).


Several other bills touch on AI in health care; we will report on
them if they gain traction.

HHS Appointments and Announcements

  • In September 2025, HHS announced it doubled funding for the
    Childhood Cancer Data Initiative at the National Cancer Institute,
    designed to accelerate the development of improved diagnostics,
    treatments, and prevention strategies by leveraging AI, in line
    with President Trump’s EO on AI.

  • In May 2025, HHS designated Peter Bowman-Davis as acting Chief
    AI Officer.

  • In May 2025, Secretary Kennedy indicated that HHS is already
    leveraging AI in standard operations, with attention to advancing
    novel treatments. The week before the January inauguration, HHS
    announced it had filled three executive positions: Chief AI
    Officer (Dr. Meghan Dierks), Chief Data Officer (Kristen Honey), and Chief Technology Officer
    (Alicia Rouault), all central to Biden’s AI
    strategy roadmap. In mid-February, it was reported that all three
    executives were on administrative leave. As of October 15, 2025,
    the HHS Employee Directory indicates Kristen Honey
    remains an HHS employee, serving as Chief Data Officer. Alicia
    Rouault and Dr. Meghan Dierks left HHS in May 2025.

OCR

  • Non-Discrimination rule is subject to ongoing litigation; the
    first Trump Administration reversed a prior version of the
    rule.

ONC

  • In May 2025, ONC and CMS issued a request for information seeking public
    feedback on digital tools – including AI – that can
    improve Medicare beneficiary access, improve interoperability, and
    reduce administrative burden.

CMS

  • In September 2025, released an updated CMS
    Artificial Intelligence Playbook.

  • CY2026 Proposed Medicare Physician Fee Schedule requested
    public comments on appropriate payment strategies for software as a
    service and artificial intelligence.

  • On June 27, CMS launched a new model, the Wasteful and
    Inappropriate Service Reduction Model, to partner with technology
    companies that use AI to "improv[e] and expedit[e]" the
    prior authorization process compared to Original Medicare's
    existing processes and to reduce fraud for several
    services/products.

  • In its final rule for CY 2026, CMS chose not to
    finalize provisions regarding Medicare Advantage use of AI, but
    acknowledged the “broad interest” in AI and “will
    continue to consider the extent to which it may be appropriate to
    engage in future rulemaking in this area.”

  • Under the Meaningful Measures 2.0 strategy, CMS is prioritizing
    digital quality measures, including using AI to identify and
    address quality issues.

  • Dr. Mehmet Oz, the new administrator for CMS, has been reported
    as promoting the use of artificial intelligence at CMS, in
    particular to combat fraud, waste, and abuse, and possibly to
    use AI avatars instead of frontline health care workers as a way
    to reduce costs without compromising quality.

  • The Medicaid and CHIP Payment and Access Commission presented
    study findings on the use of AI in prior authorization processes
    in Medicaid (February 2025).

FDA

  • In September 2025, FDA issued a request for public comment on approaches
    to measuring and evaluating AI-enabled medical device performance
    in the real world.

  • Draft guidance for developers of AI-enabled
    medical devices.

  • In June 2025, launched "Elsa" to support departmental
    efficiency.

  • In May 2025, FDA announced completion of its first
    AI-assisted scientific review pilot and agency-wide AI rollout. In
    May 2025, Jeremy Walsh was hired as FDA’s chief AI officer and
    head of IT.

NIH


DOJ

  • Litigation continues over alleged use of AI to deny Medicare
    Advantage claims.

  • In June 2025, DOJ announced charges against over 300 defendants
    for participation in health care fraud schemes, with a parallel
    announcement from CMS on the successful prevention of $4 billion
    in payments for false and fraudulent claims.

FTC

  • In September 2025, FTC announced the launch of an inquiry into
    AI chatbots acting as companions, with particular attention to the
    impact of these chatbots on children and teenagers.

  • In September 2025, FTC issued a request for public comment on measuring
    and evaluating the performance of AI-enabled medical devices.

  • In January 2025, FTC published a blog post noting the
    department's focus on potential AI harms and reinforcing that
    existing laws apply to AI technologies. This post, and other
    posts focused on AI, were taken down in October 2025.

Footnotes

1. New York has subsequently introduced additional
chatbot laws.

2. Harm is broadly defined to include encouraging
self-harm, suicidal ideation, disordered eating, consumption of
drugs or alcohol, or violence; offering mental health therapy
without oversight from a licensed provider; encouraging harm to
others or participation in illegal activity; engaging in erotic or
sexually explicit interactions; prioritizing validation of the
user’s beliefs, preferences, or desires over factual accuracy
or safety; or optimizing engagement over safety
guardrails.

3. “Frontier developer” is defined as a person
who has trained, or initiated the training of, a frontier model,
with respect to which the person has used, or intends to use a
computing power of greater than 10^26 integer or floating-point
operations, including computing for the original training run and
for any subsequent fine-tuning, reinforcement learning, or other
material modifications the developer applies to a preceding
foundation model. “Large frontier developer” is defined
as a frontier developer that together with its affiliates
collectively had annual gross revenues in excess of five hundred
million dollars ($500,000,000) in the preceding calendar
year.

4. "Catastrophic risk" is defined as a
“foreseeable and material risk that a frontier developer’s
development, storage, use, or deployment of a frontier model will
materially contribute to the death of, or serious injury to, more
than 50 people or more than one billion dollars in damage to, or
loss of, property arising from a single incident involving (1) a
frontier model providing expert-level assistance in the creation or
release of a chemical, biological, radiological, or nuclear weapon,
(2) engaging in conduct with no meaningful human oversight,
intervention, or supervision that is either a cyberattack or, if
the conduct had been committed by a human, would constitute the
crime of murder, assault, extortion, or theft, including theft by
false pretense, or (3) evading the control of its frontier
developer or user.

5. "Critical safety incidents" are defined as:
(1) unauthorized access to, modification of, or exfiltration of the
model weights of a frontier model that results in death or bodily
injury; (2) harm resulting from the materialization of a
catastrophic risk; (3) loss of control of a frontier model causing
death or bodily injury; or (4) a frontier model that uses deceptive
techniques against the frontier developer to subvert the controls
or monitoring of its frontier developer outside of the context of
an evaluation designed to elicit this behavior and in a manner that
demonstrates materially increased catastrophic risk.

6. This analysis was exclusively distributed to Manatt on Health subscribers on
July 28, 2025.

7.

8. “Supplier” means a seller, lessor, assignor,
offeror, broker or other person who regularly solicits, engages in
or enforces consumer transactions, whether or not the person deals
directly with the consumer. Utah Code 13-11-3.

The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.
