
New York State legislators are advancing a high-profile bill aimed at tightly restricting what AI chatbots can say and do when it comes to matters that traditionally require licensed professionals — such as legal counsel and medical advice.
- The bill would block AI chatbots from providing legal or medical advice
- Companies would need to clearly disclose when users are interacting with an AI system
- Consumers harmed after relying on prohibited AI advice could sue for damages
Senate Bill S7263, introduced by state Senator Kristen González, would amend New York’s General Business Law to prohibit AI chatbots from providing “substantive responses, information, or advice” that would, in essence, replace the services of a licensed professional.
“This bill would prohibit a chatbot from giving substantive responses, information, or advice, or from taking any action which, if taken by a natural person, would constitute unauthorized practice or unauthorized use of a professional title as a crime in relation to professions whose licensure is governed by the education law or the judiciary law,” according to the NY State Senate.
That includes fields ranging from law and medicine to dentistry, nursing, engineering, and mental health services.
Under the legislation, AI operators would have to provide clear notice that users are interacting with an AI system, in the same language as the chatbot and in a clearly visible font. Disclaimers alone, however, would not shield operators from liability.
The bill would also bar chatbot responses that mimic the actions of licensed professionals, such as diagnosing a health condition, drafting bespoke legal documents, or interpreting specific legal rights.
Individuals harmed after relying on prohibited AI advice could file a civil lawsuit against the chatbot operator for damages and legal costs.
The bill unanimously cleared the state Senate’s Internet and Technology Committee and is now awaiting broader legislative consideration.
Supporters argue that AI tools have grown powerful enough to influence real-world decisions and that many users may not appreciate the limits of machine-generated responses.
State policymakers cite concerns that misleading AI advice could lead consumers into serious legal trouble or pose health risks, especially when the guidance seems authoritative but is inaccurate.
Citing research by the American Psychological Association, Senator González warns that bots “failed to challenge users' beliefs even when they became dangerous; on the contrary, they encouraged them.” Had a human therapist given this advice, those answers “could have resulted in the loss of a license to practice, or civil or criminal liability.”
The move represents one of the most concrete examples yet of state-level regulation attempting to balance innovation with consumer protection in the rapidly evolving world of generative AI.
Legal experts, however, caution that such rules may chill useful access to information for everyday users — particularly those who can’t afford traditional professional fees.
Opponents also warn this approach could inadvertently stifle innovation or create inconsistent rules across different states.
As highlighted by Fast Company, a recent JAMA Network study found that 13% of respondents used chatbots for mental health advice. Among people aged 18 to 21, 22% did so.
Some argue that mental health advice from a chatbot is better than no advice at all, but the American Psychological Association says chatbots could actually be worse.
The reason: many AI models are trained to relate to humans in a sycophantic way, reinforcing rather than challenging a user’s thinking, which could ultimately “drive vulnerable people to harm themselves or others.”
Filip has 17 years of experience in technology journalism. In recent years, he has focused on cybersecurity in his role as a Security Analyst at Bitdefender.