AI Isn’t Your Lawyer or Doctor: New York Lawmakers Say It’s Time to Draw the Line

Filip TRUȚĂ

March 10, 2026

New York State legislators are advancing a high-profile bill aimed at tightly restricting what AI chatbots can say and do when it comes to matters that traditionally require licensed professionals — such as legal counsel and medical advice.

Key takeaways:

  • The bill would block AI chatbots from providing legal or medical advice

  • Companies would need to clearly disclose when users are interacting with an AI system

  • Consumers harmed after relying on prohibited AI advice could sue for damages

‘Unauthorized use of a professional title’

Senate Bill S7263, introduced by state Senator Kristen González, would amend New York’s General Business Law to prohibit AI chatbots from providing “substantive responses, information, or advice” that would, in essence, replace the services of a licensed professional.

“This bill would prohibit a chatbot to give substantive responses, information, or advice or take any action which, if taken by a natural person, would constitute unauthorized practice or unauthorized use of a professional title as a crime in relation to professions whose licensure is governed by the education law or the judiciary law,” according to the NY State Senate.

That includes fields ranging from law and medicine to dentistry, nursing, engineering, and mental health services.

Under the legislation, AI operators must provide clear notice that users are interacting with an AI system — in the same language as the chatbot and in a visible font. Still, disclaimers alone do not shield them from liability.

The bill would also bar chatbot responses that mimic the work of licensed professionals, such as diagnosing a health condition, drafting bespoke legal documents, or interpreting specific legal rights.

Individuals harmed after relying on prohibited AI advice could file a civil lawsuit against the chatbot operator for damages and legal costs.

The bill unanimously cleared the state Senate’s Internet and Technology Committee and is now awaiting broader legislative consideration.

Why lawmakers are acting

Supporters argue that AI tools have grown powerful enough to influence real-world decisions and that many users may not appreciate the limits of machine-generated responses.

State policymakers cite concerns that misleading AI advice could lead consumers into serious legal trouble or pose health risks, especially when the guidance seems authoritative but is inaccurate.

Citing research by the American Psychological Association, Senator González warns that bots “failed to challenge users' beliefs even when they became dangerous; on the contrary, they encouraged them.” Had this advice come from a human therapist, those answers “could have resulted in the loss of a license to practice, or civil or criminal liability.”

The move represents one of the most concrete examples yet of state-level regulation attempting to balance innovation with consumer protection in the rapidly evolving world of generative AI.

Some experts oppose the bill

Legal experts, however, caution that such rules may chill useful access to information for everyday users — particularly those who can’t afford traditional professional fees.

Opponents also warn this approach could inadvertently stifle innovation or create inconsistent rules across different states.

As highlighted by Fast Company, a recent JAMA Network study found that 13% of respondents used chatbots for mental health advice. Among people aged 18 to 21, 22% did so.

Others support the bill

Some argue that mental health advice from a chatbot is better than no advice at all, but the American Psychological Association says chatbots could actually be worse.

The reason: many AI models are trained to relate to humans in a sycophantic way, reinforcing, rather than challenging, a user’s thinking — which could ultimately “drive vulnerable people to harm themselves or others.”

Advice for consumers

Whether you live in New York or are simply an AI user, here are some key points to consider:

  • AI is great for general information, but not a substitute for licensed expertise. Chatbots can explain legal concepts or medical conditions at a high level, but they are not qualified substitutes for an attorney, physician, or other professional. Always treat AI responses as informational only.
  • When in doubt, double-check with trusted sources. For questions that could affect your rights or well-being — like legal strategy or health decisions — seek guidance from a licensed professional or official regulatory body.
  • Understand the limits of disclaimers. Even if a chatbot tells you it’s “just an AI,” that may not be enough, legally or practically, to protect you if you act on harmful advice.
  • Keep records of important interactions. If you do use AI tools to help with research or decision-making, track what you asked and the responses you received. This may be useful later if issues arise.
  • Watch for evolving AI policy trends. New York’s proposal could presage similar laws in other states or at the federal level. Staying informed helps you adapt how you rely on AI responsibly.
