OpenAI Updates ChatGPT Policy to Ban Legal, Medical, and Other Professional Advice
OpenAI has revised its ChatGPT usage policy to forbid using the AI system to offer legal, medical, or other advice that requires a professional license.
The changes took effect on October 29 and are outlined in the company’s official Usage Policies.
The new rules prohibit users from using ChatGPT for:
Consultations that require professional certification (such as medical or legal advice)
Facial or personal recognition without consent
Making critical decisions without human oversight in areas like finance, education, housing, migration, or employment
Academic misconduct or manipulation of evaluation results
According to OpenAI, the revised policy aims to improve user safety and prevent potential harm that could arise from using the system beyond its intended scope.
NEXTA reports that ChatGPT will no longer provide direct financial, legal, or medical advice. Officially, the platform is now described as an “educational tool” rather than a “consultant.”
The move is reportedly driven by regulatory and liability concerns, with OpenAI seeking to reduce its exposure to lawsuits.
Rather than giving specific guidance, ChatGPT will now focus on explaining principles, outlining general mechanisms, and recommending that users consult a qualified professional, whether a doctor, lawyer, or financial expert.
The new rules are specific: there will be “no more naming medications or giving dosages… no lawsuit templates… no investment tips or buy/sell suggestions.”
This policy shift directly addresses long-standing concerns about AI overstepping ethical and professional boundaries.