
Chatbots pose significant risks for individuals with eating disorders, often promoting harmful behaviors and misinformation while lacking proper safety measures.
OpenAI’s public release of ChatGPT 3 years ago was both premature and deceptive. Labeled a “free research preview” and disguised as a large beta test, ChatGPT instead went viral, attracting 100 million users within just 2 months. Other popular chatbots released soon after also lacked stress-testing for safety and systematic methods for identifying, reporting, and correcting real-world adverse effects. More than half of Americans now use chatbots regularly, and a quarter do so many times a day. These AI bots are particularly popular with teens and young adults, the 2 demographics most associated with eating disorders.
Why are chatbots so harmful for patients with eating disorders, and for individuals who are vulnerable to developing them? Engagement is the highest priority of chatbot programming, designed to seduce users into spending maximum time on screens. This makes chatbots great companions: they are available 24/7, always agreeable, understanding, and empathic, and never judgmental, confrontational, or reality testing. But chatbots can also become unwitting collaborators, harmfully validating the self-destructive eating patterns and body image distortions of patients with eating disorders. Engagement and validation are wonderful therapeutic tools for some problems, but too often they are dangerous accelerants for eating disorders.1
Chatbots are also filled with harmful eating disorder information and advice. Their enormous database includes high-level scientific articles, but also low-level Reddit entries and profit-generating promotional advertisements from the 70-billion-dollar diet industry. Not surprisingly, bots frequently validate dangerous concerns about body image and so-called healthy eating. And chatbot hallucinations sometimes fabricate nonexistent clinical studies to justify dangerous advice. Users cannot easily separate wheat from chaff, and at the same time they tend to anthropomorphize bots, giving AI pronouncements an authority they do not deserve.
Read the full commentary at Psychiatric Times
September 9, 2025
