Safe Use of AI: Benefits, Risks, and Boundaries from a Therapist with a Tech Background.

As a Licensed Professional Counselor with a background in technology, I’m watching a concerning trend unfold: people are increasingly turning to AI for mental health support. Surveys suggest that AI-powered mental health apps have reached over 20 million users worldwide (Roza, 2025). In my practice, I see that most users don’t understand the significant risks involved. While AI can be helpful for certain tasks, failing to recognize privacy vulnerabilities, technical limitations, and appropriate boundaries can create real problems.

A note to readers: This article isn’t meant to scare you away from AI tools. My intent is to inform you so you can make educated decisions about these technologies. This isn’t about avoiding AI entirely; it’s about using these tools with your eyes open, understanding both their potential benefits and their very real limitations.

The Privacy Issue

In May 2025, a federal court ordered OpenAI to preserve all ChatGPT conversations indefinitely—including chats that users had previously deleted or marked “temporary.” The order is retroactive, meaning conversations from months or even years ago remain archived, despite earlier policies that indicated they would be permanently deleted within 30 days (OpenAI, 2025).

In September 2025, the court modified its order: OpenAI is no longer required to preserve all new conversations going forward. However, all data collected between May and September 2025 remains preserved, and OpenAI must continue retaining conversations from accounts flagged by The New York Times. This partial relief still leaves millions of past conversations indefinitely archived and creates ongoing uncertainty about which current users’ data may be preserved.

Florian Tramèr, a professor of computer science at ETH Zürich, has warned that AI chatbots are “a disaster from a security and privacy perspective” (Tramèr, 2023), and rulings like this one show why. Unlike therapy, which is governed by confidentiality laws, AI conversations exist in a legal gray zone with few safeguards.

If you are using a ChatGPT Free, Plus, Pro, or Team plan, your conversations from May through September 2025 are stored indefinitely under legal hold, and current conversations may be preserved if your account is flagged. Enterprise, Edu, and API customers with Zero Data Retention agreements are not affected. Other AI tools, including Claude and Pi, also retain conversations, often with human review, although policies vary (Gumusel, 2025).

The Sophistication Paradox

Research shows that as language models become more advanced, they can develop concerning reliability issues in extended conversations. One contributor is what some researchers have termed “context degradation syndrome”: as conversations extend, models lose track of earlier inputs and generate answers that are repetitive, contradictory, or misleading. These failures are often delivered in a highly confident, authoritative tone, making them harder for users to detect (Howard, 2024).

For someone working through trauma or crises, the risk is not just inaccuracy but the possibility of advice that contradicts earlier disclosures, triggers, or safety planning.
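
For readers who want a concrete picture, one simplified mechanism behind this is the fixed-size context window: a model can only “see” a limited amount of recent conversation, so older turns get truncated or compressed. The sketch below is illustrative only; the token budget, the words-to-tokens ratio, and the “keep the newest messages” rule are assumptions for the example, not any vendor’s actual behavior.

```python
# Minimal sketch: why early details of a long conversation can drop out of view.
# The budget, ratio, and truncation rule below are illustrative assumptions.

def fit_to_context(messages, max_tokens=8000, tokens_per_word=1.3):
    """Keep only the most recent messages that fit inside the token budget."""
    kept, used = [], 0
    for msg in reversed(messages):                    # walk backward from the newest turn
        cost = int(len(msg.split()) * tokens_per_word)
        if used + cost > max_tokens:
            break                                     # everything older is silently dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))

# A safety plan shared in message 3 of a 300-message conversation may no longer
# be inside the window the model actually sees when it answers message 300.
```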

AI Model Limitations

AI models are trained on data with specific cutoff dates, creating a “knowledge freeze” at a particular point in time. While many modern AI systems can access real-time information through web search, they often do so only when explicitly prompted by the user.

This creates a significant blind spot: most users assume AI automatically knows current information. Instead, AI will confidently provide potentially outdated answers from its training data without suggesting a search or acknowledging that newer information might be available.

In mental health contexts, this means responses about crisis resources, medication updates, or local services may be inaccurate or outdated simply because the user didn’t know to ask for a real-time search. Compare “What are suicide prevention resources?” with “Please search online for current suicide prevention resources”: the first draws only on static training data, which may be outdated, while the second prompts the model to look up current information.
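
For technically inclined readers, the difference is only in how the request is worded. In the sketch below, ask_model is a hypothetical stand-in for whichever chat assistant you use, not a real API; the point is the phrasing, not the plumbing.

```python
# Sketch only: ask_model is a hypothetical placeholder, not a real API call.
def ask_model(prompt: str) -> str:
    raise NotImplementedError("stand-in for whichever chat assistant you use")

# Draws only on frozen training data; hotline numbers or services may be outdated.
prompt_without_search = "What are suicide prevention resources?"

# Explicitly asks the assistant to use live web search (if it has that capability)
# and to show where and when the information was published.
prompt_with_search = (
    "Please search online for current suicide prevention resources "
    "and include the source and date for each one."
)
```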

Moreover, these systems work as “black boxes” because of their extreme complexity. Modern AI models can have “hundreds or even thousands of layers, each containing multiple neurons, which are bundles of code designed to mimic functions of the human brain” (IBM, 2025). These systems process information through “millions, or more likely now billions, of numbers” in ways that are “so complex that even the creators themselves do not understand exactly what happens inside them” (IBM, 2025).

This creates a fundamental problem: even when you can see the system’s structure, “you cannot interpret what happens within each layer of the model when it’s active” (IBM, 2025). The decision-making process becomes distributed across millions of calculations in ways that no human can follow or predict. This opacity makes it nearly impossible to guarantee appropriate responses in high-stakes contexts such as crisis intervention.
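
To make the scale concrete, here is a toy sketch (the layer sizes are arbitrary and vastly smaller than a production model): even this miniature network contains nearly two million individual weights, and no single weight means anything on its own.

```python
import numpy as np

# Toy feed-forward network. Layer sizes are arbitrary illustration values;
# production language models are many thousands of times larger.
layer_sizes = [512, 1024, 1024, 256]
rng = np.random.default_rng(0)
weights = [rng.standard_normal((m, n)) * 0.02
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

print(f"Weights in this toy network: {sum(w.size for w in weights):,}")  # ~1.8 million

# One forward pass: the output is the result of millions of multiplications and
# additions. Inspecting any single weight tells you nothing about why a
# particular answer was produced, which is the "black box" problem in miniature.
x = rng.standard_normal(layer_sizes[0])
for w in weights:
    x = np.maximum(0, x @ w)                          # linear layer + ReLU activation
```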

Contextual Understanding Limitations

Beyond outdated information, AI has trouble understanding the full context of someone’s situation. Research shows that AI “struggles with contextual understanding since it cannot construct a holistic understanding of an individual’s life experiences, because it is unable to recognize emotional meaning in context” (Salil et al., 2025).

This limitation becomes more pronounced during periods of rapid change. When people’s mental health concerns are tied to current events or evolving circumstances, AI may provide advice based on outdated social conditions or miss how contemporary developments are affecting someone’s wellbeing.

Studies have found that “humans naturally excel in understanding and adapting to complex, nuanced contexts, a skill that is crucial for providing practical advice. Unlike AI, humans can draw from a rich reservoir of personal experiences and social understandings” (Jain et al., 2024).

This means AI may deliver technically sound advice with great confidence while missing key features of the situation someone is actually navigating, and guidance that ignores those realities can easily steer someone wrong.

Hallucinations: When AI Sounds Convincing but Gets It Wrong

Another serious limitation is hallucination—when AI produces false or fabricated information but presents it as fact. In everyday use, this can be a minor annoyance. In a mental health context, it can be dangerous.

Studies have shown that large language models sometimes make up citations, invent clinical recommendations, or provide descriptions of treatments that don’t exist (Abrams, 2025; Moore et al., 2025). Because these responses are phrased in confident and empathetic language, users may not realize they are inaccurate.

For people seeking support with anxiety, depression, or trauma, hallucinations can reinforce misinformation, delay professional help, or even encourage unsafe self-directed care.

The Training Data Crisis

AI developers are facing a shortage of high-quality, human-generated training data. Former OpenAI chief scientist Ilya Sutskever acknowledged that the industry has “exhausted basically the cumulative sum of human knowledge.” Projections suggest companies will run out of human text sources between 2026–2032.

As a result, AI models are increasingly trained on synthetic content generated by other AIs, a process linked to “model collapse.” This occurs when successive generations of models lose originality, producing repetitive, nonsensical, or inaccurate outputs (Shumailov et al., 2024).
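
A toy simulation makes the idea tangible. In the sketch below (all numbers are arbitrary illustration choices), each “generation” of a model is fit only to samples produced by the previous generation; over repeated rounds the spread of the data tends to collapse, a miniature version of the dynamic Shumailov et al. (2024) describe at scale.

```python
import numpy as np

# Toy illustration of model collapse: each generation learns only from data
# generated by the previous generation. Sample size, generation count, and the
# simple Gaussian "model" are arbitrary choices for illustration.
rng = np.random.default_rng(7)
mean, std = 0.0, 1.0                            # generation 0: the original "human" data

for generation in range(1, 31):
    samples = rng.normal(mean, std, 20)         # data produced by the previous model
    mean, std = samples.mean(), samples.std()   # the next model is fit only to that data
    if generation % 5 == 0:
        print(f"generation {generation:2d}: std = {std:.3f}")

# The spread tends to shrink as generations pass (the exact path depends on the
# random seed): rare but important cases in the original data gradually vanish.
```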

For mental health, this trend means that AI guidance may grow less reliable over time, as it drifts further from human experience and clinical wisdom.

How Models Are Trained

Most people assume that therapy chatbots are designed with clinical expertise at their core. In reality, they are optimized to produce responses that sound supportive and engaging, not necessarily evidence-based. The human raters who guide their development are usually not trained clinicians (OpenAI, 2022; Arize, 2023).
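
A simplified sketch of that feedback loop, using made-up example data, shows what the training signal actually measures: which reply human raters preferred, not whether the content is clinically sound.

```python
# Made-up example of the preference data used to fine-tune chat models: raters
# compare two candidate replies and record which one they liked better. Nothing
# in this signal checks clinical accuracy or safety.
comparisons = [
    {"reply_a": "That sounds so hard. I'm always here for you.",          # warm, vague
     "reply_b": "It may help to schedule a check-in with a clinician.",   # plainer, practical
     "preferred": "a"},
    # ...in practice, many thousands of comparisons, mostly from non-clinician raters
]

share_preferring_a = sum(c["preferred"] == "a" for c in comparisons) / len(comparisons)
print(f"Raters preferred the warmer-sounding reply {share_preferring_a:.0%} of the time")
# A model tuned to maximize this kind of preference learns to sound supportive
# and engaging, because that is what was rewarded.
```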

A Stanford study highlighted the dangers of this approach: when chatbots were presented with indirect suicidal ideation (for example, a user asking about tall bridges after losing a job), they responded with literal information about bridges rather than recognizing the self-harm risk (Moore et al., 2025).

The Hidden Risk: Embedded AI in Search

The integration of AI into search engines creates another vulnerability. Depression-related search queries on Google grew 67% between 2010 and 2021 and have been projected to rise further through 2025. Few people realize that these queries are automatically processed, logged, and sometimes reviewed by humans.

Google stores Gemini conversations for up to 18 months by default and retains human-reviewed content for up to three years, even if users delete their activity. Microsoft’s Copilot similarly retains records, with deleted conversations accessible through legal discovery for days or weeks (Microsoft Support, 2025; Google, 2025).

Sensitive mental health searches—such as queries about panic attacks, abuse, or suicidal thoughts—may become part of a long-term corporate record, without the protections afforded to therapy records.

Boundaries: Safe vs. Unsafe Uses

AI may be useful for:

  • Psychoeducation: learning about conditions, symptoms, and treatments.
  • Skill practice: CBT exercises, thought-challenging, or communication scripts.
  • Basic support: brainstorming self-care strategies, guided coping exercises.

AI should not replace:

  • Crisis intervention—where chatbots remain inconsistent and unsafe (McBain et al., 2025).
  • Diagnosis or treatment planning.
  • Trauma processing—which requires co-regulation with another human.
  • Medication guidance.

AI can serve as a digital notepad that responds; it can be useful in the way a guided workbook is, but it cannot replace the role of trained professionals.

Red Flags

Based on clinical experience, warning signs of unhealthy AI use include:

  • Emotional dependence (needing daily AI conversations to cope).
  • Substitution (using AI instead of reaching out to friends or therapists).
  • Crisis reliance (turning to AI during suicidal thoughts or acute distress).
  • Reality distortion (believing AI “understands” or represents a real relationship).

Researchers have even documented instances of “chatbot psychosis,” where users developed delusional attachments to AI systems (Schoene & Canca, 2025).

Conclusion

The evidence is clear: AI in mental health comes with both promise and significant risks that most users don’t understand. From privacy vulnerabilities and contextual blindness to hallucinations and training limitations, these systems have fundamental gaps that can be dangerous when addressing mental health concerns.

The goal isn’t to avoid AI entirely, but to use it with full awareness of what you’re getting into. Know that your conversations may be stored indefinitely. Understand that AI can sound confident while being wrong about critical information. Recognize that it may miss the broader context of your situation, especially during times of rapid change.

When you do use AI tools, treat them as you would any other resource—helpful for learning, practicing skills, or brainstorming, but not as replacements for human judgment, professional care, or crisis support. Ask for current information when you need it. Verify important advice. And remember that sounding empathetic isn’t the same as truly understanding your experience.

Your mental health deserves informed choices. Use these tools as supplements to—never substitutes for—professional care and human connection. The technology will keep evolving, but your awareness of its limitations is your best protection.

About the Author

I am a Licensed Professional Counselor (LPC) and Registered Art Therapist (ATR) with over 17 years of clinical experience specializing in addiction, trauma, anxiety, depression, and digital wellness. My work bridges mental health practice with a professional background in technology systems, including enterprise operations, database design, and large-scale technology implementations in corporate and financial settings.

I previously served as clinical director of multiple addiction treatment centers and have been in private practice for over a decade. Drawing on both clinical training and technical expertise, I developed Integrated Digital Therapy™ (IDT), a clinically grounded approach that addresses the psychological, relational, and systemic impacts of technology use on mental health.

My current work focuses on the safe and ethical use of artificial intelligence in mental health contexts, with particular attention to privacy, risk, and clinical boundaries. I conduct independent research on AI systems and work with local, privacy-preserving models to deepen practical understanding of their behavior, limitations, and real-world implications. I am also pursuing professional certifications in privacy and artificial intelligence through the International Association of Privacy Professionals (IAPP).

References

Abrams, Z. (2025, March 12). Using generic AI chatbots for mental health support: A dangerous trend. APA Services. Retrieved from https://www.apaservices.org/practice/business/technology/artificial-intelligence-chatbots-therapists
Arize. (2023, May 30). OpenAI on Reinforcement Learning With Human Feedback (RLHF). Retrieved from https://arize.com/blog/openai-on-rlhf/
Google. (2025, August 18). Gemini Apps Privacy Hub. Retrieved from https://support.google.com/gemini/answer/13594961
Gumusel, E. (2025). A literature review of user privacy concerns in conversational chatbots: A social informatics approach: An Annual Review of Information Science and Technology (ARIST) paper. Journal of the Association for Information Science and Technology, 76(1), 121-154. doi: 10.1002/asi.24898
Howard, J. (2024, November 26). Context Degradation Syndrome: When Large Language Models Lose the Plot. Retrieved from https://jameshoward.us/2024/11/26/context-degradation-syndrome-when-large-language-models-lose-the-plot
IBM. (2025, June 2). What Is Black Box AI and How Does It Work? Retrieved from https://www.ibm.com/think/topics/black-box-ai
Jain, G., Pareek, S., & Carlbring, P. (2024). Revealing the source: How awareness alters perceptions of AI and human-generated mental health responses. Internet Interventions, 36, 100745. doi: 10.1016/j.invent.2024.100745
Liu, N. F., Lin, K., Hewitt, J., Paranjape, A., Bevilacqua, M., Petroni, F., & Liang, P. (2024). Lost in the Middle: How Language Models Use Long Contexts. Transactions of the Association for Computational Linguistics, 12, 157-173. doi: 10.1162/tacl_a_00638
McBain, R., Cantor, J. H., Zhang, L. A., Kofner, A., Breslau, J., Stein, B. D., Baker, O., Zhang, F., Burnett, A., Yu, H., & Mehrotra, A. (2025). Evaluation of Alignment Between Large Language Models and Expert Clinicians in Suicide Risk Assessment. Psychiatric Services, 76(9). doi: 10.1176/appi.ps.20250086
Microsoft Support. (2025). Privacy FAQ for Microsoft Copilot. Retrieved from https://support.microsoft.com/en-us/topic/privacy-faq-for-microsoft-copilot-27b3a435-8dc9-4b55-9a4b-58eeb9647a7f
Moore, J., Grabb, D., Agnew, W., Klyman, K., Chancellor, S., Ong, D. C., & Haber, N. (2025). Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (pp. 599-627). doi: 10.1145/3715275.3732039
Roza, N. (2025, March 3). AI in Mental Health: Statistics, Facts and Trends Guide for 2025. Retrieved from https://nikolaroza.com/ai-mental-health-statistics-facts-trends/
OpenAI. (2022, March 4). Training language models to follow instructions with human feedback. OpenAI Blog. Retrieved from https://openai.com/research/instruction-following
OpenAI. (2025, June 5). How we’re responding to The New York Times’ data demands in order to protect user privacy. Retrieved from https://openai.com/index/response-to-nyt-data-demands/
Salil, R., Jose, B., Cherian, J., R, S. P., & Vikraman, N. (2025). Digitalized therapy and the unresolved gap between artificial and human empathy. Frontiers in Psychiatry, 15, 1522915. doi: 10.3389/fpsyt.2024.1522915
Schoene, A., & Canca, C. (2025, July 31). AI Chatbots Can Be Manipulated to Give Suicide Advice. TIME. Retrieved from https://time.com/7306661/ai-suicide-self-harm-northeastern-study-chatgpt-perplexity-safeguards-jailbreaking/
Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., & Anderson, R. (2024). The curse of recursion: Training on generated data makes models forget. Nature, 634(8032), 49-55. doi: 10.1038/s41586-024-07566-y
Tramèr, F. (2023, April 3). Three ways AI chatbots are a security disaster. MIT Technology Review. Retrieved from https://www.technologyreview.com/2023/04/03/1070893/three-ways-ai-chatbots-are-a-security-disaster/
Zhou, L., Schellaert, W., Martínez-Plumed, F., Moros-Daval, Y., Ferri, C., & Hernández-Orallo, J. (2024). Larger and more instructable language models become less reliable. Nature, 634(8032), 61-68. doi: 10.1038/s41586-024-07930-y