Artificial intelligence is making its way into psychiatry as a tool for monitoring patients, recommending treatments, and detecting signs of depression or schizophrenia in speech patterns. But while the field is rapidly evolving, it is essentially unregulated, leaving psychiatrists to figure out how to help patients benefit from the technology without causing harm, says Jacques Ambrose, M.D., MPH, MBA, FAPA, a psychiatrist and chief clinical integration officer in the Department of Psychiatry at NewYork-Presbyterian and Columbia.
Dr. Ambrose and his colleagues recently published a commentary in the International Review of Psychiatry outlining the potential benefits of AI and the areas where caution is warranted. Below, he discusses those benefits and limitations, and encourages other psychiatrists to learn more about AI to help influence how it is used in the care of patients.
The Role of AI in Psychiatry
AI has the potential to make mental health care more accessible, more personalized, and more efficient with the right supervision. But it’s important to remember that AI is a tool, and tools can be used for both good and bad.

AI has the potential to improve access to mental health care, but it also raises concerns about bias and privacy.
One of the major potential advantages of AI is its ability to improve access to mental health care. There were many instances, especially during the COVID-19 pandemic, when patients desperately needed mental health support and turned to ChatGPT for psychotherapy, even though that's not what the model was trained to do. Since then, there has been an explosion of startups looking for ways to train large language models to serve as chatbots for individuals who are currently unable to access mental health services.
AI tools also have the potential to be powerful assistants for the psychiatry workforce. By automating many of the administrative tasks currently done by physicians, such as filling out forms and paperwork, AI can help psychiatrists focus on patient care and potentially reduce clinician burnout. But one of the most exciting opportunities I see is in personalized medicine. Large language models could analyze the vast amount of patient data in psychiatry to generate more tailored treatment recommendations than clinicians can offer today. This is where supervision of AI by the treating psychiatrist is essential, and where a partnership between the clinician and the technology could potentially improve outcomes for patients.
AI Risks to Watch For
Even as we see this explosion of activity and vigor around AI in medicine, we need to understand the potentially inappropriate uses and inadvertent negative outcomes of its application. To start, there is very little transparency about the data used to train AI models, which means those models could contain inherent biases that clinicians may not even know about.
Another significant concern is data privacy. Our patients receive disclosures about how their data will be used and stored, and they consent to those uses, but that's not the case when they input their information into AI applications like ChatGPT. I've worked with teens and adolescents who have disclosed very personal, specific, and identifiable information on those platforms. I know they were giving these details to get a better output, but the AI application is simply seeking to capture as much data as possible, and you don't know how that data will be used. As we move forward, sensitive patient data must be protected from both misuse and inappropriate storage.
Finally, we must consider the human-to-human connection that is an essential part of medicine, and especially psychiatry. Today, many AI scribes use audio recordings to assist with clinical documentation, but in psychiatry we know that the words a patient uses tell only part of the story. Nonverbal cues, like avoiding eye contact or fidgeting, are missed when only the words are transcribed. AI has a role in assisting with decision-making, but it should be integrated into practice as a tool, not as a replacement for human-to-human interaction and the judgment of a trained psychiatrist.
I encourage the psychiatry community to get educated about AI's capabilities and limitations in mental health care, and to help patients understand them as well. For those who are interested in a more active role, you can advocate for how AI is integrated into health care, specifically by working to safeguard patient privacy and ensure that AI tools are transparent, evidence-based, and free from bias. Together, we can guide the development of AI to make sure it aligns with our ethics and the tenets of patient-centered care.