New review finds chatbots, wearables, and predictive models can boost support—if privacy and rigor keep pace.
A new systematic review in BMC Psychiatry finds that artificial intelligence (AI) tools—ranging from text‑based chatbots to wearable‑driven monitors and predictive algorithms—are beginning to help with earlier detection, tailored self‑help, and ongoing support for people with mental health concerns. The authors conclude that AI can improve access and personalize care, but warn that data privacy, transparency, and study quality need to catch up.
What’s new
- Across the studies reviewed, AI tools improved engagement and supported symptom reduction, particularly through conversational agents (chatbots) and machine‑learning models that personalize prompts and coping strategies. The review highlights real‑world use of the Wysa app, where more frequent users reported greater improvement and 67.7% rated the app as helpful.
- The strongest effects tended to come from newer chatbots—those using generative AI, voice, or multimodal inputs—delivered via mobile apps and messaging platforms.
- Wearables and sensor‑based approaches (e.g., heart‑rate–informed monitoring) and models that combine questionnaire, language, and even brain‑imaging data are being tested to flag risk and tailor support sooner.
By the numbers
- The review started with 2,638 records; only 15 studies met inclusion criteria after screening. The PRISMA‑style flow diagram (Fig. 1, p. 6) shows 1,471 records after de‑duplication, 80 full‑text articles assessed, and 15 included. That small, methodologically varied evidence base limits how broadly today's results can be generalized.
Why it matters
- AI tools can extend support between visits and in places with limited clinical resources, offering immediate, anonymous, and personalized help. Studies included in the review suggest people disclose feelings to chatbots at levels comparable to human interactions—one reason these tools may help reduce barriers to first‑step support.
The catch
- Quality varied: several studies were rated only moderate, underscoring a need for larger, more rigorous trials and clearer reporting. The authors also flag ethical issues—privacy, data security, and algorithm transparency—and emphasize co‑design with patients and clinicians to build trust and ensure fair access.
Looking ahead
- If developed responsibly, AI could help spot problems earlier (even in prodromal stages) and support more precise, tailored interventions—complements to, not replacements for, professional care. Expect future research to focus on head‑to‑head trials, stronger safeguards for data, and inclusive design so benefits reach diverse communities.
Source: “The application of artificial intelligence in the field of mental health: a systematic review,” BMC Psychiatry (2025).
Dehbozorgi, R., Zangeneh, S., Khooshab, E., Nia, D. H., Hanif, H. R., Samian, P., Yousefi, M., Hashemi, F. H., Vakili, M., Jamalimoghadam, N., & Lohrasebi, F. (2025). The application of artificial intelligence in the field of mental health: a systematic review. BMC Psychiatry, 25(1), 132. https://doi.org/10.1186/s12888-025-06483-2
Editor’s note: This article is for information only and is not a substitute for professional medical advice.