
MAGI: Multi-Agent Guided Interview for Psychiatric Assessment

Published 25 Apr 2025 in cs.CL (arXiv:2504.18260v1)

Abstract: Automating structured clinical interviews could revolutionize mental healthcare accessibility, yet existing LLM approaches fail to align with psychiatric diagnostic protocols. We present MAGI, the first framework that transforms the gold-standard Mini International Neuropsychiatric Interview (MINI) into automatic computational workflows through coordinated multi-agent collaboration. MAGI dynamically navigates clinical logic via four specialized agents: 1) an interview-tree-guided navigation agent adhering to the MINI's branching structure, 2) an adaptive question agent blending diagnostic probing, explanation, and empathy, 3) a judgment agent validating whether participant responses meet the criteria of the current node, and 4) a diagnosis agent generating Psychometric Chain-of-Thought (PsyCoT) traces that explicitly map symptoms to clinical criteria. Experimental results on 1,002 real-world participants covering depression, generalized anxiety, social anxiety, and suicide risk show that MAGI advances LLM-assisted mental health assessment by combining clinical rigor, conversational adaptability, and explainable reasoning.

Summary

Multi-Agent Guided Interview for Psychiatric Assessment: A Technical Overview

Recent advances in the field of AI, particularly in the use of large language models (LLMs), have opened new pathways for automating processes traditionally dependent on human expertise. One such process is psychiatric assessment, which this paper addresses by developing a framework termed MAGI (Multi-Agent Guided Interview). MAGI seeks to overcome the challenges encountered by current LLM implementations in psychiatric settings by transforming the Mini International Neuropsychiatric Interview (MINI) into an operational computational workflow facilitated by a coordinated multi-agent system.

The MAGI Framework

The core of the MAGI framework relies on four specialized agents:

  1. Navigation Agent: Adheres strictly to the branching logic inherent in the MINI protocol, ensuring the comprehensive coverage of necessary nodes without deviation from the intended diagnostic sequence. This agent mitigates the risk of LLMs producing "hallucinations" by preventing topic detours until essential diagnostic inquiries are completed.

  2. Question Agent: Tailors the interaction to sustain participant engagement while conforming to the fundamental diagnostic intents. The agent dynamically adapts its questioning strategies, offering explanatory responses for ambiguous participant remarks and empathetic reactions when emotional distress is detected.

  3. Judgment Agent: Validates participant responses against MINI criteria, iteratively issuing clarification requests until a response can be confidently matched to the symptom in question. Its verdicts trigger the state transitions that the navigation agent uses to advance through the interview tree.

  4. Diagnosis Agent: Utilizes a Psychometric Chain-of-Thought (PsyCoT) reasoning paradigm. This structured approach links participant responses to DSM-5 compliant diagnostic criteria through interpretable reasoning phases, producing transparent clinical assessments.
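The coordination among these agents can be pictured as a tree walk in which a judgment step gates every transition and the accumulated evidence forms the PsyCoT trace. The sketch below is purely illustrative, not the authors' implementation: the `Node` structure, the keyword-based `judge` stand-in (a real system would make an LLM call here), and the trace format are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Node:
    question: str                 # diagnostic probe at this MINI node
    criterion: str                # symptom criterion the answer must address
    on_yes: "Node | None" = None  # branch taken when the criterion is met
    on_no: "Node | None" = None   # branch taken otherwise

def judge(response: str, criterion: str) -> bool:
    """Judgment-agent stand-in: crude keyword match instead of an LLM call."""
    return criterion.lower() in response.lower()

def run_interview(root: Node, answers: list[str]) -> list[tuple[str, bool]]:
    """Navigation-agent stand-in: walk the interview tree, recording
    PsyCoT-style (criterion, met) evidence for a downstream diagnosis step."""
    trace, node, i = [], root, 0
    while node is not None and i < len(answers):
        met = judge(answers[i], node.criterion)
        trace.append((node.criterion, met))
        node = node.on_yes if met else node.on_no
        i += 1
    return trace

# Two-node toy tree: depressed mood, then anhedonia.
tree = Node("Have you felt down most days?", "depressed",
            on_yes=Node("Lost interest in activities?", "interest"))
print(run_interview(tree, ["I have felt depressed lately", "no interest in hobbies"]))
# → [('depressed', True), ('interest', True)]
```

The key design point this illustrates is the separation of concerns: the tree encodes protocol logic once, so the question and judgment components can be swapped (keyword match, classifier, LLM) without touching the branching structure.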

Empirical Evaluation

The evaluation of MAGI was conducted on 1,002 real-world clinical interviews, each annotated by two expert psychologists to ensure label validity. The experiments demonstrated strong alignment with expert diagnoses across multiple psychiatric conditions, including depression, generalized anxiety disorder, social anxiety, and suicide risk. Notably, MAGI achieved high agreement with expert judgments, with higher accuracy than LLM baselines lacking structured diagnostic workflows.
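The summary reports agreement with expert diagnoses without naming the exact metric. A common chance-corrected choice for comparing two raters' categorical labels is Cohen's kappa, sketched here on made-up labels (`mdd`, `gad`, `none` are hypothetical diagnostic categories, not the paper's):

```python
from collections import Counter

def cohens_kappa(model: list[str], expert: list[str]) -> float:
    """Chance-corrected agreement between two diagnostic label sequences."""
    assert len(model) == len(expert) and model
    n = len(model)
    # Observed agreement: fraction of items with identical labels.
    p_obs = sum(m == e for m, e in zip(model, expert)) / n
    # Expected agreement under independent labeling with each rater's marginals.
    m_counts, e_counts = Counter(model), Counter(expert)
    p_exp = sum(m_counts[k] * e_counts[k] for k in m_counts) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

model  = ["mdd", "gad", "none", "mdd", "none", "gad"]
expert = ["mdd", "gad", "none", "none", "none", "gad"]
print(round(cohens_kappa(model, expert), 3))  # → 0.75
```

Kappa is preferable to raw accuracy here because psychiatric label distributions are skewed, and chance correction prevents a system from scoring well by predicting the majority class.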

Implications and Future Directions

Practically, MAGI promises to extend the accessibility of psychiatric assessment by embedding clinical rigor into automated systems, reducing bottlenecks inherent in traditional interview-driven diagnostics. Additionally, MAGI's transparent diagnostic pathway via PsyCoT introduces a level of interpretability keenly desired in clinical AI applications, fostering trust among healthcare providers.

Theoretically, the research presents a paradigm shift in leveraging LLMs not merely as conversationalists but as systematic contributors to structured diagnostic processes. Future work could integrate the system with electronic health records in real time, enabling longitudinal analysis of patient data for enhanced diagnostic accuracy and treatment planning.

While promising, the system remains constrained by the inherent complexity of psychiatric evaluation and will require continued refinement and validation on broader, more diverse populations. Future work may focus on improving emotion detection and handling comorbid conditions to ensure robust, scalable deployment across varied clinical settings. This research thus lays a substantial foundation for AI-driven mental healthcare tools, paving the way for deeper integration of AI into clinical psychology and psychiatry.
