Name
#121 Responsible AI/ML in Veteran Mental Health: Strategies for Evaluation, Oversight, and Impact
Speakers
Content Presented On Behalf Of:
Other entity not listed
Session Type
Poster
Date
Tuesday, March 3, 2026
Start Time
5:00 PM
End Time
7:00 PM
Location
Prince Georges Expo Hall E
Focus Areas/Topics
Technology, Policy/Management/Administrative
Learning Outcomes
After reviewing this poster and engaging with our authors, the audience will be able to:
1. Describe key risks and opportunities associated with integrating AI/ML into Veteran mental health and suicide prevention programs.
2. Explain the human-in-the-loop approach and the importance of multidisciplinary oversight in AI/ML evaluation for safety.
3. Apply methods for evaluating AI/ML-enabled mental health programs to proactively safeguard Veteran well-being.
4. Identify practical strategies for addressing bias, equity, and privacy within AI/ML governance and evaluation frameworks.
5. Recognize actionable steps to strengthen evaluation capacity, transparency, and continuous improvement in AI/ML-enabled care for Veterans.
Description
Artificial Intelligence (AI) and machine learning (ML) systems are increasingly influential in mental health and suicide prevention programs, particularly those serving Veteran populations. As these technologies are introduced, rigorous safety practices, oversight, and evaluation are essential to identify and address new risks to Veteran well-being that may arise from integrating AI and ML into care settings. While AI and ML assist with predictive analytics, care navigation, and outreach, they also raise concerns about bias, privacy, and unintended outcomes. This poster shares practical strategies and real-world implementation examples showing how systematic evaluation identifies, mitigates, and monitors the risks of AI/ML adoption, enabling federal health leaders to harness innovation responsibly.
We will begin by outlining the expanding use cases for AI/ML in Veteran mental health, highlighting applications for detecting suicide risk and personalizing care. The poster will explore both the areas of promise and the new vulnerabilities that accompany integrating these technologies into sensitive care settings: while they promise greater reach and personalization, they can also introduce new sources of bias, privacy risk, and unintended consequences if not properly governed. Drawing on recent field experience, we will present evaluation frameworks that build strong safeguards and promote confidence in digital approaches. A central theme is human oversight, commonly known as the "human-in-the-loop" approach. Our approach emphasizes multidisciplinary input from Veterans, clinicians, technical specialists, and ethicists, so that solutions are applicable, ethical, and safe. Ongoing monitoring and transparent reporting help identify risks as they emerge and support continuous improvement over time.
To illustrate the power of evidence reviews and advanced analytic methods, we will describe a large-scale, AI-assisted review of global suicide prevention training programs. We will show how to identify what works, where potential safety gaps exist, and how findings translate into actionable, evidence-based recommendations for improvement. These methods expand the knowledge base and set new standards for transparency in evaluating technology-driven interventions. We will also present practical, step-by-step frameworks for adopting safe AI/ML governance in Veteran mental health and suicide prevention. Key evaluation areas, such as fidelity, impact, feasibility, and equity, are clearly defined, alongside oversight protocols, multidisciplinary review (including Veteran and clinician voices), and transparent methodology and findings.
We will also acknowledge real-world challenges, including legacy bias and inequities in data and the complexities of protecting participant privacy. We will demonstrate how evaluators flag emerging risks, support continuous learning, and adapt methods as technology advances. Viewers will encounter visual representations of oversight cycles, participatory engagement, and continuous improvement, alongside actionable tips for growing evaluation capacity.
Audience members will walk away with strategies for strengthening AI/ML governance, recommended evaluation tools, and a deeper appreciation for the role of evidence, transparency, and multidisciplinary collaboration in sustaining safe, Veteran-focused care.