Responsible, ethical, and legal use of AI-enabled decision support systems in the military

Determine the principles and conditions necessary to ensure the responsible, ethical, and legal use of AI-enabled decision support systems (AI-DSS) in military operations.

Background

The paper surveys AI-enabled decision support systems (AI-DSS) used across military functions, noting that while such systems can enhance intelligence processing and targeting support, they also raise serious ethical, legal, and operational challenges. These include the exacerbation of cognitive biases, susceptibility to framing effects, and the risk of offloading morally weighty decisions to automated recommendations.

Although design and institutional mitigations have been proposed in the literature, the authors emphasize that many issues remain unresolved. They explicitly note that questions about how to use AI-DSS responsibly, ethically, and legally in military contexts remain open, motivating the need for more precise frameworks and governance approaches.

References

"Some of these problems have seen proposed solutions, but there remain many open questions regarding the responsible, ethical, and legal use of AI-DSS in the military domain."

Stop Saying "AI" (2602.17729 - Wood et al., 18 Feb 2026), Section 2, Decision Support