Human-Centered Security Risk
- Human-centered security risk is defined as harm from exploiting cognitive, behavioral, and social vulnerabilities in human-technology interactions, incorporating subjective values and lived experiences.
- It employs mixed methodologies, combining qualitative community engagement with quantitative risk scoring to reveal complex situational hazards beyond traditional models.
- This approach emphasizes ethical principles like care, justice, and autonomy, driving co-designed interventions for safer, more equitable security practices.
Human-centered security risk denotes the set of security risks arising from cognitive, behavioral, and social factors that affect how individuals or communities perceive, are exposed to, and respond to threats. This risk is embedded in the interaction between people and technology, extending beyond technical system vulnerabilities to include lived experiences, values, and organizational and societal contexts. Contemporary research recognizes that effective risk management and threat modeling must explicitly integrate participant expertise, situated harms, and the ethical imperative to center values such as care, justice, and autonomy (Usman et al., 16 Nov 2025).
1. Core Definitions and Conceptual Distinctions
Human-centered security risk is defined as the probability and impact of harm resulting from an adversary exploiting human vulnerabilities—cognitive, emotional, social, or environmental—rather than (or in addition to) technical flaws. Formally, risk is often captured as:

$$\text{Risk} = \text{Threat} \times \text{Vulnerability} \times \text{Impact}$$

where vulnerability is contextual, including personality, knowledge gaps, emotional states, and sociotechnical positioning (Papatsaroucha et al., 2021). This approach shifts the modeling focus from standardized system artifacts to participant narratives, community norms, and the multidimensional space of harm (technical, emotional, relational, societal) (Usman et al., 16 Nov 2025).
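This risk formulation can be sketched as a simple scoring function. The factor names and the equal-weight aggregation below are illustrative assumptions, not parameterizations from the cited works:

```python
from dataclasses import dataclass

@dataclass
class VulnerabilityProfile:
    """Contextual human-vulnerability factors, each scored in [0, 1]."""
    personality_susceptibility: float  # e.g., impulsivity, high trust
    knowledge_gap: float               # missing security awareness
    emotional_state: float             # stress, urgency, fatigue
    sociotechnical_exposure: float     # role, platform, community context

    def score(self) -> float:
        # Simple mean aggregation; real frameworks would weight factors
        # from lived-experience data rather than fixed coefficients.
        factors = (self.personality_susceptibility, self.knowledge_gap,
                   self.emotional_state, self.sociotechnical_exposure)
        return sum(factors) / len(factors)

def human_centered_risk(threat: float, profile: VulnerabilityProfile,
                        impact: float) -> float:
    """Risk = Threat x Vulnerability x Impact, all inputs in [0, 1]."""
    return threat * profile.score() * impact

profile = VulnerabilityProfile(0.6, 0.8, 0.4, 0.7)
print(round(human_centered_risk(0.5, profile, 0.9), 3))  # → 0.281
```

The point of the sketch is that the vulnerability term is a composite of situated human factors rather than a property of the system artifact.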
Contrasts with traditional systems-based security are stark. Conventional models (e.g., STRIDE, DREAD) generalize risks by classifying technical faults. Human-centered approaches instead prioritize the heterogeneity of threat perception, acknowledging that marginalized or at-risk groups may face intersectional harms unaddressed by technical taxonomies (Usman et al., 16 Nov 2025). Quantitative risk formulas remain relevant but are enriched by qualitative dimensions and ongoing participant involvement.
2. Methodological Foundations and Assessment Frameworks
Human-centered security risk frameworks employ multi-phase, iterative processes:
- Groundwork: Building trust and understanding contexts through deep engagement (e.g., community workshops, domain expert partnerships, reverse engineering of technologies used by target communities). This foundational work is continuous and precedes any formal threat assessment (Usman et al., 16 Nov 2025).
- Threat Elicitation: Employing behavior-focused interviews or prompts (e.g., “Describe a time you felt threatened using X”) instead of abstract concepts, and synthesizing results with structured taxonomies where needed (Usman et al., 16 Nov 2025).
- Integrated Quantitative-Qualitative Risk Scoring: While classical formulas such as

$$\text{Risk} = \text{Threat} \times \text{Vulnerability} \times \text{Impact}$$

are still used, parameterization is derived from lived experiences, empirical observations, and community data (Usman et al., 16 Nov 2025).
Specific instruments from personality psychology, decision theory, and social engineering research are leveraged for individual-level risk scoring:
- Big Five/FFM vectors
- Dark Triad/Dirty Dozen scales
- Protection Motivation Theory (PMT)
- Heuristic-Systematic Model (HSM)

These measurement models enable both one-time and continuous human vulnerability assessment, with dynamic update equations accommodating new training or emergent threats (Papatsaroucha et al., 2021).
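The continuous-assessment idea can be illustrated with an exponentially weighted update. The cited work's actual update equations are not reproduced here; the smoothing form and the α value below are assumptions for illustration:

```python
def update_vulnerability(current: float, observed: float,
                         alpha: float = 0.3) -> float:
    """Exponentially weighted update: a fresh observation (a phishing
    simulation result, a completed training module) shifts the running
    vulnerability score without discarding history."""
    assert 0.0 <= current <= 1.0 and 0.0 <= observed <= 1.0
    return (1 - alpha) * current + alpha * observed

# An individual starts at high vulnerability; successive post-training
# assessments pull the score downward.
v = 0.8
for obs in (0.4, 0.3, 0.5):
    v = update_vulnerability(v, obs)
print(round(v, 3))  # → 0.546
```

Dynamic scoring of this shape lets a vulnerability estimate respond to new training or emergent threats while damping single-observation noise.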
3. Guiding Values and Ethical Principles
Human-centered threat modeling is intrinsically value-driven. Key principles include:
- Care: Research, interventions, and system designs must benefit participants as much as (or more than) those conducting the work, operationalized through participant co-design and tangible value production.
- Justice: Structural inequities are surfaced, and disproportionate risks to marginalized groups (e.g., refugees, LGBTQ+, sex workers) are explicitly examined and prioritized.
- Autonomy: Agency and self-determination of participants shape what data is shared, how threats are recognized, and what mitigation strategies are feasible.
- Reflexivity and Humility: Researchers acknowledge outsider status, avoid exoticizing participants, and adapt methods as researcher and participant values co-evolve.

Value-led priorities define threat identification, assignment of urgency, and the mitigation strategies pursued, often eschewing “one-size-fits-all” technical solutions in favor of co-developed interventions (Usman et al., 16 Nov 2025).
4. Structural, Methodological, and Ethical Challenges
Human-centered security risk research and practice face distinctive obstacles:
- Emotional/Psychological Strain: High incident exposure can induce vicarious trauma among researchers and practitioners (e.g., work on technology-facilitated abuse), resulting in burnout and project attrition (Usman et al., 16 Nov 2025).
- Methodological Dilemmas: Reliance on self-reporting yields issues of recall and social desirability bias; threat unawareness among participants may obscure real risks; and ethical ambiguities complicate intervention (e.g., should researchers proactively disclose new vulnerabilities to users?) (Usman et al., 16 Nov 2025, Papatsaroucha et al., 2021).
- Structural Barriers to Impact: Academia rewards novel publications rather than maintenance or real-world tool deployment, and short funding cycles prevent iterative system refinement or community collaboration (Usman et al., 16 Nov 2025).
- Evaluation Constraints: Peer-review expectations for large datasets and positivist metrics often marginalize the rich, context-sensitive insights central to HCSR (Usman et al., 16 Nov 2025).
- Practitioner-Researcher Gap: Translating nuanced, participant-centered threat models into actionable recommendations for platform designers, policymakers, and end-users remains a persistent bottleneck (Usman et al., 16 Nov 2025).
5. Applications: Metrics, Models, and Case Domains
Human-centered security risk models have been operationalized across diverse contexts:
- Humanoid Robotics Security: A seven-layer risk model scores 39 attack vectors and 35 defenses, with explicit mapping of cross-layer impacts on privacy and safety. Risk-weighted scoring and Monte Carlo simulation expose how deficits in application-layer controls or real-time user consent can lead to high-probability harms (e.g., eavesdropping, social engineering at the social-interface layer) (Surve et al., 24 Aug 2025).
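The Monte Carlo style of analysis described above can be sketched as follows. The layer names and per-layer compromise probabilities are illustrative assumptions, not the parameterization of the cited seven-layer model:

```python
import random

# Assumed per-layer compromise probabilities for one deployment period.
LAYERS = {
    "hardware": 0.02, "firmware": 0.03, "os": 0.05, "network": 0.08,
    "application": 0.15, "data": 0.06, "social_interface": 0.20,
}

def simulate_harm_probability(trials: int = 100_000, seed: int = 42) -> float:
    """Estimate P(at least one layer is compromised) by sampling each
    layer independently per trial; weak application-layer or
    social-interface controls dominate the aggregate risk."""
    rng = random.Random(seed)
    harmed = sum(
        any(rng.random() < p for p in LAYERS.values())
        for _ in range(trials)
    )
    return harmed / trials

print(round(simulate_harm_probability(), 3))
```

With these assumed inputs the estimate lands near the analytic value 1 − ∏(1 − pᵢ) ≈ 0.47, showing how modest per-layer gaps compound into a high-probability harm.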
- GUI Agent Risk Assessment: Human-centered risk metrics are embedded in evaluation workflows that prioritize in-context user consent, structured privacy prompts, and systematic tracking of usability-to-risk trade-offs (Chen et al., 24 Apr 2025).
- Open Data Disclosure: Red-teaming combined with visual analytic workflows surfaces persistent risks from re-identification and composable quasi-identifier joins, pushing for continuous defender-in-the-loop risk calibration (Bhattacharjee et al., 2023).
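A minimal version of the quasi-identifier (QID) audit behind such workflows is a k-anonymity check over equivalence classes; the records and QID choice below are illustrative, not drawn from the cited dataset:

```python
from collections import Counter

# Toy released dataset; zip + age band + sex act as quasi-identifiers
# that an attacker could join against external sources.
records = [
    {"zip": "10001", "age_band": "30-39", "sex": "F", "diagnosis": "A"},
    {"zip": "10001", "age_band": "30-39", "sex": "F", "diagnosis": "B"},
    {"zip": "10002", "age_band": "40-49", "sex": "M", "diagnosis": "C"},
]
QIDS = ("zip", "age_band", "sex")

def reidentification_risk(rows, qids):
    """Group rows into QID equivalence classes; classes of size 1 are
    uniquely re-identifiable via an external join on those attributes."""
    classes = Counter(tuple(r[q] for q in qids) for r in rows)
    unique = sum(1 for n in classes.values() if n == 1)
    return {"min_k": min(classes.values()), "unique_records": unique}

print(reidentification_risk(records, QIDS))
# → {'min_k': 1, 'unique_records': 1}
```

A defender-in-the-loop workflow would rerun this audit whenever new composable datasets appear, since joins can shrink equivalence classes after release.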
- Biometric Systems: Integration of attacker motivation (via conjoint analysis and conditional logits) into classic risk formulas enables deployment configurations to be compared by C_identify, reflecting real-world trade-offs between technical detection thresholds and psychological deterrence (Ohki et al., 2024).
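Folding attacker motivation into the risk comparison can be sketched with a binary-logit choice model. The utility values and detection rates below are assumptions standing in for fitted conditional-logit coefficients; this is not the cited paper's C_identify computation:

```python
import math

def attack_probability(utility_attack: float, utility_refrain: float) -> float:
    """Binary-logit share: probability an attacker chooses to attack,
    given the (fitted) utilities of attacking vs. refraining."""
    ea, er = math.exp(utility_attack), math.exp(utility_refrain)
    return ea / (ea + er)

def expected_loss(p_attack: float, p_detect: float, impact: float) -> float:
    """Expected loss = P(attack) x P(evading detection) x impact."""
    return p_attack * (1 - p_detect) * impact

# Compare two deployment configurations: the "strict" one both detects
# more attacks and, via visible deterrence, lowers the attack utility.
for name, u_attack, p_detect in [("lenient", 0.5, 0.70), ("strict", -0.5, 0.90)]:
    p = attack_probability(u_attack, 0.0)
    print(name, round(expected_loss(p, p_detect, 100.0), 2))
# → lenient 18.67
# → strict 3.78
```

The comparison shows why psychological deterrence and technical detection thresholds trade off: a configuration can win on expected loss even if neither factor alone dominates.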
6. Recommendations and Paths Forward
To advance the effectiveness and translation of human-centered security risk research into practice:
- Shared Infrastructure: Creation of long-lived toolkits, collaborative platforms, and persistent knowledge bases for academic and practitioner use (Usman et al., 16 Nov 2025).
- Recognition Mechanisms: Institutional support for incremental, community-focused, and “failure-first” contributions, including specialized tracks for action-oriented HCTM (Usman et al., 16 Nov 2025).
- Translation Strategies: Development of actionable, accessible outputs (briefs, wireframes, slide decks) and engagement with stakeholder organizations to facilitate operational uptake (Usman et al., 16 Nov 2025).
- Policymaking Integration: Formulation of theoretically grounded models that articulate human-centered risks in language legible to regulators, enabling legislation aligned with lived realities (Usman et al., 16 Nov 2025).
- Community Partnerships: Sustained collaboration with NGOs, service clinics, community educators, and journalists to extend reach, validate interventions, and amplify under-recognized threats (Usman et al., 16 Nov 2025).
- Pluralism in Methods: Institutionalization of methodological diversity that values qualitative and participatory inquiry alongside quantitative measurement (Usman et al., 16 Nov 2025).
| Domain | Human-Centered Risk Focus | Intervention Principle |
|---|---|---|
| Humanoid Robotics | Trust/safety in human-robot interface | Layered defense, user confirmation |
| LLM GUI Agents | Privacy in agent-mediated interaction | In-context prompts, privacy-by-design |
| Open Data Disclosure | Re-identification via QIDs | Continuous risk monitoring |
| Vulnerable Populations | Cultural, cognitive inclusivity | Participatory/value-sensitive design |
A human-centered security risk paradigm redefines risk management in sociotechnical systems by emphasizing iterative engagement, ethical reflexivity, and actionable collaboration. The transition from artifact-centric models to deeply situated, value-driven assessment enables safer, more equitable, and more effective security praxis across technical, organizational, and policy layers (Usman et al., 16 Nov 2025).