
LLMs in the SOC: An Empirical Study of Human-AI Collaboration in Security Operations Centres

Published 26 Aug 2025 in cs.CR (arXiv:2508.18947v2)

Abstract: The integration of LLMs into Security Operations Centres (SOCs) presents a transformative, yet still evolving, opportunity to reduce analyst workload through human-AI collaboration. However, their real-world application in SOCs remains underexplored. To address this gap, we present a longitudinal study of 3,090 analyst queries from 45 SOC analysts over 10 months. Our analysis reveals that analysts use LLMs as on-demand aids for sensemaking and context-building, rather than for making high-stakes determinations, preserving analyst decision authority. The majority of queries are related to interpreting low-level telemetry (e.g., commands) and refining technical communication through short (1-3 turn) interactions. Notably, 93% of queries align with established cybersecurity competencies (NICE Framework), underscoring the relevance of LLM use for SOC-related tasks. Despite variations in tasks and engagement, usage trends indicate a shift from occasional exploration to routine integration, with growing adoption and sustained use among a subset of analysts. We find that LLMs function as flexible, on-demand cognitive aids that augment, rather than replace, SOC expertise. Our study provides actionable guidance for designing context-aware, human-centred AI assistance in security operations, highlighting the need for further in-the-wild research on real-world analyst-LLM collaboration, challenges, and impacts.

Summary

  • The paper reveals that 93% of analyst queries align with cybersecurity competencies, emphasizing LLMs’ role as cognitive aids.
  • The paper employs a five-phase thematic analysis of 3,090 queries over 10 months to uncover real-world analyst-LLM interactions.
  • The paper advocates for embedding LLM functionalities in SOC dashboards to reduce cognitive load while preserving human decision-making.

Human-AI Collaboration in Security Operations Centres: Empirical Insights

Introduction

In the study "LLMs in the SOC: An Empirical Study of Human-AI Collaboration in Security Operations Centres," researchers investigate the integration of LLMs into Security Operations Centres (SOCs). The study analyzes 3,090 queries from 45 analysts over a 10-month period to understand how LLMs function as cognitive aids rather than decision-makers. The research shows that LLM usage aligns closely with established cybersecurity competencies and offers insights into how LLMs fit into SOC workflows (Figure 1).

Figure 1: SOC workflow, focus of study and our insights.

Methodology and Data Collection

The study employed a five-phase approach to analyze SOC analysts' interactions with LLMs: familiarization with the data, qualitative coding of queries, theme identification, theme review, and reporting. Analysts submitted queries to GPT-4 during live investigations over 10 months, revealing task priorities and engagement patterns. Thematic analysis of these interactions uncovered rich insights into the role of LLMs as on-demand aids that augment situational awareness and streamline operational tasks (Figure 2).

Figure 2: The five-phased approach to analyze and understand SOC analysts' interactions with LLMs.
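As a rough illustration of the coding step, the sketch below assigns queries to task themes using keyword heuristics. The theme names echo those discussed in the paper, but the keyword lists and function names are illustrative assumptions, not the authors' actual codebook (which was developed through manual thematic analysis):

```python
from collections import Counter

# Hypothetical keyword heuristics standing in for the paper's manual coding.
# Theme names mirror the task themes discussed (command interpretation,
# text refinement, code debugging); the keywords are illustrative only.
THEME_KEYWORDS = {
    "command_interpretation": ["powershell", "cmd", "whoami", "regsvr32"],
    "text_refinement": ["rewrite", "summarise", "rephrase", "email"],
    "code_debugging": ["error", "traceback", "exception", "fix this"],
}

def code_query(query: str) -> str:
    """Assign a query to the first theme whose keywords it mentions."""
    q = query.lower()
    for theme, keywords in THEME_KEYWORDS.items():
        if any(k in q for k in keywords):
            return theme
    return "other"

def theme_counts(queries: list[str]) -> Counter:
    """Aggregate coded themes across a query log."""
    return Counter(code_query(q) for q in queries)
```

In practice, qualitative coding like this is done by human researchers and then reviewed across phases; a keyword classifier only approximates the outcome at scale.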

Key Findings

Usage Patterns and Task Engagement

The study highlights that 93% of analyst queries align with established cybersecurity competencies (the NICE Framework), underscoring the practical relevance of LLMs for SOC-related tasks. Usage trends show a transition from occasional exploration to routine integration, dominated by tasks such as command interpretation and refinement of technical communication. Analysts predominantly engaged LLMs for functional understanding, text processing, and command analysis, reflecting diverse cognitive needs and the LLM's role as an interpretive and drafting assistant (Figure 3).

Figure 3: (Left) Query volumes vary across analysts, with heavy concentration among a few users. (Right) Overall, there is growing integration into workflows, but mostly driven by a subset of analysts (March 2024 has only 7 days of data).
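The aggregation behind Figure 3 can be sketched as counting queries per analyst and per month. The record layout below, `(analyst_id, query_date)` tuples, is an assumption for illustration, not the paper's actual data schema:

```python
from collections import Counter
from datetime import date

def usage_summary(records: list[tuple[str, date]]):
    """Return per-analyst query volumes and a monthly usage series."""
    per_analyst = Counter(a for a, _ in records)
    per_month = Counter(d.strftime("%Y-%m") for _, d in records)
    return per_analyst, per_month

# Tiny synthetic log (March 2024 is a partial month in the real dataset).
records = [
    ("analyst_01", date(2024, 3, 28)),
    ("analyst_01", date(2024, 4, 2)),
    ("analyst_02", date(2024, 4, 15)),
]
per_analyst, per_month = usage_summary(records)
print(per_analyst.most_common(1))  # heaviest user first
print(sorted(per_month.items()))   # adoption trend by month
```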

Analyst-Level Analysis

Engagement with LLMs varied widely: some analysts showed intense usage, indicating deep integration into their workflows, particularly for command and text interpretation. Analysts adapted LLMs to specific needs ranging from code debugging to document refinement, and the models' flexibility let them handle diverse task demands efficiently without extensive prompt engineering (Figure 4).

Figure 4: Number of queries per analyst (ordered by activity) and task theme (ordered by frequency). Analyst clusters are color-coded: red for most active, green for moderate, and blue for low-usage analysts.
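The colour-coded tiers in Figure 4 amount to bucketing analysts by query volume. The cut-off values below are hypothetical, since the paper does not publish exact thresholds:

```python
def tier_analysts(counts: dict[str, int],
                  hi: int = 100, mid: int = 20) -> dict[str, str]:
    """Bucket analysts into activity tiers like Figure 4's colour coding.

    The hi/mid thresholds are illustrative assumptions, not values
    reported in the paper.
    """
    tiers = {}
    for analyst, n in counts.items():
        if n >= hi:
            tiers[analyst] = "red"    # most active
        elif n >= mid:
            tiers[analyst] = "green"  # moderate
        else:
            tiers[analyst] = "blue"   # low usage
    return tiers
```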

Implications for SOC Design and AI Collaboration

The study advocates for embedded LLM functionalities within SOC dashboards to facilitate seamless micro-task integration, thus reducing cognitive load and preserving analyst decision authority. By surfacing evidence rather than recommendations, LLMs can align better with analysts' preference for maintaining final judgment, enhancing trust and interpretive efficiency. Future SOC systems should account for these preferences to optimize human-AI collaboration.
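One way to realise "surfacing evidence rather than recommendations" in a dashboard is to constrain the prompt sent to the model. The helper below is a hypothetical sketch of that pattern, not the authors' implementation; in a real deployment the returned string would be sent to an LLM API:

```python
def build_evidence_prompt(telemetry: str) -> str:
    """Build a prompt that asks for verifiable evidence, not a verdict.

    Hypothetical sketch of the evidence-surfacing pattern the study
    recommends; the exact wording is an assumption.
    """
    return (
        "You are assisting a SOC analyst. Explain what the following "
        "telemetry does and list observable evidence the analyst could "
        "verify. Do NOT issue a verdict or recommend an action.\n\n"
        f"Telemetry:\n{telemetry}"
    )

prompt = build_evidence_prompt("powershell -enc <base64 payload>")
```

Keeping the verdict out of the model's remit leaves the final judgment with the analyst, which the study finds analysts strongly prefer.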

Conclusion

The study concludes that LLMs act as flexible, on-demand cognitive aids in SOCs, augmenting rather than replacing analyst expertise. By integrating LLMs for interpretive and communicative tasks, SOCs can improve operational efficiency and situational awareness. The findings provide empirical guidance for designing context-aware, human-centred AI assistance and advance understanding of real-world analyst-LLM dynamics in security environments.
