
AI helps cybersecurity analysts focus on complex threats, study shows

Fri, 29th Aug 2025

A study conducted over ten months has provided new insights into how artificial intelligence, specifically large language models, can support cybersecurity analysts in their daily work.

Researchers from CSIRO, Australia's national science agency, analysed a long-term trial involving 45 cybersecurity professionals at eSentire's Security Operations Centres in Ireland and Canada. During the study period, the analysts put more than 3,000 questions to ChatGPT-4, chiefly relating to routine, lower-risk tasks such as interpreting technical alerts, editing text and analysing malware code.

The collected data was anonymised, with the aim of documenting the impact of introducing large language models as tools in a live security operations centre environment.

Human-AI collaboration

Dr Mohan Baruwal Chhetri, Principal Research Scientist at CSIRO's Data61, stated that the study demonstrates how artificial intelligence tools can integrate into analysts' regular workflows to augment, rather than supplant, the judgement and expertise of human security professionals.

"ChatGPT-4 supported analysts with tasks like interpreting alerts, polishing reports, or analysing code, while leaving judgement calls to the human expert," Dr Baruwal Chhetri said.

He observed that this mode of collaboration between human and AI adapts to individual users' requirements, which in turn fosters greater trust in the technology and allows analysts to focus on more complex tasks.

"This collaborative approach adapts to the user's needs, builds trust, and frees up time for higher-value tasks."

The trial formed part of CSIRO's Collaborative Intelligence (CINTEL) programme, which researches human-AI partnerships across different sectors, with a particular focus on improving both organisational performance and the wellbeing of workers in high-stress environments such as cybersecurity, where analyst fatigue remains a significant concern.

Impact in the security operations centre

Security operations centre teams typically contend with increasing volumes of system alerts, a considerable number of which can prove to be false positives. This challenge risks missed or delayed responses to genuine threats and can contribute to reduced productivity and burnout among experienced professionals.

Dr Baruwal Chhetri suggested that the benefits of human-AI collaboration could extend to other sectors that experience similar pressures, such as emergency response services and healthcare, where effective decision support can ease workloads and support staff wellbeing.

Use patterns and autonomy

Dr Martin Lochner, Data Scientist and Research Coordinator, explained that this investigation is the first study of its length and industrial scale to show how large language models are used in actual operational environments by security professionals. He indicated that the approach taken combined insights from both academic research and industry practice.

"This collaboration uniquely combined academic rigor with industry reality, producing insights that neither pure laboratory studies nor industry-only analysis could achieve," Mr Lochner said.

One of the most notable findings concerned the kinds of requests analysts made of ChatGPT-4. According to the study, only four per cent of all prompts asked for a direct binary judgement, such as whether something was malicious. Most requests instead sought factual information, supporting evidence or contextual data to help analysts reach their own independent conclusions.

"For instance, we found that only four per cent of analyst requests to ChatGPT-4 asked for a direct answer, such as 'is this malicious?'. Instead, analysts preferred receiving evidence and context to support their own decision making.
"This highlights the value of LLMs as decision-support tools that enhance analyst autonomy rather than replace it."

Future research directions

The research team proposes to build on the initial ten months of observations with a further two-year investigation. The next phase is designed to examine how analysts' use of large language models such as ChatGPT-4 changes over an extended period. Researchers intend to gather qualitative feedback from the analysts themselves and compare it with system log data, to better understand how AI tools can improve productivity and how usage patterns evolve in operational environments.

The results from this longer-term study are expected to guide improvements in AI tool design for security teams and inform broader adoption strategies within cybersecurity and other high-intensity domains.
