SecurityBrief Canada - Technology news for CISOs & cybersecurity decision-makers

AI & cyber warfare set to reshape security threats by 2026

Mon, 1st Dec 2025

Global cybersecurity consultancy CyXcel has outlined its predictions for the security landscape in 2026, warning of the increasing weaponisation of cyberspace in state conflicts, as well as potential shifts caused by artificial intelligence in the development of malware accessible to a broader range of attackers.

Cyber conflict

According to Dr Megha Kumar, Chief Product Officer and Head of Geopolitical Risk at CyXcel, the role of digital threats in warfare has fundamentally changed since the conflict in Ukraine. Cyber tactics, she said, now extend well beyond traditional uses of information disruption.

"Before 2022, the role of cyber in full-scale warfare was mostly academic, with few real-world examples from interstate conflict. Russia's assault on Ukraine altered that landscape, demonstrating that cyberspace has become an active battleground in its own right - one that now reaches far beyond the traditional informational element of warfare," said Kumar.

She noted that the application of cyber in warfare will remain situation-specific but is set to become more established as a tool for achieving government objectives covertly. Activities already observed include espionage, manipulation of information, targeted drone activity, and attacks on infrastructure.

She highlighted the potential for further evolution as technology develops, particularly with advances in artificial intelligence and quantum computing.

"The future impact of cyber on warfare will vary by conflict and opponent. However, in 2026, we will continue to see cyber tools being used to help nation-states pursue strategic aims without firing a shot - allowing plausible deniability, avoiding escalation and enabling activities such as espionage, information operations, drone intrusions and targeted disruption of infrastructure. Yet as global tensions rise and technologies like AI and quantum computing mature, we are likely to see states exploring and successfully discovering new uses for cyber weaponry - likely in novel and unforeseen ways," said Kumar.

AI-driven threats

Jim Salter, Senior Management Consultant at CyXcel, pointed to the evolving code-generation capabilities of artificial intelligence as a likely catalyst for future security threats. While AI is currently unreliable for creating complex cyber threats, rapid growth in training data and model sophistication is likely to lower the technical barrier to creating damaging software.

Salter suggested that this progress could significantly change how security teams approach risk, with the possibility that those without traditional hacking skills will be able to produce disruptive malware using AI tools by 2026.

"The use of AI to generate code is still quite inconsistent, and while current systems can handle basic tasks, they're far from capable of producing complex malware. However, as we move into 2026 and training data grows and AI code generation becomes more sophisticated, less-skilled threat actors will almost certainly gain the ability to generate more dangerous malware," said Salter.

Salter emphasised that the emergence of accessible AI-driven malware could broaden the threat landscape, with insiders becoming an increasing concern. He outlined scenarios where employees or contractors with system access, even those lacking specialist knowledge, could inflict substantial damage if they gain access to advanced AI malware tools.

"And if AI tools make it possible for individuals with very little technical background to generate highly disruptive malware, the security landscape could change dramatically. Traditionally, organisations have focused their defences on external threat actors, for example, cybercriminal groups, state-sponsored hackers and others with the skills to mount complex attacks. However, if powerful malware becomes accessible to anyone who can write a prompt, the barrier to entry collapses.

"In that scenario, insiders, such as employees, contractors or partners who already have legitimate access to systems, become a far greater concern. They may not need specialised knowledge or external support to cause serious damage. A disgruntled employee, someone under financial pressure or even an insider manipulated through social engineering could leverage AI-generated malware to sabotage operations, steal data or cripple critical infrastructure from within," said Salter.
