SecurityBrief Canada - Technology news for CISOs & cybersecurity decision-makers

AI’s 2026 security fallout: identity chaos & deepfake fear

Wed, 14th Jan 2026

Grayson Milbourne, Security Intelligence Director, OpenText Cybersecurity

  • Relaxed Agentic AI Access will Trigger the Next Identity Crisis: Experts predict agentic identities will outnumber human ones by 100 to 1, each operating independently, making decisions, and often accessing critical data to work efficiently. Most organizations aren't ready for this level of identity sprawl. In the 2026 rush to deploy AI agents, many will over-permission agents or skip proper guardrails altogether. This will lead to a new wave of breaches where AI is tricked into sharing data, performing unauthorized tasks, or opening doors for attackers. Cybersecurity starts with identity. Those who fail to modernize their IAM strategies for agentic AI and shortcut access permissions will face rising security risk and operational chaos.
     
  • In-person communication will rise in popularity as cybersecurity's last line of defense: Deepfakes have reached a new level of sophistication. With synthetic voices and video now indistinguishable from the real thing, attackers can impersonate anyone in real time. In response, organizations will reintroduce traditional trust-building tactics. Executives will meet in person for high-stakes decisions, "safe words" will return as verification tools, and face-to-face will regain its value. In a world where we can no longer trust what we see and hear online, physical presence will become a new pillar of security strategy.
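The deny-by-default access model the first prediction calls for can be sketched in a few lines. This is a minimal illustration, not a real IAM product; every identifier (agent names, scope strings, the `AgentPolicy` class) is hypothetical:

```python
# Minimal deny-by-default scope check for agentic identities.
# All names here are illustrative, not a real IAM API.

class AgentPolicy:
    def __init__(self):
        # agent_id -> set of explicitly granted scopes
        self._scopes = {}

    def grant(self, agent_id, scope):
        self._scopes.setdefault(agent_id, set()).add(scope)

    def is_allowed(self, agent_id, scope):
        # Deny by default: unknown agents and ungranted scopes fail.
        return scope in self._scopes.get(agent_id, set())

policy = AgentPolicy()
policy.grant("invoice-bot", "read:invoices")

print(policy.is_allowed("invoice-bot", "read:invoices"))    # True
print(policy.is_allowed("invoice-bot", "read:hr-records"))  # False
print(policy.is_allowed("unknown-agent", "read:invoices"))  # False
```

The point of the sketch is the default: an agent that was never explicitly granted a scope gets nothing, which is the opposite of the over-permissioning pattern the prediction warns about.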

Tyler Moffitt, Senior Security Analyst, OpenText Cybersecurity

  • The true cost of AI for SMBs will be decision overload: In 2026, the greatest impact of AI on small and mid-sized businesses will go beyond attack sophistication to the volume of security decisions they're suddenly forced to make. Scammers will use generative AI to craft realistic messages, calls, and videos that feel personal and urgent – from deepfakes and voice cloning to fake invoices that mirror trusted communications almost perfectly. 

    The biggest threat won't be a single breach, but the risk of IT teams – often already stretched thin – being drained by the constant influx of alerts and gray-area threats that demand attention. As both noise and legitimate risks increase, SMBs will need to lean on automation and identity-driven controls to separate malicious activity from real communication, turning AI from an attack method into a defender's equalizer.
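One way to picture the "identity-driven controls" idea above is a triage rule that checks sender identity before anything else, so stretched IT teams only review the genuine gray area. This is a toy heuristic under assumed names (the `VERIFIED_SENDERS` set and the `triage` function are hypothetical), not a production filter:

```python
# Illustrative identity-first triage: messages from unverified senders
# that press for urgent action get quarantined; everything else is
# either auto-allowed or queued for human review.

VERIFIED_SENDERS = {
    "billing@trusted-vendor.example",
    "ceo@company.example",
}

def triage(message):
    """Return 'auto-allow', 'review', or 'quarantine' for a message dict."""
    sender = message["sender"]
    urgent = message.get("urgent", False)
    if sender in VERIFIED_SENDERS:
        # Even verified senders get a human look when urgency is claimed,
        # since accounts can be compromised.
        return "review" if urgent else "auto-allow"
    # Unverified sender + manufactured urgency is the classic lure shape.
    return "quarantine" if urgent else "review"

print(triage({"sender": "billing@trusted-vendor.example"}))            # auto-allow
print(triage({"sender": "attacker@evil.example", "urgent": True}))     # quarantine
```

The design choice to highlight: urgency alone never earns trust. Identity is checked first, which is exactly the inversion that AI-generated lures exploit when humans triage by tone instead.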

Maria Pospelova, Principal Data Scientist, Senior Manager of AI & Data Science, OpenText Cybersecurity

  • Human trust in GenAI prompts will become a leading cause of data leaks: People increasingly treat AI assistants like trusted collaborators, which encourages them to share sensitive information without thinking. At the same time, shadow AI usage already exposes organizations to unmanaged risk, as employees rely on unapproved tools that operate outside security controls. Even more concerning, the prompts shared between GenAI systems or AI agents are rarely monitored, despite often containing proprietary or confidential information. In 2026, we will need to start treating prompts as data transfers rather than harmless text inputs to prevent an accelerating wave of AI-driven data leaks.
     
  • AI will fuel a new era of collaboration between cyber defenders: We used to say data is the new oil; in 2026, it will become clear that insights are the new fuel. AI, particularly agentic AI, will allow experts to replicate and share their skills and insights wherever they are needed, extending their reach across teams, organizations, and even industries. This transformation will be driven both by technology and by necessity, because our adversaries already collaborate and exchange tools and tactics freely. In 2026, defenders will begin to match that level of cooperation, using AI to connect experts, accelerate intelligence sharing, and build a stronger, collective defense ecosystem.
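Treating prompts as data transfers, as the first prediction above urges, implies scanning them the way outbound files are scanned. A minimal sketch follows; the rule names and regex patterns are toy examples of the author's idea, not a complete DLP rule set:

```python
import re

# Illustrative "prompts as data transfers" check: scan an outbound
# prompt for sensitive-looking content before it leaves the org.
# Patterns are deliberately simple examples, not production rules.

SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_doc": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def scan_prompt(prompt):
    """Return the names of every rule the prompt triggers."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

hits = scan_prompt(
    "Summarize this CONFIDENTIAL memo, key sk-abcdef1234567890XYZ"
)
print(hits)  # ['api_key', 'internal_doc']
```

A hook like this could sit in front of any GenAI gateway: prompts that trigger rules get blocked or logged, which is what monitoring agent-to-agent prompt traffic as a data flow, rather than harmless text, would look like in practice.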

Mike DePalma, VP of Business Development at OpenText Cybersecurity

  • AI Security Tools Will Outpace SMBs' Ability to Use Them
    AI-powered security products will be pushed hard in 2026. Many will promise plug-and-play protection, but the reality will look different. Most SMBs won't have the staff or experience to manage them well. Some will misconfigure tools. Others will skip monitoring. A few will deploy systems they don't fully understand. That gap between having protection and actually using it properly will grow wider. Attackers will take advantage. The result will be more missed alerts, failed defenses, and damage that should have been avoidable.
     
  • Voice Cloning Attacks Will Break Trust in Everyday Communication
    Phone scams used to be easy to spot. That's changing fast. With cheap tools and short samples, attackers can now mimic anyone's voice - bosses, partners, family members. They sound real. They call from familiar numbers. They create urgency. And they get people to act. These attacks are working because people trust what they hear. In 2026, that trust will break down. More SMBs will fall for scams that feel personal. The only way to keep up is to train people differently. Tech helps, but awareness is the only thing that stops someone from wiring money to a fake voice.