
Generative AI fuels alarming surge in intimate image abuse

Mon, 2nd Mar 2026

The rapid proliferation, affordability and accessibility of generative AI systems have driven the growth of intimate image abuse (IIA). Multi-modal foundation models have lowered the barriers to creating realistic, harmful images. Open-source models present especially concerning risks: they can be downloaded from the internet, run on local hardware, and fine-tuned specifically for creating IIA, circumventing the safeguards that may exist in commercial models. One security company estimates that explicit deepfakes increased fourfold between 2022 and 2023, and a global survey found that 7% of respondents were survivors of IIA, with a further 39% knowing someone who was.

Humane Intelligence, in partnership with the UK Foreign, Commonwealth & Development Office and the UK Department for Science, Innovation and Technology, and with the support of the Global Partnership for Action on Gender-Based Online Harassment and Abuse, examined how IIA is manifesting globally, surveyed the landscape of socio-technical challenges, and assessed the role that groups such as regulators, civil society and platforms can play in addressing this pernicious and growing issue. The full report is viewable here and summarised below.

Analyzing four key actors 

Generative AI companies are a major stakeholder in this space. Open-source models can be downloaded and used with few or no guardrails, and insufficient safeguards on generative AI tools have dramatically lowered the barriers to creating harmful content. Reporting options are limited, and there is a lack of standards across platforms.

Social media platforms also play a critical role. In one survey, over half of the victims-survivors who submitted reports to platforms never received any update, and 50% of those who did receive responses were told that their intimate images did not violate policies. Platform response times vary from hours to weeks, and these systems typically force victims-survivors to report the same image multiple times across different platforms.

Third-party tools and non-governmental organisations are also major stakeholders. One non-governmental organisation in Colombia reported receiving over 2,000 requests for support from survivors of intimate image abuse, double the volume handled by state resources (1,000), while operating with limited staff and funding. Such organisations play a crucial role in filling the gaps left by social media platforms, yet face significant capacity challenges despite being critical support mechanisms.

Finally, regulators are the fourth key actor. Recent mapping identified that 110 countries have no relevant legislation, 67 countries have sufficient laws, and 18 countries have "insufficient" laws on this issue. Further, the cross-border nature of intimate image abuse creates significant jurisdictional challenges for enforcement.

Findings from in-country workshops in Nigeria and Colombia 

We hosted workshops with 40 participants representing the diverse actors involved in responding to intimate image abuse. We adopted a multi-stakeholder approach, engaging policymakers, law enforcement, psychosocial support professionals, digital safety experts, civil society, NGOs, and regulatory entities. Our key findings are:

  1. Pitfalls of technology-based support: Workshop attendees highlighted that victims-survivors frequently express discomfort with fully automated systems, citing the absence of human connection during moments of crisis and vulnerability. Technological solutions also often presume consistent internet access and digital literacy, marginalising rural users.
  2. Awareness gaps: Victims-survivors often do not recognise their experiences as abuse due to a normalisation of online violence. Further, many law enforcement agencies, particularly in rural areas, lack the technical capacity to take on these cases. 
  3. Evidence collection challenges: Shared devices create significant privacy and safety risks when seeking help, as perpetrators may have access to the same device. Ephemeral messaging features and auto-deletion of content also create significant barriers to preserving evidence.
  4. Reporting effectiveness: Platform response times for intimate image abuse reports are frequently excessive, and many victims-survivors never receive a response. Organisations reported particular difficulty escalating cases to messaging platforms, which lack effective reporting channels.
  5. Legislative gaps: Nigeria lacks comprehensive legislation specifically criminalising non-consensual sharing of intimate images. Colombia's existing law lacks specific provisions for intimate image abuse. 

We conclude with four key recommendations: 

  1. Prevention: Platforms should implement safety-by-design principles and proactive safety measures.
  2. Education: Multi-stakeholder awareness campaigns should target diverse audiences.
  3. Standardisation: Industry-wide standards are needed for responding to reports of intimate image abuse and removing the content.
  4. Survivor-centric design: Survivors should be put at the forefront, with accessible reporting and safety-by-default workflows.

Calls to action include: 

  • Technology works best when it functions as an initial access point to guide victims-survivors toward appropriate human-centred services rather than as a standalone solution.
  • Global coordination is essential for addressing cross-border crimes. 
  • Substantial investment is needed to expand non-governmental organisation capacity and effectiveness.