Securing AI

January 13, 2026

Why Cyber Security Thinking Needs To Evolve

AI is becoming part of core business infrastructure, moving beyond a helpful productivity add-on. Organisations are increasingly embedding AI into everyday tools and workflows, connecting to business data, automation, and decision-making. As adoption accelerates, businesses need to ensure their approach to cybersecurity keeps pace.


Across the industry, there is growing recognition that traditional cyber security frameworks were designed for predictable, rules-based software environments. Generative AI behaves differently. It processes context, combines information dynamically, and can act in ways earlier systems could not. This introduces new types of security considerations that classic frameworks were not built to address.


Lessons from EchoLeak


In 2025, the EchoLeak vulnerability in Microsoft 365 Copilot highlighted this emerging reality. The zero-click AI flaw (CVE‑2025‑32711) showed how Copilot’s behaviour could become part of the attack surface, with risk arising from how the system interpreted and processed crafted content rather than a user clicking a malicious link. This is an example of an indirect prompt-injection style attack. Microsoft issued fixes and guidance, but the incident demonstrated how AI-specific behaviours can create new avenues of risk that organisations need to understand.


This sits within a broader industry trend. Attackers are adapting, expanding their focus from targeting only users to also targeting AI models themselves. These systems are increasingly woven into business logic, privileged data access, identity frameworks, and critical processes. As AI becomes embedded into tools, workflows, and data environments, security research and threat activity are increasingly exploring how AI systems can be influenced, misused, or manipulated.


More tools isn’t the only answer


Organisations need to adapt too. Adapting to this requires a change in mindset, not just a bigger software budget. Culture and practice must come first, supported by the right technical safeguards.


Unlike traditional software, AI systems reason and interact in non-deterministic ways. To manage this, organisations need to address specific emerging risks:


Prompt Manipulation: Influencing an AI’s output to bypass safety filters or expose internal logic.

Contextual Data Exposure: AI accidentally revealing sensitive information to users who should not have access to a particular data source.

Autonomous Agency (AI agents with write access): Risks that arise when AI is granted write access to systems, such as sending emails or moving files, without human oversight.

Machine Identity: Managing the permissions of an AI agent as if it were an employee.


Building resilience for 2026


Visibility is the first step. You cannot secure what you haven’t mapped.



1. Audit the AI footprint: Identify which departments are using AI and where “Shadow AI” tools have crept into workflows.

2. Map data connections: Understand exactly which data sources, tools, and processes your AI systems can access.

3. Govern permissions: Apply the principle of least privilege. Does your AI really need access to the entire company SharePoint, or just a specific folder?


Ensuring your cybersecurity strategy evolves with AI adoption, rather than retrofitting protections later, builds a resilient foundation for the years ahead.


Strengthen your foundations


If you are reviewing your cybersecurity approach for 2026, you can take our Cyber Security Assessment and download our 10 Cyber Tips for 2026 pdf. Both provide practical steps to strengthen core cybersecurity. Links below


Taylored Solutions Cyber Security Assessment

Taylored Solutions SME Cyber Security 2026


Further Reading


Codebridge - How Has Generative AI Affected Security

Harvard Business Review - Research: Conventional Cybersecurity Won't Protect Your AI

The Hacker News - Zero Click AI Vulnerability Exposes M365

Security Week - EchoLeak AI Attack

Cybersecurity Dive - Critical Flaw in Microsoft Copilot

Infosecurity Magazine - Microsoft 365 Copilot: New Zero-Click AI Vulnerability Allows Corporate Data Theft

GenAI OWASP - LLM01:2025 Prompt Injection

People in an office meeting; one woman shakes hands with a man over a table.
By looka_production_176055138 December 30, 2025
And Why Your Business Wants One (more that you think)
By looka_production_176055138 October 19, 2025
Trota and Trotula
By looka_production_176055138 October 16, 2025
A Wake Up Call For Businesses