New on Yahoo

Editions
© 2025 All rights reserved.
Advertisement
Advertisement
Advertisement
TechRadar

Prompt injection attacks might 'never be properly mitigated' UK NCSC warns

Sead Fadilpašić

When you buy through links on our articles, Future and its syndication partners may earn a commission.

 Ai tech, businessman show virtual graphic Global Internet connect Chatgpt Chat with AI, Artificial Intelligence. .
Credit: Shutterstock/SomYuZu
  • UK’s NCSC warns prompt injection attacks may never be fully mitigated due to LLM design

  • Unlike SQL injection, LLMs lack separation between instructions and data, making them inherently vulnerable

  • Developers urged to treat LLMs as “confusable deputies” and design systems that limit compromised outputs

Prompt injection attacks, meaning attempts to manipulate a large language model (LLM) by embedding hidden or malicious instructions inside user-provided content, might never be properly mitigated.

This is according to the UK’s National Cyber Security Centre’s (NCSC) Technical Director for Platforms Research, David C, who published the assessment in a blog assessing the technique. In the article, he argues that many compare prompt injection to SQL injection, which is inaccurate, since the former is fundamentally different and arguably more dangerous.

Advertisement
Advertisement

The key difference between the two is the fact that LLMs don’t enforce any real separation between instructions and data.

Catch the price drop- Get 30% OFF for Enterprise and Business plans

The Black Friday campaign offers 30% off for Enterprise and Business plans for a 1- or 2-year subscription. It’s valid until December 10th, 2025. Customers must enter the promo code BLACKB2B-30 at checkout to redeem the offer.View Deal

Inherently confusable deputies

“Whilst initially reported as command execution, the underlying issue has turned out to be more fundamental than classic client/server vulnerabilities,” he writes. “Current large language models (LLMs) simply do not enforce a security boundary between instructions and data inside a prompt."

Advertisement
Advertisement

Prompt injection attacks are regularly reported in systems that use generative AI (genAI), and are the OWASP’s #1 attack to consider when ‘developing and securing generative AI and large language model applications’.

In classical vulnerabilities, data and instructions are handled differently, but LLMs operate purely on next-token prediction, meaning they cannot inherently distinguish user-supplied data from operational instructions. “There's a good chance prompt injection will never be properly mitigated in the same way,” he added.

The NCSC official also argues that the industry is repeating the same mistakes it made in the early 2000s, when SQL injection was poorly understood, and thus widely exploited.

But, SQL injection was ultimately better understood, and new safeguards became standard. For LLMs, developers should treat them as “inherently confusable deputies”, and thus design systems that limit the consequences of compromised outputs.

Advertisement
Advertisement

If an application cannot tolerate residual risk, he warns, it may simply not be an appropriate use case for an LLM.

Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews, and opinion in your feeds. Make sure to click the Follow button!

And of course you can also follow TechRadar on TikTok for news, reviews, unboxings in video form, and get regular updates from us on WhatsApp too.

Advertisement
Advertisement