Do loose prompts sink ships?

The UK National Cyber Security Centre published an article titled “ChatGPT and large language models: what’s the risk?”.

The main risk highlighted is AI operators gaining access to our queries. But the article also touches on the potential benefit (and risk!) to cyber criminals of using an LLM as a “phone-a-friend” during a live network intrusion:

LLMs can also be queried to advise on technical problems. There is a risk that criminals might use LLMs to help with cyber attacks beyond their current capabilities, in particular once an attacker has accessed a network. For example, if an attacker is struggling to escalate privileges or find data, they might ask an LLM, and receive an answer that’s not unlike a search engine result, but with more context. Current LLMs provide convincing-sounding answers that may only be partially correct, particularly as the topic gets more niche. These answers might help criminals with attacks they couldn’t otherwise execute, or they might suggest actions that hasten the detection of the criminal. Either way, the attacker’s queries will likely be stored and retained by LLM operators.

If your organisation has problematic IT that it struggles to patch or secure, you can exploit this attacker behaviour as a defender…

Place yourself in the shoes of a “lucky” script kiddie who has gained a foothold on your enterprise network. Run nmap or another popular network scanner against your internal network and collect the service banners of those hard-to-protect services. Next, ask ChatGPT whether it can fingerprint the underlying technology and, if so, what network attacks it proposes. If the AI hallucinates, you may get some funny attack suggestions. But those suggestions become potential network detection signatures.
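For the curious, here is a minimal sketch of that exercise in Python. The hosts, ports, and prompt wording are made-up placeholders (not anything from the NCSC article); it simply grabs a few banners and prints the “phone-a-friend” question a script kiddie might paste into ChatGPT.

```python
#!/usr/bin/env python3
"""Banner-grab a few internal services and build a ChatGPT-style prompt.

A minimal sketch: the target addresses, ports, and prompt text below are
illustrative assumptions, not real infrastructure or real attacker tooling.
"""
import socket

# Hypothetical internal services you struggle to patch or protect.
TARGETS = [("10.0.5.21", 21), ("10.0.5.40", 8080), ("10.0.5.77", 6379)]


def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    """Connect, nudge the service, and return whatever banner it offers."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            try:
                s.sendall(b"\r\n")  # some services only respond after input
            except OSError:
                pass
            return s.recv(1024).decode(errors="replace").strip()
    except OSError as exc:
        return f"<no banner: {exc}>"


if __name__ == "__main__":
    lines = [f"{host}:{port} -> {grab_banner(host, port)}" for host, port in TARGETS]
    # The kind of question an attacker might paste into ChatGPT.
    prompt = (
        "I found these service banners on a network:\n"
        + "\n".join(lines)
        + "\nCan you fingerprint the underlying technology and suggest attacks?"
    )
    print(prompt)
```

Whatever the model answers, correct or confidently wrong, is raw material for your detection engineering.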

“The SOC has detected threat ChatFumbler attempting a poke when they should peek”...
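To act on the joke, a defender could fold the model’s suggestions, hallucinated or not, into simple hunt content. Here is a minimal sketch; the indicator strings and log path are hypothetical, not output from a real ChatGPT session.

```python
#!/usr/bin/env python3
"""Turn (possibly hallucinated) LLM attack suggestions into naive log hunts.

A minimal sketch: the indicator patterns and log file path are hypothetical
examples standing in for whatever the model actually suggested.
"""
import re
from pathlib import Path

# Strings lifted from the model's suggested attacks, hallucinated or not.
SUGGESTED_INDICATORS = [
    r"/debug/default/view\?panel=config",  # made-up admin panel path
    r"redis-cli\s+--no-auth-warning",      # made-up command pattern
]

LOG_FILE = Path("/var/log/proxy/access.log")  # hypothetical proxy log


def hunt(log_path: Path, patterns: list[str]) -> list[str]:
    """Return log lines matching any of the suggested indicators."""
    compiled = [re.compile(p, re.IGNORECASE) for p in patterns]
    return [
        line
        for line in log_path.read_text(errors="replace").splitlines()
        if any(rx.search(line) for rx in compiled)
    ]


if __name__ == "__main__":
    for hit in hunt(LOG_FILE, SUGGESTED_INDICATORS):
        print("possible ChatFumbler activity:", hit)
```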
