Error messages are the new prompts

Can error messages from software teach an AI a new skill?

Once you start building with AI, you quickly realise that sending a single prompt to an AI API and processing the response is just the start.

Just like with regular APIs, you need to chain operations: get some input from somewhere, clean it up, augment it with other data, prompt the AI, sanity-check the response, update a database record, and so on. This has led to the development of language-chain frameworks and services.
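
To make that chaining concrete, here is a minimal sketch in Python. `call_llm` is a hypothetical stand-in for whichever AI API you use, and the `customers`/`summaries` tables are invented for illustration:

```python
import json
import sqlite3

def call_llm(prompt: str) -> str:
    """Stand-in for whichever AI API you are calling (hypothetical)."""
    raise NotImplementedError

def process_feedback(raw_input: str, db: sqlite3.Connection) -> None:
    # 1. Get some input and clean it up.
    text = raw_input.strip()

    # 2. Augment it with other data.
    row = db.execute("SELECT name FROM customers WHERE id = ?", (1,)).fetchone()

    # 3. Prompt the AI.
    response = call_llm(
        f"Summarise this feedback from {row[0]} as JSON with keys "
        f'"summary" and "sentiment":\n{text}'
    )

    # 4. Sanity-check the response.
    result = json.loads(response)  # raises ValueError on junk output
    if result["sentiment"] not in ("positive", "negative", "neutral"):
        raise ValueError(f"unexpected sentiment: {result['sentiment']}")

    # 5. Update a database record.
    db.execute(
        "INSERT INTO summaries (summary, sentiment) VALUES (?, ?)",
        (result["summary"], result["sentiment"]),
    )
    db.commit()
```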

AI Agents, built using these language chains, go one step further and incorporate a feedback loop, which lets the AI adapt dynamically and learn a task. The results are impressive!

In this example, error messages are fed back into the model as part of the next prompt:

“LLMs are pretty good at writing SQL, but still struggle with some things (like joins) 🤯. But what if you use an agent to interact with SQL DBs? In the example below, it tries a join on a column that doesn’t exist, but then can see the error and fixes it in the next query”
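
A minimal sketch of that loop, assuming a local SQLite database and the same hypothetical `call_llm` stand-in; the key move is that the database error becomes part of the next prompt:

```python
import sqlite3

def call_llm(prompt: str) -> str:
    """Stand-in for whichever AI API you are calling (hypothetical)."""
    raise NotImplementedError

def answer_with_sql(question: str, db: sqlite3.Connection, max_attempts: int = 3):
    prompt = f"Write a SQLite query to answer: {question}"
    for _ in range(max_attempts):
        query = call_llm(prompt)
        try:
            return db.execute(query).fetchall()
        except sqlite3.Error as exc:
            # The error message becomes part of the next prompt,
            # so the model can correct its own mistake.
            prompt = (
                f"Your query:\n{query}\n"
                f"failed with error: {exc}\n"
                f"Write a corrected SQLite query to answer: {question}"
            )
    raise RuntimeError("No working query after several attempts")
```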

The implications of this are significant.

Error messages are the new prompts: the AI takes its cues from error messages and adapts its approach to solving the problem at hand.

“Error messages are a great example of how our tools shape the way we think.” - Douglas Crockford

Just replace “we” in the quote above with “AIs”.

Error messages as prompts are a neat technique and should work well wherever error messages are genuinely helpful. Unfortunately, that rules out a lot of software and puts a natural ceiling on the use cases.

As these limitations become more apparent, expect tooling to emerge that connects an AI to a debugger, giving it complete insight into, and control over, the target software. An AI that operates and monitors software in real time will learn far faster than one that only reads error messages.
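
Nobody knows yet what that tooling will look like. As one hedged sketch, Python’s built-in pdb can be driven non-interactively to capture a post-mortem debugging session that is then fed into the next prompt:

```python
import subprocess
import sys

def debug_snapshot(script: str) -> str:
    """Run a crashing script under pdb and capture the post-mortem session.

    pdb reads its commands from stdin, so it can be driven non-interactively:
    'c' runs the script until it crashes, 'where' dumps the stack in the
    post-mortem session, and the trailing 'q's exit (pdb restarts the
    script once after post-mortem, hence the second quit).
    """
    result = subprocess.run(
        [sys.executable, "-m", "pdb", script],
        input="c\nwhere\nq\nq\n",
        capture_output=True,
        text=True,
        timeout=30,
    )
    return result.stdout

# The debugger transcript, not just the final error message, then becomes
# the next prompt, e.g. (call_llm is the same hypothetical stand-in):
#   call_llm(f"This program crashed. Debugger session:\n"
#            f"{debug_snapshot('target.py')}\nDiagnose and suggest a fix.")
```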

The future for security test coverage and automation looks bright. Non-trivial adversarial security testing involves identifying and exploiting many obscure edge cases. As any decent penetration tester will tell you, this is time-consuming and frustrating.

To achieve a degree of human-driven automation, we use domain-specific tooling (e.g. Burp Suite for web app testing). The next step will be programming adaptive adversarial AI Agents to accelerate the boring bits of security testing.
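
As a toy illustration of the shape such an agent might take (`call_llm` is again a hypothetical stand-in, and real tooling would integrate with something like Burp rather than make raw HTTP calls):

```python
import urllib.error
import urllib.parse
import urllib.request

def call_llm(prompt: str) -> str:
    """Stand-in for whichever AI API you are calling (hypothetical)."""
    raise NotImplementedError

def probe(target_url: str, rounds: int = 10) -> list[tuple[str, str]]:
    """Let the model propose payloads, observe the responses, and adapt."""
    findings = []
    prompt = f"Propose one test input for the 'q' parameter of {target_url}."
    for _ in range(rounds):
        payload = call_llm(prompt).strip()
        url = f"{target_url}?q={urllib.parse.quote(payload)}"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                observation = f"HTTP {resp.status}: {resp.read(200)!r}"
        except urllib.error.HTTPError as exc:
            # Server errors are often the interesting part for a tester.
            observation = f"HTTP {exc.code}: {exc.read(200)!r}"
        findings.append((payload, observation))
        # Feed the observation back so the next payload adapts to it.
        prompt = (
            f"Payload {payload!r} produced: {observation}\n"
            f"Propose the next test input for the 'q' parameter of {target_url}."
        )
    return findings
```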

The rise of AI agents only increases the need for guardrails and human oversight/intervention, much as having reliable brakes on your car lets you drive faster.