Flow Engineering for High-Assurance Code

In this 5-minute video, Itamar Friedman shares how open-source AI coding champ AlphaCodium brings back the adversarial concept found in GANs (Generative Adversarial Networks) to produce high-integrity code.

Primarily conceived by Tal Ridnik, this software demonstrates flow engineering, a "multi-stage, code-oriented iterative flow" style of LLM prompting that beats the majority of human coders in coding competitions.

As the authors report: "The proposed flow consistently and significantly improves results. On the validation set, for example, GPT-4 accuracy (pass@5) increased from 19% with a single well-designed direct prompt to 44% with the AlphaCodium flow. Many of the principles and best practices acquired in this work, we believe, are broadly applicable to general code generation tasks."

With the Transformer architecture, generative AI improved so much that the adversarial component of GANs was no longer needed. AlphaCodium returns the adversarial concept to "check and challenge" generative outputs, subjecting them to code tests, reflection, and matching against requirements.

If you've read between the lines, you'll recognise a familiar pattern: to improve the quality of generative outputs, call back into the LLM (a pattern popularised in many ways by LangChain).

But how you do this is key to correctness and practical feasibility: AlphaCodium averages 15-20 LLM calls per code challenge, four orders of magnitude fewer than DeepMind's AlphaCode (and it generalises solutions better than the recently announced AlphaCode 2).

This is obviously important for software security. But two of the six best practices the team shared are also relevant to decision-making systems for AI security, like access control.

Given a generated output, ask the model to re-generate the same output but correct it if needed

This flow engineering approach means additional LLM roundtrips, but consistently boosts accuracy.
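A minimal sketch of that round trip, using the OpenAI Python client. The prompt wording and helper names here are illustrative, not AlphaCodium's actual prompts:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate(task: str) -> str:
    """First pass: produce a candidate solution."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Solve this coding task:\n{task}"}],
    )
    return resp.choices[0].message.content

def regenerate_with_correction(task: str, candidate: str) -> str:
    """Second pass: feed the candidate back and ask the model to
    re-generate the same output, correcting it only if needed."""
    prompt = (
        f"Task:\n{task}\n\n"
        f"Candidate solution:\n{candidate}\n\n"
        "Re-generate this solution. If it is correct, return it unchanged; "
        "if not, return a corrected version."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

task = "Return the longest palindromic substring of a string."
final = regenerate_with_correction(task, generate(task))
```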

If you've used the OpenAI playground or coded against completion endpoints, you may recall the "best of" parameter:

Generates multiple completions server-side, and displays only the best. Streaming only works when set to 1. Since it acts as a multiplier on the number of completions, this parameter can eat into your token quota very quickly - use caution!

With best of, the LLM generates multiple outputs server-side ($$$) and chooses a winner, which it returns to the caller.
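For reference, a rough sketch of best of against the legacy completions endpoint (model name and prompt are placeholders). Note that you pay for every server-side completion, not just the one returned:

```python
from openai import OpenAI

client = OpenAI()

# Server-side sampling: five completions are generated (and billed),
# but only the single best-ranked one comes back to the caller.
resp = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # a legacy completions-style model
    prompt="Write a Python function that reverses a linked list.",
    max_tokens=256,
    n=1,        # how many completions to return
    best_of=5,  # how many to generate and rank server-side
)
print(resp.choices[0].text)
```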

With flow engineering, a single output is generated and fed back into the LLM with a prompt designed to steer the set of possible completions towards improved code.

The other best practice to highlight:

Avoid irreversible decisions and leave room for exploration with different solutions

Good AI security system design recognises that some decisions carry more weight, and are less reversible, than others (similar to behavioural system design).

Think of AI as a risk tool rather than a security guard.

Its job is to provide a soft decision; your job is to establish risk boundaries, informed by experience, beyond which human decision-making is appropriate or even necessary.
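As a sketch, a soft decision might be a risk score from a model that your code maps onto explicit, auditable boundaries, with the top band reserved for humans. The thresholds and bands below are hypothetical:

```python
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    STEP_UP = "step_up"          # extra verification, still reversible
    ESCALATE_TO_HUMAN = "human"  # weighty or irreversible: a person decides

# Hypothetical boundaries, tuned from operational experience.
ALLOW_BELOW = 0.3
STEP_UP_BELOW = 0.7

def decide(risk_score: float) -> Decision:
    """Map the model's soft decision (a 0-1 risk score) onto
    explicit, auditable risk boundaries."""
    if risk_score < ALLOW_BELOW:
        return Decision.ALLOW
    if risk_score < STEP_UP_BELOW:
        return Decision.STEP_UP
    return Decision.ESCALATE_TO_HUMAN

# e.g. an LLM classifier scores an unusual access request at 0.82:
assert decide(0.82) is Decision.ESCALATE_TO_HUMAN
```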

Perhaps in the future, step-up risk decisions will require less human input. Instead, a more sophisticated and expensive LLM constellation might be used to ensure a quorum, possibly augmented by an adversarial engine that proactively challenges the user in an accessible, friendly way (without being too open to subversion!).
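A toy sketch of what a quorum across such a constellation might look like; the voting scheme and the human-escalation fallback are purely illustrative:

```python
import collections

def quorum_decision(votes: list[str], required: int) -> str:
    """Majority vote across independent models; fall back to
    human review when no quorum is reached."""
    winner, count = collections.Counter(votes).most_common(1)[0]
    return winner if count >= required else "escalate_to_human"

# e.g. three independent models each return allow/deny for a request:
print(quorum_decision(["allow", "allow", "deny"], required=2))    # allow
print(quorum_decision(["allow", "deny", "step_up"], required=2))  # escalate_to_human
```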
