THREAT PROMPT

Exploring AI Security, Risk and Cyber

"Just wanted to say I absolutely love Threat Prompt — thanks so much!"

- Maggie

"I'm a big fan of Craig's newsletter, it's one of the most interesting and helpful newsletters in the space."

"Great advice Craig - as always!"

- Ian

Get Daily AI Cybersecurity Tips

  • Intelligent Data Validation - the Easy Way

    Struggling with data validation in your code?

    Let's talk about a game-changer in data validation: LLM-powered validators.

    The Python Instructor library offers software developers an accessible, adaptable way to define data validations for non-deterministic LLM responses.

    Simply put, you challenge the LLM to correct errors until it gets it right, you exhaust max retries, you get bored, or you run out of API credit.

    Take email verification...

    The classic method: a hair-raising, difficult-to-debug regular expression to validate email address structure, format, and content.

    The LLM-powered way? Just prompt it: "Validate that the email is in a correct format and looks legitimate. Consider domain reputation and common typos."

    LLM validators make it easy to auto-prompt LLMs to enforce data rules.

    This method is highly effective, especially when combined with non-LLM validators: the LLM handles the clever checks, while a deterministic check validates the LLM's structured output.
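
    Here's a minimal sketch of that combination using Instructor with Pydantic. The field name, model choice, and retry count are illustrative assumptions, not prescriptions:

        # Minimal sketch: an LLM-powered validator plus a deterministic check,
        # via Instructor (pip install instructor "pydantic[email]").
        # Field name, model choice, and retry count are assumptions.
        from typing import Annotated

        import instructor
        from instructor import llm_validator
        from openai import OpenAI
        from pydantic import BaseModel, BeforeValidator, EmailStr

        client = instructor.from_openai(OpenAI())

        class Contact(BaseModel):
            email: Annotated[
                EmailStr,  # deterministic structural check
                BeforeValidator(
                    llm_validator(
                        "Validate that the email is in a correct format and looks "
                        "legitimate. Consider domain reputation and common typos.",
                        client=client,
                    )
                ),
            ]

        # Pydantic enforces the schema on the LLM's structured output; when a
        # validator rejects a value, Instructor re-prompts the model with the
        # error until it passes or max_retries is exhausted.
        contact = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model
            response_model=Contact,
            max_retries=3,
            messages=[{"role": "user", "content": "The contact is alice@gamil.com"}],
        )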

    Have you tried using LLM validators in your AI projects? What challenges or benefits did you face?

  • Is Your AI Pair Programming Session Going Off the Rails?

    Ever find yourself in an AI pair programming rollercoaster, oscillating between "Wow, what a timesaver!" and "What the hell is happening here?"

    The key is managing your flow and attention, and steering the AI. Here are some tips to keep your AI coding sessions on track:

    1. Mind your mental fuel: How much brain power do you have left? Letting your mind wander can lead to a doom loop of pointless auto-commits. Stay engaged to catch missteps early.

    2. Choose tools with easy rollbacks: Select AI assistants that let you quickly undo changes. Aider's /undo command has saved me countless times!

    3. Spot and break unproductive patterns: Recognize when you're going in circles. Sometimes the AI is working from outdated docs, other times it's making flawed assumptions. Learn to course-correct or just fix the underlying issues yourself.

    4. Split ideation from coding: Use chat interfaces for bouncing ideas and getting code sketches. But for serious implementation, switch to the right tool for the job.

    5. Verify AI-provided information: Double-check critical details, especially if something seems off.

    Remember, you're the pilot, and AI is your co-pilot. Stay alert, steer wisely, and enjoy the productivity boost without falling into the LLM-ache trap.

    What's your strategy for keeping AI assistance on track? How do you get back in the flow when things go sideways?

  • LLM Deployment Matrix v1

    Planning an AI-powered project? Your deployment choices just got more interesting.

    It's easy to assume LLMs are either fully cloud-hosted or on-premises.

    But there's a spectrum of options that could give you the best of both worlds.

    I've put together a basic LLM deployment matrix that breaks down key factors across five deployment models:

    1. Shared, Remotely Hosted

    2. Dedicated, Remotely Hosted

    3. Hybrid (Local Inference, Cloud Model)

    4. Locally Hosted

    5. On-Premises Managed Services

    The matrix covers dimensions like privacy, cost, performance, control, and scalability. It's a starting point to help you navigate the trade-offs and find the sweet spot for your specific needs.

    For instance, did you know that hybrid models can offer high privacy and performance with variable costs? Or that dedicated remote hosting can provide a balance of control and scalability?

    This isn't just about security - it's about optimizing your AI operations for your unique context.

    What factors are most critical for your AI projects? How might this matrix inform your deployment decisions?

  • Reverse the Hearse

    Ever feel like your AI chat is spiraling into nonsense?

    I do… regularly!

    I was deep in a coding session yesterday when the AI started spewing gibberish.

    Frustrating, right? But here's the thing - you don't have to start over from scratch.

    Most AI interfaces offer ways to course-correct mid-conversation:

    • Look for a pencil icon under your messages. Clicking it lets you edit and redirect.
    • Using a pair programming tool like Aider? Try the "/undo" command to rewind.
    • Some advanced UIs allow you to "fork" conversations, creating a safe branch to explore.

    Think of it like an "undo" button for your AI interactions. By spotting and using these features, you can keep your AI conversations productive and on track.

    Next time you hit an AI dead-end, try backing up instead of starting over. You might be surprised how quickly you can get back on course.

    What's your go-to tactic when an AI conversation goes sideways?

  • PubCrawl with Large Language Momentum

    How do you quickly inspect a JavaScript-heavy website to resolve a security-related question?

    I tend to break open Burp Suite or dive into my browser's DevTools.

    Sometimes, I'll use one of those shady-looking websites that catalog or query the specific thing I want to discover.

    For something more automated, I'll write a 10-line script to scrape a specific website using headless browser automation.

    These tactics all do the job but bring their own friction.

    What if I told you that with Large Language Momentum, you could quickly create a more capable, flexible tool?

    That's exactly what I experienced recently when developing PubCrawl.

    Instead of cobbling together another one-off script, I used AI assistance (Claude and Aider) to build a one-shot web scraping tool in about an hour.

    The result?

    A focused, simple scraper that adheres to the Unix philosophy of doing one thing well and playing nicely with other tools.

    PubCrawl shines where curl falls short - on JavaScript-heavy websites. No more wrestling with DevTools or setting up inspection proxies. It uses Playwright to fully render pages, outputting everything in clean JSON that's ready for piping into other tools like jq.

    Key features:

    1. Handles JavaScript-rendered content effortlessly

    2. Fine-grained control over which response URLs to capture and which content types to scrape

    3. JSON output for easy integration with other tools

    4. Designed for simplicity and reusability
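
    PubCrawl's own code isn't reproduced here, but the underlying pattern is easy to sketch. The script below is my minimal approximation with Playwright; the script name and output fields are assumptions, not PubCrawl's actual interface:

        # pubcrawl_sketch.py - a minimal approximation of the pattern, not
        # PubCrawl itself. Assumes: pip install playwright, then
        # playwright install chromium. Output fields are assumptions.
        import json
        import sys

        from playwright.sync_api import sync_playwright

        def scrape(url: str) -> dict:
            with sync_playwright() as p:
                browser = p.chromium.launch(headless=True)
                page = browser.new_page()
                responses = []
                # Record each response URL and content type as the page loads.
                page.on("response", lambda r: responses.append(
                    {"url": r.url, "content_type": r.headers.get("content-type", "")}
                ))
                # Wait for network idle so JavaScript-rendered content settles.
                page.goto(url, wait_until="networkidle")
                html = page.content()  # the fully rendered DOM
                browser.close()
            return {"url": url, "html": html, "responses": responses}

        if __name__ == "__main__":
            print(json.dumps(scrape(sys.argv[1])))

    Run it as python pubcrawl_sketch.py https://example.com | jq -r '.responses[].url' to list every URL the page pulled in while rendering.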

    Instead of accumulating a drawer full of single-use scripts, you can quickly develop more robust tools that adapt to various scenarios. It's particularly useful for cybersecurity tasks like reconnaissance, compliance checks, or threat intelligence gathering.

    The real game-changer is how leading-edge generative LLMs lower the bar for creating reusable tools. Even as a casual Python programmer, I could build something far more capable than my usual quick scripts.

    How might Large Language Momentum change your appetite for toolkit development?

    Can you think of any one-off scripts you've written that could evolve into more versatile tools with AI assistance?

  • 7 Critical Factors in the AI-AppSec Risk Equation

    At a recent OWASP event, I shared key factors I consider before integrating Large Language Models (LLMs) into the software development lifecycle.

    Here's a quick rundown:

    1. Risk Management: Avoid LLMs when you can't easily recover from the downside risk.

    2. LLM Model Choice: Select models based on your target domain and whether you need narrow or wide-ranging capabilities.

    3. Input/Output Nature: Recognize that LLMs excel at human language tasks, especially converting intent to structured outputs. For code generation, know your chosen model's sweet spot.

    4. Integration Approach: Weigh using live LLM inference in your application against using LLMs to build tools that your code can use in a workflow (possibly after intent decoding by an LLM) - see the sketch after this list.

    5. Implementation: Plan for guardrails and manage code complexity.

    6. Expertise: Evaluate both the LLM operator's and the model's expertise.

    7. Deployment: Consider model hosting options and inference costs.
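
    To make factor 4 concrete, here's a hypothetical sketch of the second approach: the LLM only decodes intent into a structured request, and deterministic code decides what actually runs. The schema and tool names are my own inventions:

        # Hypothetical sketch: the LLM decodes intent; plain code executes.
        # Tool names and schema are invented for illustration.
        ALLOWED_TOOLS = {
            "dns_lookup": lambda target: f"(would resolve {target})",
            "whois": lambda target: f"(would run whois on {target})",
        }

        def decode_intent(user_text: str) -> dict:
            """Map free text to {"tool": ..., "target": ...} via an LLM.
            The API call is elided; a canned response keeps the sketch
            self-contained."""
            return {"tool": "dns_lookup", "target": "example.com"}

        def run(user_text: str) -> str:
            request = decode_intent(user_text)
            tool = ALLOWED_TOOLS.get(request.get("tool"))
            if tool is None:
                # Guardrail: the LLM selects from an allow-list; anything
                # else is rejected, never executed.
                raise ValueError(f"unknown tool: {request.get('tool')!r}")
            return tool(request["target"])

        print(run("what does example.com resolve to?"))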

    Each of these factors can significantly impact the security and effectiveness of your AI integration. I'll be diving deeper into these in future tips, as some can be quite nuanced.

    LLMs can be powerful tools, but like any tool, they need to be wielded with care and understanding.

    Which of these factors do you find most challenging when integrating LLMs into your security workflows?

