THREAT PROMPT

Explores AI Security, Risk and Cyber

"Just wanted to say I absolutely love Threat Prompt — thanks so much!"

- Maggie

"I'm a big fan of Craig's newsletter, it's one of the most interesting and helpful newsletters in the space."

"Great advice Craig - as always!"

- Ian

Get Daily AI Cybersecurity Tips

  • Stuck with a half-baked AI response?

    Have you ever been in the middle of a chat with an AI assistant when suddenly it stops mid-sentence? It's frustrating, right?

    Whether it's a network hiccup, their maintenance page suddenly appears (Hi Anthropic!), or just a glitch in the matrix, these interruptions happen.

    But here's a simple trick: just type "continue" and hit return.

    That's it.

    The AI will pick up right where it left off, completing its thought. No need to rephrase your question or start over.

    This works for most AI chat interfaces, saving you time and hassle.
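Under the hood, chat UIs do the same thing via an API: the truncated reply stays in the conversation history and "continue" goes in as the next user turn. A minimal sketch, using the common role/content message convention (the function names here are illustrative, not any particular vendor's API):

```python
# Sketch: resuming a truncated reply by appending a "continue" turn.
# Message format follows the role/content convention used by most
# chat-completion APIs; nothing here is vendor-specific.

def resume(history):
    """Append a 'continue' turn so the model picks up where it stopped."""
    history.append({"role": "user", "content": "continue"})
    return history

history = [
    {"role": "user", "content": "Summarise this phishing report..."},
    {"role": "assistant", "content": "The report describes a campaign that"},  # cut off
]
resume(history)
print(history[-1]["content"])  # the turn we send next
```

Because the partial assistant message is still in the history, the model treats its own unfinished sentence as context and carries on from it.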

  • Unlock the Secret to Sharper AI Responses

    Ever feel like your AI assistant isn't quite grasping what you need?

    You're not alone.

    Whether you're a curious novice or a seasoned security pro, this one simple trick (!) will revolutionize your AI interactions (or your money back).

    Here's the secret: Prompt the LLM to prompt you…

    At the end of your prompt, ask the AI to pose clarifying questions before it generates a response. It's like giving the AI permission for an impromptu AMA (Ask Me Anything) session.

    Why does this work?

    Well, AIs aren't mind readers (no, your company hasn't sprung for that Neuralink subscription… yet). By encouraging the AI to ask questions, you're helping it understand the full context and nuances of your request.

    When I use this technique with a leading-edge LLM, it typically fires back 7-10 clarifying questions. This dramatically reduces the number of back-and-forth exchanges, saving you time and often lowering your token generation costs. Plus, you'll avoid hitting those pesky quotas that cut off your AI access for hours.
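    One way to wire this into your workflow is to append the instruction as a fixed suffix. A minimal sketch; the exact wording is just an example, not a magic incantation:

```python
# Sketch: appending an "ask me first" instruction to any task prompt.
# The suffix wording is illustrative -- tune it to taste.

ASK_FIRST = (
    "Before you answer, ask me any clarifying questions you need "
    "to fully understand the task. Wait for my replies, then proceed."
)

def with_clarifying_questions(task: str) -> str:
    """Return the task prompt with the ask-me-first suffix appended."""
    return f"{task.strip()}\n\n{ASK_FIRST}"

prompt = with_clarifying_questions(
    "Draft an incident report for a suspected credential-stuffing attack."
)
print(prompt)
```

    Keeping the suffix in one place means every prompt you send gets the same treatment, and you can refine the wording once rather than retyping it per chat.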

    This approach is a game-changer for crafting more precise and relevant first drafts, whether you're writing a report or generating code. Just remember to swap out any sensitive identifiers. By now, any machine learning of my LLM usage has concluded that ACME Inc is a cyber security basket case and Joe Bloggs the CISO is ready for a career change ;-)

    Ready to sharpen your AI saw? Give it a try and watch your LLM chat productivity soar.

    What's the most frustrating AI interaction you've had recently? How do you think this technique might have helped?

  • Are you speaking AI's language?

    Remember when you first started in cybersecurity? The overwhelming amount of
    data, the constant alerts, the race to patch vulnerabilities?

    Now imagine having a tireless assistant to help with all that. That's what AI
    can be - if you know how to work with it effectively.

    Regardless of your role in cybersecurity, AI can amplify your capabilities.
    It's not about replacing your expertise, but extending it.

    The secret? Task your AI sidekick bit by bit. Break down complex problems into
    smaller steps. For example:

    1. SOC analysts: First, ask AI to summarize an alert. Then, request potential next steps.
    2. Threat hunters: Start by having AI identify data types. Then, ask it to spot anomalies.
    3. Pen testers: Begin with AI suggesting potential vulnerabilities. Follow up by requesting specific exploit ideas.
    4. Policy writers: Ask AI to outline key points first. Then, expand each point iteratively.
    5. Incident responders: Use AI to draft a timeline, then flesh out details for each event.
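    As a sketch, the SOC analyst example above becomes a two-step chain, where each step feeds the previous answer back in as context (the `ask_llm` function is a hypothetical stand-in for whatever model call you actually use):

```python
# Sketch of step-by-step tasking: summarise an alert first, then ask for
# next steps using that summary as context. ask_llm() is a placeholder,
# not a real API -- swap in your own model call.

def ask_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call."""
    return f"<model answer to: {prompt[:40]}...>"

def triage(alert_text: str) -> tuple[str, str]:
    # Step 1: a narrow, well-scoped summarisation task.
    summary = ask_llm(f"Summarise this SOC alert in three sentences:\n{alert_text}")
    # Step 2: build on the model's own summary rather than the raw alert.
    next_steps = ask_llm(
        f"Given this alert summary:\n{summary}\n"
        "Suggest three investigation steps, most urgent first."
    )
    return summary, next_steps

summary, next_steps = triage("Multiple failed logins from 203.0.113.7 ...")
```

    The same shape works for the other roles: each step is a small, checkable prompt, and you review the output before feeding it into the next one.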

    Remember: be clear in your instructions, provide context, and always verify
    the AI's output.

    What's one cyber task you'd like to break down for AI assistance? I'm keen to
    hear your ideas.

  • The Missing Link in AI-Powered Data Privacy

    Ever wonder why your PII detection tools feel a bit... outdated?

    In our AI-driven world, it's surprising how many data privacy solutions still rely on rigid rule-based systems. While these work, they often miss context-dependent PII or struggle with new data formats.

    Here's the kicker: there's a significant gap in the market for lightweight, AI-powered PII detection tools that work directly on your device. Imagine having the power of a language model to understand context and detect sensitive information, but small enough to run on your laptop without sending data to the cloud.

    This isn't just a pipe dream. With recent advancements in model compression techniques like knowledge distillation and quantization, it's becoming increasingly feasible to run powerful NLP models locally.

    Why does this matter to you?

    1. Better accuracy: Context-aware PII detection
    2. Enhanced privacy: No need to send data off-device
    3. Real-time protection: Instant scanning before data leaves your system

    For the tech-savvy among us, this presents an exciting opportunity. Could you be the one to develop this missing tool? Combining compact open-source models like TinyBERT or fastText with PII-specific training data could yield impressive results.
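    To make the gap concrete, here's the rule-based baseline such a tool would improve on. The regexes (illustrative patterns, not production-grade) catch fixed-format PII like emails and US SSNs, but have no way to spot context-dependent PII such as a bare name:

```python
import re

# Rule-based PII baseline: regexes catch fixed formats but miss
# context-dependent PII -- exactly the gap a small, local language
# model could fill. Patterns are illustrative, not exhaustive.

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan(text: str) -> list[tuple[str, str]]:
    """Return (pii_type, match) pairs found by the rule-based patterns."""
    hits = []
    for pii_type, pattern in PATTERNS.items():
        hits += [(pii_type, m) for m in pattern.findall(text)]
    return hits

sample = "Contact jane.doe@example.com; SSN 123-45-6789. Ask for Jane."
print(scan(sample))  # the bare name "Jane" goes undetected
```

    A context-aware local model would flag "Jane" as a person name; the regex approach never can, no matter how many patterns you add.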

    Remember, the next big innovation in cybersecurity often comes from identifying and filling these gaps. What other AI-powered security tools do you think are missing from our current toolkit?

  • Secure AI Unit Testing: Have Your Cake and Eat It Too

    Remember when we discussed generating unit tests without exposing your full source code to an AI?

    Well, there's a robust tool that takes this concept to the next level.

    Meet Aider, an AI-powered pair programmer that implements this idea brilliantly.

    While developers typically use Aider's '/add' command to include source files in the LLM chat, it offers a more secure approach for sensitive codebases.

    Using TreeSitter, a parser generator tool, Aider creates a structural map of your local git repository without exposing the full source text. This allows Aider to understand your code's structure and generate robust test cases without adding actual source files to the chat.

    For security-conscious developers, this means leveraging AI for unit testing while minimizing exposure of sensitive code.

    You control what code, if any, is shared with the AI. This flexibility offers a practical way to simultaneously enhance your code quality and security posture, especially for projects with heightened privacy requirements.
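    To give a feel for what a structural map is: Aider builds its map with TreeSitter across many languages, but for Python alone the standard library's `ast` module can produce a similar signatures-only view. This is an analogy, not Aider's actual implementation:

```python
import ast

# Analogy only: a signatures-only "map" of Python source -- structure
# without the function bodies. Aider does this with TreeSitter across
# many languages; this stdlib sketch covers Python alone.

def structural_map(source: str) -> list[str]:
    """List class and function signatures, omitting implementation bodies."""
    entries = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            entries.append(f"def {node.name}({args})")
        elif isinstance(node, ast.ClassDef):
            entries.append(f"class {node.name}")
    return entries

source = '''
class TokenVault:
    def rotate(self, key_id):
        secret = "hunter2"  # never leaves the machine
        return encrypt(secret, key_id)
'''
print(structural_map(source))  # ['class TokenVault', 'def rotate(self, key_id)']
```

    Notice the string literal `"hunter2"` never appears in the map: the AI sees what your code is shaped like, not what it contains.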

    Want to see what that looks like? Here's Aider creating a black box test case:

    Aider is about a year old and is updated nearly daily (!) by the developer Paul Gauthier. It's an open-source alternative to Cursor.

    I've recently adopted Aider to develop security tools rapidly and will share tips along the way.

  • OWASP Livestream & Newsletter Reboot

    Join me tonight for LLM AppSec in 3 takes: local-first, SDLC, psychology, streamed live on YouTube from 1800 CET / 1200 Eastern. If you're local to Budapest, join us in person at this OWASP Hungary event.

    Here's the blurb:

    1️⃣ A practitioner's insights into hardening security of local-first business solutions [Irina Nikolaeva]

    2️⃣ SDLC related practical examples from a cybersec management veteran [Craig Balding]

    3️⃣ Psychology vs LLMs/GenAI vs safety/security as seen by an academic psychology researcher [Kekecs Zoltán]

    My talk is second up, so it could be closer to 1900 CET / 1300 Eastern.

    In other news: it's time for a newsletter reboot.

    Sharing five stories each week worked quite well until it didn't.

    I use AI daily to help me with my cybersecurity work. Which got me thinking…

    Why don't I find a way to share the tactics and thinking I apply?

    It's time to "share my sawdust".

    From now on, I'll share a quick AI cybersecurity tip each day--one or two paragraphs--something readable between coffee sips.

    Naturally, I'll do it in a way that conveys the essence of the tactic or idea while respecting confidentiality.

    There won't be a weekly summary, so don't feel bad if this isn't for you--no hard feelings if you unsubscribe. Thank you for your attention.

    But if you want a steady stream of succinct, practical, and accessible tactics, tools, and ideas you can directly apply in your work or projects, you've nothing extra to do. No special hardware setup is required. See you next week.

    P.S. Daily? Are you crazy??? My goal is not to miss two consecutive days.
