All Stories
March 2023
-
Error messages are the new prompts
Can error messages from software teach an AI a new skill?
-
Testing ChatGPT proves it’s not just what you say, but who you say it as
Testing the strength of placing context in the "System" role versus "Messages" for ChatGPT
-
Unit tests for prompt engineering
Tracking whether your prompt or fine-tuned model is improving can be hard, but another LLM can judge your model's output.
February 2023
-
Hacking with ChatGPT: Ideal Tasks and Use-Cases
Four tactics and example prompts for hacking
-
Adversarial Policies Beat Superhuman Go AIs
Discover an unexpected failure mode of a superhuman AI system
-
Will OpenAI face enforcement action under the GDPR in 2023?
What is the likelihood of OpenAI facing data privacy enforcement under GDPR, according to privacy professionals?
-
Deep Fake Fools Lloyds Bank Voice Biometrics
Use a free voice creation service to impersonate a bank customer
-
Development spend on Transformative AI dwarfs spend on Risk Reduction
AI safety research is woefully underfunded. Are we ready to manage the first existential risk since nuclear weapons?
-
NIST Artificial Intelligence Risk Management Framework
NIST warns: Integrated risk management essential for interconnectivity of AI, privacy, and cybersecurity risks.
-
How truthful are Large Language Models?
What did a study by Oxford and OpenAI researchers reveal about the truthfulness of language models compared to human performance?