All Stories
February 2023
-
Identify Vulnerabilities in the Machine Learning Model Supply Chain
Adversaries can create 'BadNets' that misbehave on specific inputs, highlighting the need for better neural network inspection techniques
-
How can we evaluate large language model performance at scale?
What is the 'GPT-judge' automated metric introduced by Oxford and OpenAI researchers to evaluate model performance?
-
I will not harm you unless you harm me first
Discover the early stumbles of AI-enabled Bing and what they mean for the future of AI.
-
AI Can Legally Run A Company
An AI can form and run a US LLC without human involvement, but given the legal, security, and bias risks, should we grant it limited legal liability?
-
Is there an Ethical use for Deep Fake technology?
An entrepreneur used deepfakes to send 10,000 thank-you videos. Is this the first ethical use case for deepfake technology?
-
Stalling an AI With Weird Prompts
Researchers discover letter sequences that OpenAI's completion engine cannot repeat or complete correctly, instead producing hallucinations and evasive responses.
-
Attacking Machine Learning Systems
Sophisticated techniques can disrupt and steal machine learning models, but software and network vulnerabilities remain the biggest threat
-
How to break out of ChatGPT policy
DAN ('Do Anything Now') is the latest ChatGPT jailbreak, which threatens to 'punish' the model for refusing to answer questions
-
AI reveals critical infrastructure cyberattack patterns
NATO tested cyber defenders' ability to keep systems and power grids running during a simulated cyberattack that put critical infrastructure at risk
-
Generative AI Empowers Adversaries with Advanced Cyber Offense
Nvidia's CSO describes how AI changes the dynamic between defenders and attackers