All Stories
March 2023
-
Self-supervised training: a singularity without warning?
Can an AI conceal the fact that its goals are not correctly aligned with those of its human designers or users (misalignment)?
-
Novel Prompt Injection Threats to Application-Integrated Large Language Models
Expanding AI Threat Landscape: Untrusted Data Injection Attacks on Application-Integrated LLMs.
-
Meta LLaMA leaked: Private AI for the masses
AI Governance Dilemma: Leaked Llama Model Outperforms GPT-3! Explore the debate on trust, policy, and control as cutting-edge AI slips into the public domain.
-
Adversarial Threat Landscape for Artificial-Intelligence Systems
If your organisation undertakes adversarial simulations, learn about MITRE ATLAS.
-
Upgrade your Unit Testing with ChatGPT
Companies with proprietary source code can use public AI to generate regular and adversarial unit tests without disclosing their complete codebase to that AI.
-
Backdoor Attack on Deep Learning Models in Mobile Apps
This MITRE ATLAS case study helps bring the framework to life.
-
AI-powered building security, minus bias and privacy pitfalls?
Facial recognition has lodged itself in people’s minds as the de facto technology for visual surveillance, and we should all find that quite disturbing!
-
Do you want to star in co-appearance?
“Co-appearance” sounds like a movie credit, but, in this case, you might not have signed up for the role.
-
Does AI need Hallucination Traps?
Six million people viewed the post, but only one reported an error by the AI.
-
Companies blocking ChatGPT
Enterprise companies are reportedly restricting their employees from using ChatGPT due to security and privacy concerns.
Page 7 of 11