I will not harm you unless you harm me first
AI-enabled Bing is open to early access users, and Simon Willison is tracking the early stumbles:
- The demo was full of errors
- It started gaslighting people
- It suffered an existential crisis
- The prompt leaked
- And then it started threatening people
The past few months saw meteoric adoption of OpenAI's ChatGPT. Yet I'm already sensing an emerging trough of disillusionment with AI. Outcomes like these will fuel that feeling and worry policy-makers into assuming a risk-averse foetal position!
Why is Bing responding like this? Simon contrasts OpenAI's implementation of ChatGPT with Microsoft's adoption of the same underlying technology, which has produced very different outcomes (so far).
This is well worth a read if you promote AI in your workplace or influence policy.
Related Posts
- Constitutional AI: Scaling supervision for improved transparency and accountability in reinforcement learning from human feedback systems.
- Defending Against Deepfakes: Have you agreed on a safe word with your loved ones yet?
- AI Can Legally Run a Company: AI can form and run a US LLC without humans, but given the legal liability, security risks, and potential bias involved, should we grant it limited legal liability?