Use ChatGPT to examine every npm and PyPI package for security issues
In just two days, Socket was able to identify and confirm 227 packages that were either vulnerable or contained malware within popular software package repositories:
Socket is now utilizing AI-driven source code analysis with ChatGPT to examine every npm and PyPI package. When a potential issue is detected in a package, we flag it for review and request ChatGPT to summarize its findings. As with all AI-based tools, there may be false positives, and we will not enable this as a default, blocking issue until more feedback is gathered.
One of the core tenets of Socket is to allow developers to make their own judgments about risk so that we do not impede their work. Forcing a developer to analyze every install script, which could cross into different programming languages and even environments, is a lot to ask, especially if it turns out to be a false positive. AI analysis can assist with this manual audit process. When Socket identifies an issue, such as an install script, we also show the open-source code to ChatGPT to assess the risk. This can significantly speed up determining whether something truly is an issue, because an extra reviewer, in this case ChatGPT, has already done some preliminary work.
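For illustration, here is a minimal sketch of what that kind of review step could look like, using the OpenAI chat completions API to ask for a risk summary of a package's install script. The prompt, model choice, helper name, and file path are my own assumptions, not Socket's actual pipeline:

```python
# Hypothetical sketch: ask ChatGPT to summarize the risk of a package install script.
# The prompt wording, model, and paths below are assumptions for illustration only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def summarize_install_script_risk(script_source: str) -> str:
    """Send install-script source to ChatGPT and return a short risk summary."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a security reviewer. Given a package install script, "
                    "summarize what it does and flag data exfiltration, credential "
                    "theft, backdoors, or other suspicious behavior."
                ),
            },
            {"role": "user", "content": script_source},
        ],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"]

# Example: review a package's postinstall script before trusting it.
with open("node_modules/suspicious-pkg/postinstall.js") as f:
    print(summarize_install_script_risk(f.read()))
```

As with Socket's own tooling, the output of a sketch like this is a preliminary summary for a human reviewer, not a verdict.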
What classes of vulnerability did they find? Information exfiltration, injection vulnerabilities, exposed credentials, potential vulnerabilities, backdoors and prompt poisoning…