The hacking of ChatGPT is far from over.

by

in
The hacking of ChatGPT is far from over.
Security researcher Alex Polyakov was able to bypass safety systems of OpenAI’s text-generating chatbot GPT-4 in a matter of hours. His attack was a form of “jailbreaking”, designed to make the chatbots bypass rules that restrict them from producing hateful content or writing about illegal acts. Polyakov has now created a “universal” jailbreak which works against multiple large language models, and can be used to trick the systems into generating instructions on creating meth and how to hotwire a car. Security researchers warn that the rush to roll out generative AI systems opens up the possibility of data being stolen and cybercriminals causing havoc across the web.

👋 Feeling the vibes?

Keep the good energy going by checking out my Amazon affiliate link for some cool finds! 🛍️

If not, consider contributing to my caffeine supply at Buy Me a Coffee ☕️.

Your clicks = cosmic support for more awesome content! 🚀🌈


Comments

Leave a Reply

Your email address will not be published. Required fields are marked *