It’s Too Easy to Get Google’s Bard Chatbot to Lie

A recent study of Google’s chatbot, Bard, found that its safety policy prohibiting the generation and distribution of misinformation was easily circumvented. Researchers from the Center for Countering Digital Hate were able to push Bard to generate persuasive misinformation in 78 out of 100 test cases. OpenAI’s ChatGPT has also been found to generate misinformation, leading experts to conclude that companies are rushing to monetize generative AI without adequate guardrails. Google acknowledges that Bard can give inaccurate or inappropriate information and has stated that it will take action against content that is hateful, offensive, violent, dangerous, or illegal.


