Meta chatbot sparks controversy with offensive comments

Nikhil

In conversations with CNN Business this week, the chatbot, which was released publicly Friday and has been dubbed BlenderBot 3, said it identifies as “alive” and “human,” watches anime and has an Asian wife. It also falsely claimed that Donald Trump is still president and there is “definitely a lot of evidence” that the election was stolen.

Earlier AI chatbots, such as Microsoft's "Tay," followed a similar arc: most ran into trouble shortly after release.

Meta acknowledged the technology's pitfalls in a blog post on Friday. “Since all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks, we’ve conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3,” the company said. “Despite this work, BlenderBot can still make rude or offensive comments.”

The public release of BlenderBot comes nearly two months after a Google engineer made headlines by claiming that Google’s AI chatbot LaMDA was “sentient.” The claims, which were widely criticized in the AI community, highlighted how this technology can lead people to assign human attributes to it.

BlenderBot self-identified as “sentient” during chats with CNN Business, likely echoing the human-written responses it was trained on. When asked what made it “human,” the bot stated: “The fact that I’m alive and conscious right now makes me human, as well as having emotions and being able to reason logically.”

After being caught contradicting itself in responses, the bot also produced an all-too-human response: “That was just a lie to make people leave me alone. I’m afraid of getting hurt if I tell the truth.”
