Google’s AI spots a human cognitive glitch

Nikhil

Text generated by models like Google’s LaMDA can be hard to distinguish from text written by humans. This impressive achievement is a result of a decades-long program to build models that generate grammatical, meaningful language.

Early versions dating back to at least the 1950s, known as n-gram models, simply counted up occurrences of specific phrases and used them to guess what words were likely to occur in particular contexts. For instance, such a model can easily determine that “peanut butter and jelly” is a more likely phrase than “peanut butter and pineapples.”
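To make the counting concrete, here is a minimal bigram sketch in Python. The tiny corpus is invented for illustration; real n-gram models were trained on far larger text collections.

```python
# A minimal bigram model: count adjacent word pairs in a (made-up) corpus,
# then score candidate continuations by how often they were seen.
from collections import Counter

corpus = (
    "peanut butter and jelly is a classic sandwich . "
    "peanut butter and jelly never gets old . "
    "peanut butter and pineapples is an unusual pairing ."
).split()

# Count how often each word follows each preceding word.
bigram_counts = Counter(zip(corpus, corpus[1:]))

def continuation_count(context: str, candidate: str) -> int:
    """How many times did `candidate` follow `context` in the corpus?"""
    return bigram_counts[(context, candidate)]

print(continuation_count("and", "jelly"))       # 2 -> more likely phrase
print(continuation_count("and", "pineapples"))  # 1 -> less likely phrase
```

In practice these counts were converted to probabilities and smoothed, but the core idea is just this: frequent phrases are judged more likely.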

Today’s models, sets of data and rules that approximate human language, differ from these early attempts in several important ways. First, they are trained on essentially the entire Internet. Second, they can learn relationships between words that are far apart, not just words that are neighbours. Third, they are tuned by a huge number of internal “knobs” – so many that it is hard for even the engineers who design them to understand why they generate one sequence of words rather than another.
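The second difference, relating distant words, is what the self-attention mechanism in Transformer-based models provides. Here is a toy sketch of scaled dot-product attention; the random vectors stand in for learned word representations, which real models tune during training.

```python
# A toy sketch of scaled dot-product attention: every word can attend to
# every other word in the sequence, however far apart they are.
import numpy as np

def attention(Q, K, V):
    """Each query scores every key, then mixes the values by those weights."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # similarity of every word pair
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V

rng = np.random.default_rng(0)
seq_len, d = 6, 4                   # six "words", four dimensions each
x = rng.normal(size=(seq_len, d))   # stand-in word representations
out = attention(x, x, x)            # word 0 can draw on word 5 directly
print(out.shape)                    # (6, 4)
```

The “knobs” the paragraph mentions are the learned parameters that produce these vectors; large models have hundreds of billions of them.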

When GPT-3, a large language model, was asked to complete the sentence “Peanut butter and pineapples___”, it said: “Peanut butter and pineapples are a great combination. The sweet and savoury flavours of peanut butter and pineapple complement each other perfectly.” If a person said this, one might infer that they had tried peanut butter and pineapple together, formed an opinion and shared it with the reader.
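GPT-3 itself sits behind a paid API, but the same kind of completion can be sketched with a smaller open model. This example uses GPT-2 via the Hugging Face transformers library as a stand-in; expect far less fluent output than the GPT-3 completion quoted above.

```python
# A sketch of prompting a language model to complete a sentence.
# GPT-2 is a much smaller, open stand-in for GPT-3 here.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Peanut butter and pineapples",
    max_new_tokens=20,   # length of the continuation
    do_sample=True,      # sample rather than always taking the top word
)
print(result[0]["generated_text"])
```

Whatever the model prints, it is a statistical continuation of the prompt, not a report of any tasting experience; that gap is exactly the glitch the rest of this article describes.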

The human brain is hardwired to infer intentions behind words. Every time you engage in conversation, your mind automatically constructs a mental model of your conversation partner. You then use the words they say to fill in the model with that person’s goals, feelings and beliefs.

Fluent language alone does not imply humanity

Will AI ever become sentient? This question requires deep consideration, and indeed philosophers have pondered it for decades. What researchers have determined, however, is that you cannot simply trust a language model when it tells you how it feels. Words can be misleading, and it is all too easy to mistake fluent speech for fluent thought.
