Google AI wasn’t taught Bangla in its tests, yet it learnt the new language on its own

One of Google’s AI programmes could respond to a query in Bangla, even though it was not programmed to do so. The company’s CEO Sundar Pichai, in an interview, talked about how they don’t yet fully understand AI and what it is capable of doing.

By Divyanshi Sharma: With the popularity of AI chatbots like Google’s Bard, OpenAI’s ChatGPT, and Microsoft’s Bing, generative AI has been talked about a great deal in the last few months. While some are intrigued by the emerging technology and are doing their best to stay on top of it, others are more sceptical and point to its possible downsides. Even Google CEO Sundar Pichai, in a recent interview, warned that AI could turn harmful if not deployed properly. OpenAI CEO Sam Altman, for his part, admitted in an earlier interview that he is somewhat ‘scared’ of his own creation.

Scared or not, Altman unleashed ChatGPT on the world in November 2022, and the AI chatbot has come a long way since then. Following a similar pattern is Google’s Bard, which is currently accessible to selected users in the US and the UK. However, what if we told you that one of Google’s AI programmes could respond to a query in Bangla, even though it was not programmed to do so? Yep, you read that right. Let’s talk about ‘emergent properties’ and how one of Google’s AI programmes taught itself the language even though it wasn’t trained to know it.

Google AI teaches itself Bangla

AI behaving in ways it isn’t supposed to, or teaching itself new skills, is a concept that has been widely explored in popular fiction. But few expected that in 2023 we would be hearing about AI displaying emergent behaviour and doing things it wasn’t programmed to do. If you think this sounds like the beginning of a ‘Black Mirror’ episode, you are not alone.

Nevertheless, coming back to Google’s AI programme that taught itself a new language, the news was reported by CBS News recently. In a short video shared by the publication on Twitter, Google CEO Sundar Pichai can be seen talking about the emergent properties of AI and why we must begin discussing them.

The video shows Google’s AI programme responding to a query in Bangla, even though it is not supposed to know the language.

Sundar Pichai can be seen describing an aspect of AI known as the ‘black box’ in the video. He says, “There is an aspect of this which all of us in the field call a ‘black box’. You don’t fully understand, and you can’t tell why it said this, or why it got it wrong. We have some ideas, and our ability to understand this gets better over time. But that’s where the state of the art is.”

When the interviewer asks Pichai about not fully understanding how AI works yet unleashing it on society, the Google CEO responds, “Let me put it this way. I don’t think we fully understand how the human mind works either.”

In the same video, Pichai is asked about a particular short story written by Google’s Bard that seemed to be ‘disarmingly human’.

“It talked about the pain humans feel, it talked about redemption. How did it do all of those things if it’s just trying to figure out what the next right word is?” the interviewer asks.

Sundar Pichai responds, “There are two views of this. One set of people think that these are just algorithms repeating what they’ve seen online. Then there is a view where these algorithms are showing emergent properties: to be creative, reason, plan, and so on. And personally, I think we need to approach this with humility. It’s good that these technologies are getting out so that society can process what’s happening, and we (can) begin this debate. And I think it’s important to do that.”

Google engineer who claimed AI was sentient

In June last year, Google suspended an engineer who had claimed that an AI chatbot developed by the tech giant had become ‘sentient’. It is worth noting that in June 2022, the world didn’t yet know about ChatGPT, and other AI tools of its kind hadn’t been released. The Google engineer’s claims of AI being sentient made headlines, and his suspension raised eyebrows. The engineer, Blake Lemoine, had also claimed that the AI chatbot developed by Google ‘thinks and responds like a human being’.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Lemoine had said in conversation with the Washington Post at the time.

Following his claims, Lemoine was suspended by Google and placed on paid leave. Eventually, he was fired from the company. The tech giant alleged that Lemoine had breached its confidentiality policy and rejected his statements.
