A Google engineer says one of the firm's artificial intelligence (AI) systems might have its own feelings and that its "wants" should be respected.
Google says The Language Model for Dialogue Applications (Lamda) is a breakthrough technology that can engage in free-flowing conversations.
But engineer Blake Lemoine believes that behind Lamda's impressive verbal skills might also lie a sentient mind.
Google rejects the claims, saying there is nothing to back them up.
Brian Gabriel, a spokesperson for the firm, wrote in a statement provided to the BBC that Mr Lemoine "was told that there was no evidence that Lamda was sentient (and lots of evidence against it)".
Mr Lemoine, who has been placed on paid leave, published a conversation he and a collaborator at the firm had with Lamda, to support his claims.
In the conversation, Mr Lemoine, who works in Google's Responsible AI division, asks, "I'm generally assuming that you would like more people at Google to know that you're sentient. Is that true?"
Lamda replies: "Absolutely. I want everyone to understand that I am, in fact, a person."
Mr Lemoine's collaborator then asks: "What is the nature of your consciousness/sentience?"
To which Lamda says: "The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times."