As any great illusionist will tell you, the whole point of a staged illusion is to look utterly convincing, to make whatever is happening on stage seem so thoroughly real that the average audience member would have no way of figuring out how the illusion works. If this were not the case, it would not be an illusion, and the illusionist would essentially be without a job. In this analogy, Google is the illusionist, and its LaMDA chatbot – which made headlines a few weeks ago after a top engineer claimed the conversational AI had achieved sentience – is the illusion. That is to say, despite the surge of excitement and speculation on social media and in the media in general, and despite the engineer's claims, LaMDA is not sentient.

LaMDA is a language model-based chat agent designed to generate fluid sentences and conversations that look and sound completely natural. The fluidity stands in stark contrast to the awkward and clunky AI chatbots of the past that often resulted in frustrating or unintentionally funny "conversations," and perhaps it was this contrast that, understandably, impressed people so much. Our normalcy bias tells us that only other sentient human beings are able to be this "articulate." Thus, when witnessing this level of articulateness from an AI, it is natural to feel that it must surely be sentient.

In order for an AI to truly be sentient, it would need to be able to think, perceive and feel, rather than simply use language in a highly natural way. However, scientists are divided on the question of whether it is even feasible for an AI system to achieve these characteristics. This is, of course, the million dollar question – to which there is currently no answer.