https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html
Been thinking a lot about AI lately. This Noam Chomsky article is interesting. I have a great deal of respect for the man, but I think he misses the mark pretty badly.
In particular, he uses the example of how a child, with no actual instruction, intuitively develops an understanding of grammatical rules, and contrasts that with ChatGPT, which he casts as utterly different and comically shallow.
Yet from what I understand of deep neural nets, especially ones like AlphaGo, the child-learning-from-nothing model seems very, very similar to what the AIs are doing. Exposed to a broad set of examples, the network infers what is right in a given situation, but can't readily explain how it knows.
I read a metaphor recently that rings true. The debate about whether AIs are now or will soon be "thinking" is very similar to the debate about whether a submarine can swim. The answer depends on how you define swimming, but the whole question is mostly a rhetorical distraction. The real issue is whether it can accomplish the things we do through swimming, and for submarines the answer is yes, and much better/longer/deeper/faster. Whether it's technically swimming isn't really relevant.
Likewise, I literally don't care if the AIs are "thinking". I care if they're coming up with answers (and questions!) that are insightful and clever, and this long-term skeptic is quickly being won over to the yes side.