How do I know GPT is actually thinking and not imitating?


Consider this query:

"Is it feasible to cycle from Paris, Texas, to Paris, France?"

It's possible no one has ever asked that question before.

To answer it, GPT must have a mental model of the world -- a mental map of where everything is. This has been shown empirically, and we can also reason that some kind of geospatial model is required to handle novel spatial relationships. How about:

"Is my keyboard closer to my mouse than Paris is to London?"

Answering that requires combining several different notions of distance -- the scale of everyday desk objects and the separation between cities -- into a single comparison, as the sketch below illustrates.
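For a sense of the scales being compared, here is a minimal sketch (ordinary Python, not anything GPT itself runs) that contrasts the great-circle distance between Paris and London with an assumed keyboard-to-mouse distance of about 20 cm; the coordinates and desk distance are illustrative assumptions.

```python
# Rough comparison of the two distances the question asks about.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

paris_london_km = haversine_km(48.8566, 2.3522, 51.5074, -0.1278)  # roughly 340 km
keyboard_mouse_km = 0.20 / 1000  # assumed 20 cm between keyboard and mouse, in km

print(f"Paris to London: {paris_london_km:.0f} km")
print(f"Keyboard to mouse: {keyboard_mouse_km} km")
print("Keyboard is closer to mouse:", keyboard_mouse_km < paris_london_km)
```

The point is not the arithmetic itself but that the answer requires holding two very different kinds of distance -- desk-scale and city-scale -- in one consistent frame of reference.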

You can ask GPT to diagnose why your horse is sick, to explain how Aristotle's idea of friendship applies to your relationship, or to draw on any of the billions of concepts it understands. GPT has built a single worldview that integrates all of these concepts.

For some human supremacists, belief in human uniqueness is a non-falsifiable religious dogma, and no evidence could convince them otherwise. Clinging to the belief that "humans are unique" lets you rationalize away any evidence to the contrary, even in a world where AI has taken your job and provides better emotional support to your partner than you can. Approaching the question instead with the mindset of "exploring the differences between AI and human capabilities" opens the door to fascinating discoveries.
