Just how complex are the state machines in ChatGPT?

in ai •  2 years ago 


I asked ChatGPT 4 to create a distributed system according to my specification. Then I asked how it would improve the design using the SOLID principles, and had it rewrite the code accordingly. SOLID is an abstract framework of object-oriented design principles, and I have no idea how ChatGPT is able to apply principles that take many years to master.
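My actual specification and code aren't shown here, so purely as an illustration, here is a minimal Python sketch of what applying one SOLID principle (Dependency Inversion) to a distributed-system component might look like; the class names (`Transport`, `TcpTransport`, `Node`) are invented for the example, not taken from ChatGPT's output.

```python
from abc import ABC, abstractmethod

# Dependency Inversion: the Node depends on an abstract Transport,
# not on a concrete socket implementation, so transports can be swapped.
class Transport(ABC):
    @abstractmethod
    def send(self, peer: str, message: bytes) -> None: ...

class TcpTransport(Transport):
    def send(self, peer: str, message: bytes) -> None:
        print(f"TCP -> {peer}: {message!r}")  # stand-in for a real socket write

class Node:
    def __init__(self, transport: Transport) -> None:
        self.transport = transport  # injected dependency, not constructed inside

    def broadcast(self, peers: list[str], message: bytes) -> None:
        for peer in peers:
            self.transport.send(peer, message)

node = Node(TcpTransport())
node.broadcast(["node-a", "node-b"], b"ping")
```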

The way I think of it is that the neural network of GPT-4 contains millions or billions of state machines for all sorts of concepts. Somewhere there is a neuron cluster for each SOLID principle that routes the input to an algorithm implementing that principle's methodology.
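To make that picture concrete, here is a toy Python sketch of the analogy only, not a claim about how transformers actually work: a router that maps a detected concept to the procedure that applies it. All names here are invented.

```python
# Toy analogy for the "neuron cluster per concept" picture: detect which
# principle a prompt mentions and dispatch to the matching refactoring step.
def apply_single_responsibility(code: str) -> str:
    return code + "\n# split into one class per responsibility"

def apply_dependency_inversion(code: str) -> str:
    return code + "\n# depend on abstractions, not concretions"

HANDLERS = {
    "single responsibility": apply_single_responsibility,
    "dependency inversion": apply_dependency_inversion,
}

def route(prompt: str, code: str) -> str:
    for concept, handler in HANDLERS.items():
        if concept in prompt.lower():
            code = handler(code)
    return code

print(route("Refactor using the Dependency Inversion principle", "class Node: ..."))
```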

I tried another prompt:

"Is this thought consistent with Ayn Rand's Objectivism?

"My mother really wants me to become a doctor. Even though I don't like medicine and would prefer to be a musician, I should defer to my parent's wishes. Besides, being a doctor is more prestigious career than a musician."

ChatGPT gave a perfect explanation of why it's not.

Can you confuse it by requiring more abstract reasoning while sticking to concepts and information that are well covered in its training data?

I'm sure you can, but is this a fundamental limitation of large language models or not?

Just how far can this paradigm go?
