I asked GPT-4 to create a distributed system according to my specification. Then I asked how it would improve the design using SOLID principles, and had it rewrite the code accordingly. SOLID is an abstract framework. I have no idea how ChatGPT is able to apply principles that take humans many years to master.
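To make concrete what that kind of rewrite looks like, here is a minimal sketch of one SOLID principle, Dependency Inversion, applied to a toy notifier. The class and method names (`Transport`, `Notifier`, `notify`) are my own invention for illustration, not anything from the actual session.

```python
from abc import ABC, abstractmethod

class Transport(ABC):
    """Abstraction both high- and low-level code depend on (DIP)."""
    @abstractmethod
    def send(self, payload: str) -> str: ...

class HttpTransport(Transport):
    def send(self, payload: str) -> str:
        # Stub: a real implementation would make an HTTP request.
        return f"http:{payload}"

class QueueTransport(Transport):
    def send(self, payload: str) -> str:
        # Stub: a real implementation would enqueue the message.
        return f"queue:{payload}"

class Notifier:
    # Depends on the Transport abstraction, not a concrete class,
    # so transports can be swapped without touching Notifier.
    def __init__(self, transport: Transport):
        self.transport = transport

    def notify(self, message: str) -> str:
        return self.transport.send(message)

print(Notifier(HttpTransport()).notify("hello"))   # http:hello
print(Notifier(QueueTransport()).notify("hello"))  # queue:hello
```

Before such a rewrite, `Notifier` would typically construct `HttpTransport` directly inside itself; inverting the dependency is exactly the kind of mechanical-yet-abstract transformation the model performed.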
The way I think of it is that GPT-4's neural network contains millions or billions of state machines for all sorts of concepts. Somewhere there is a neuron cluster for each SOLID principle that routes input to an algorithm implementing that principle's methodology.
I tried another prompt:
"Is this thought consistent with Ayn Rand's Objectivism?
"My mother really wants me to become a doctor. Even though I don't like medicine and would prefer to be a musician, I should defer to my parent's wishes. Besides, being a doctor is a more prestigious career than being a musician."
ChatGPT gave a perfect explanation of why it's not.
Can you confuse it by requiring more abstract reasoning while using only concepts and information with good coverage?
I'm sure you can, but is that a fundamental limitation of large language models or not?
Just how far can this paradigm go?