There is a research paper titled "Logical Induction" that suggests the following, which I think partly answers your topic. Please correct me if I am wrong; I am keen to hear what you think about these conclusions:
"(1) it learns to predict patterns of truth and falsehood in logical statements, often long before having the resources to evaluate the statements, so long as the patterns can be written down in polynomial time; (2) it learns to use appropriate statistical summaries to predict sequences of statements whose truth values appear pseudorandom; and (3) it learns to have accurate beliefs about its own current beliefs, in a manner that avoids the standard paradoxes of self-reference."
RE: Can Machines Ever Have Beliefs?
The paper looks interesting, and I plan on giving it a closer look. Their methodology appears to rely on making decisions based on internal probabilities, and humans exhibit similar behavior. While this lacks any conception of feeling-based decisions, it does look like a good framework for making rational decisions. However, the philosophical conception of belief and the mathematical conception of belief might not be the same thing.
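As a rough sketch of what I mean by "making decisions based on internal probabilities": an agent that holds credences over outcomes and picks the action with the highest expected payoff. This is my own toy example, not the paper's framework; the names and payoff numbers are invented for illustration.

```python
# Minimal sketch: a decision rule that acts on internal probabilities by
# maximizing expected payoff, which is one reading of "decisions based on
# internal probabilities" (my illustration, not the paper's construction).

def choose_action(beliefs, payoffs):
    """beliefs: dict mapping outcome -> probability (the agent's credences).
    payoffs: dict mapping action -> {outcome: value}.
    Returns the action with the highest expected payoff under the beliefs."""
    def expected_value(action):
        return sum(beliefs[o] * v for o, v in payoffs[action].items())
    return max(payoffs, key=expected_value)

beliefs = {"statement_true": 0.7, "statement_false": 0.3}
payoffs = {
    "bet_true":  {"statement_true": 1.0, "statement_false": -1.0},
    "bet_false": {"statement_true": -1.0, "statement_false": 1.0},
    "abstain":   {"statement_true": 0.0, "statement_false": 0.0},
}
print(choose_action(beliefs, payoffs))  # -> "bet_true"
```

Whether acting this way amounts to "having beliefs" in the philosophical sense is exactly the open question; the sketch only shows the mathematical sense of belief doing work in a decision.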