Supermathematics and Artificial General Intelligence


This thread concerns attempts to construct artificial general intelligence, which, as I often underline, may well be mankind's last invention.

Below, I lay out how I came to invent the supermanifold hypothesis in deep learning (a component of a broader description called 'thought curvature'), in relation to quantum computation.

I am asking anybody who knows supermathematics and machine learning to pitch in to the discussion below.

Part A - Babies know physics, plus they learn


Back in 2016, I read that babies intuitively know some physics.
It is also empirically observable that babies use that intuition to develop abstractions of knowledge, in a reinforcement-learning-like manner.

Part B - Algorithms for reinforcement learning and physics


Now, I already knew of two major types of deep learning models:

(1) models that use reinforcement learning (DeepMind's Atari DQN; the underlying update rule is sketched just below);
(2) models that learn laws of physics (UETorch).
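
For orientation, here is a minimal sketch of the tabular Q-learning update that models like (1) build on; DQN wraps this same Bellman target in a deep network loss. All sizes and names below are illustrative, not taken from the DQN paper:

```python
import numpy as np

# Minimal tabular Q-learning, the update rule that DQN generalizes
# with a deep network. All sizes and names here are illustrative.
n_states, n_actions = 16, 4
alpha, gamma = 0.1, 0.99          # learning rate, discount factor

Q = np.zeros((n_states, n_actions))

def q_update(s, a, r, s_next):
    """One temporal-difference step toward the Bellman target r + gamma * max Q."""
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

# Example transition: in state 0, action 2 yielded reward 1.0, landing in state 5.
q_update(0, 2, 1.0, 5)
```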

However:

(a) Object detectors like (2) use something called pooling to gain translation invariance over objects, so that the model learns the object regardless of where it is positioned in the image.
(b) By contrast, (1) excludes pooling, because it requires translation variance, so that Q-learning can act on the changing pixel positions of objects (see the pooling sketch below).
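
To make the tension between (a) and (b) concrete, here is a minimal NumPy sketch, using a toy 8x8 image and an illustrative pool size, showing how max pooling discards a small translation; that discarded position information is exactly what Q-learning needs to keep:

```python
import numpy as np

def max_pool(img, k=4):
    """Non-overlapping k x k max pooling over a 2-D array."""
    h, w = img.shape
    return img[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).max(axis=(1, 3))

# A toy 8x8 "image" with a single bright object near the top-left.
img = np.zeros((8, 8))
img[1, 1] = 1.0

# The same object shifted 2 pixels right (still within one pool window).
shifted = np.roll(img, shift=2, axis=1)

# The pooled maps are identical even though the raw pixels differ:
# pooling has discarded the small translation (invariance, as in (a)),
# which is exactly the position information Q-learning needs (as in (b)).
assert np.array_equal(max_pool(img), max_pool(shifted))
print(max_pool(img))
```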

Part C - I sought to create...


As a result, I sought a model that could deliver both translation invariance and translation variance at the same time; reasonably, part of the solution was models that disentangle factors of variation, i.e. manifold learning frameworks (one concrete instance is sketched below).
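
As one illustrative instance of such disentangling (my choice of example here; this post does not commit to a specific architecture), a beta-VAE-style objective pressures each latent coordinate toward an independent factor of variation. A minimal PyTorch-style sketch:

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """beta-VAE objective: reconstruction error plus a beta-weighted KL term.

    With beta > 1, the KL pressure pushes each latent coordinate toward an
    independent factor of variation, i.e. a disentangled coordinate on the
    data manifold.
    """
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # Closed-form KL( q(z|x) || N(0, I) ) for a diagonal Gaussian encoder.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```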

I didn't stop my scientific thinking at manifold learning, though.

Given that cognitive science may be used to constrain machine learning models (similar to how firms like DeepMind often use cognitive science as a boundary on the deep learning models they produce), I sought to create a disentanglable model that was as constrained by cognitive science as the algebra would permit.

Part D - What I did to approach the problem...


As a result, I created something called the supermanifold hypothesis in deep learning (a part of a system called 'thought curvature').

This was motivated by reported evidence of supersymmetry in cognitive science; I compacted the machine learning algebra for disentangling into the regime of supermanifolds. This can be seen as an extension of manifold learning in artificial intelligence.

Given that the supermanifold hypothesis concerns the expression ϕ(x, θ, θ̄)ᵀw, here is an annotation of the hypothesis:

  1. Deep learning entails ϕ(x; θ)ᵀw, which denotes the input space x and learnt representations θ.
  2. Deep learning underlines that coordinates or latent spaces in the manifold framework are learnt features/representations, or directions that are sparse configurations of coordinates.
  3. Supermathematics entails (x, θ, θ̄), which denotes some x-valued coordinate distribution, and by extension, directions that compact coordinates via θ, θ̄.
  4. As such, the aforesaid (x, θ, θ̄) is subject to coordinate transformation.
  5. Thereafter, items 1 through 4, together with supersymmetry in cognitive science and the generalizable nature of Euclidean space, reasonably effectuate ϕ(x, θ, θ̄)ᵀw.
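
To pin the notation down, here is the hypothesis restated in LaTeX. The anticommutation relations below are standard properties of Grassmann coordinates in supermathematics; reading the deep learning expression over them is the hypothesis itself, not an established result:

```latex
% Item 1: the standard deep learning representation,
%   y = \phi(x;\theta)^{T} w.
% The supermanifold hypothesis extends the coordinates (x,\theta) to
% supermanifold coordinates (x,\theta,\bar{\theta}), where \theta and
% \bar{\theta} are anticommuting (Grassmann) directions:
\[
  y = \phi(x,\theta,\bar{\theta})^{T} w,
  \qquad
  \theta_i \theta_j = -\,\theta_j \theta_i,
  \qquad
  \theta_i^{2} = 0.
\]
```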

Part E - A possible experiment: a transverse-field Ising spin (super-)Hamiltonian quantum computation
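
For reference, here is the standard (non-super) transverse-field Ising Hamiltonian that the proposed experiment builds on; the super-Hamiltonian construction itself is detailed in the linked article:

```latex
% Transverse-field Ising model: J couples neighbouring spins along z,
% and h is the strength of the transverse field along x.
\[
  H = -J \sum_{\langle i,j \rangle} \sigma^{z}_{i}\,\sigma^{z}_{j}
      \;-\; h \sum_{i} \sigma^{x}_{i}
\]
```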


Article Download


Here is this article in PDF form.

Questions

Does anybody here have good knowledge of supermathematics or a related field, to give any input on the above?

If so, is it feasible to pursue the model I present in the paper?

And if so, apart from the ones discussed in the paper, what types of p̂_data (training samples) do you gather would warrant reasonable experiments in the regime of the model I presented?
