Machine Learning
Machine learning is a family of models and algorithms that learn from data, i.e. they gain some “experience” by going through data. Learning can take many forms. In linear regression, learning can be seen as fitting a line (or hyperplane) to the data and then making predictions from the resulting weights for each attribute. In Support Vector Machines (SVMs), learning is similar at a high level, but it happens after the data has been translated into another representation by applying kernels. K-Nearest Neighbors “learns” by storing the data points and comparing a new point to its nearest neighbors in a high-dimensional space, and so on. Even though the results of these models feel like the results of natural learning in our brains, the learning dynamics are very different.
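To make the idea of “learning as fitting weights” concrete, here is a minimal sketch in NumPy; the data is synthetic and the variable names are my own illustration, not code from any particular library mentioned here:

```python
# Minimal sketch (illustrative data): "learning" in linear regression is just
# finding the weights of a line/hyperplane that best fits the data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # 100 samples, 3 attributes
true_w = np.array([2.0, -1.0, 0.5])           # made-up "true" weights
y = X @ true_w + rng.normal(scale=0.1, size=100)

# Add a bias column and solve the least-squares problem.
Xb = np.hstack([X, np.ones((100, 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

print("learned weights:", w[:-1], "bias:", w[-1])
print("prediction for a new point:", np.append(rng.normal(size=3), 1.0) @ w)
```

The “experience” here is nothing more than the data determining the values of `w`; prediction afterwards is a dot product with those learned weights.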
But there is a field within machine learning called deep learning, which started as an artificial implementation of the neurons, and layers of neurons, present in our brains. We have billions of neurons in our brain, highly interconnected (roughly in layers) through synapses that have different weights (that is, the strength or impact of a connection). In technical terms, the process of learning is the gradual change of the weights associated with the trillions of connections between those billions of neurons.
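As a rough sketch of what “layers of neurons connected by weighted synapses” looks like in code, here is a tiny forward pass through a two-layer artificial network; the layer sizes and activation function are arbitrary choices for illustration:

```python
# Minimal sketch of a layered artificial network: each layer is a weight
# matrix (connection strengths) followed by a non-linearity, loosely
# analogous to neurons connected through synapses of different strengths.
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(0.0, x)

# 4 inputs -> 8 hidden neurons -> 2 outputs (sizes chosen arbitrarily).
W1 = rng.normal(scale=0.1, size=(4, 8))
W2 = rng.normal(scale=0.1, size=(8, 2))

x = rng.normal(size=4)          # one input "stimulus"
hidden = relu(x @ W1)           # activity of the hidden layer
output = hidden @ W2            # activity of the output layer

# "Learning" would mean gradually changing the entries of W1 and W2.
print(output)
```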
The process of learning in deep learning may be comparable to the process of natural learning in the brain. But the subtle differences between these processes might decide whether we can build a machine just as smart as we are, a question that many great minds have pursued over the last few centuries.
The differences are these:
- Our brain does a very good job of changing the weights of those connections, which enables effective learning. Most deep learning models use an algorithm called backpropagation to change their weights. We are not confident that the brain actually implements backpropagation, and thus we are also uncertain whether backpropagation has the potential to make deep learning models as smart as we are (see the sketch after this list for what a backpropagation update looks like in code).
- With some exceptions, we use deep learning models that consist of neurons arranged in successive layers (hence the name “deep” learning), but we neglect potential connections between neurons within the same layer and other wiring patterns. As long as learning is only enabled for neural networks structured strictly in layers, we might miss the chance to make machines as smart as they could be.
- In some cases the learning dynamics might be completely different. Because the brain can push the weight of a connection anywhere from zero to its maximum value, it can exhibit a dynamic connectivity that is best understood as adding new connections and dropping old ones in real time.
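To illustrate the backpropagation mentioned in the first point above, here is a minimal sketch of gradient updates for a tiny one-hidden-layer network on toy data; it is a generic textbook-style formulation with made-up sizes and data, not a claim about how the brain or any specific framework does it:

```python
# Minimal sketch of backpropagation: compute the error at the output, then
# propagate gradients backwards through the layers and nudge every weight a
# little in the direction that reduces the error. Toy data, made-up sizes.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(32, 4))                  # 32 samples, 4 features
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0  # toy binary target

W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(500):
    # Forward pass
    h = np.tanh(X @ W1)
    p = sigmoid(h @ W2)

    # Backward pass: chain rule, layer by layer
    grad_out = (p - y) / len(X)               # gradient at the output layer
    grad_W2 = h.T @ grad_out
    grad_h = grad_out @ W2.T * (1 - h ** 2)   # back through the tanh
    grad_W1 = X.T @ grad_h

    # Gradual change of the weights -- the "learning"
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print("final accuracy:", ((p > 0.5) == y).mean())
```

Notice that the update is purely local arithmetic on weight matrices; whether anything like this gradient signal exists between biological neurons is exactly the open question raised in the first point.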
In many ways we can compare existing deep learning models to the learning mechanisms of the brain. But because we know the least about the learning dynamics and the networks of neurons in our brains, we might be missing the most important part of what contributes to intelligence, consciousness, memory, and so on. Many great people in deep learning and neuroscience are working to understand the actual mechanisms of learning and intelligence in our brains, and the ways to engineer the same in a machine.