I read another section of *A Thousand Brains*, a book devoted to brain science. To be honest, it is still a somewhat brain-bending read, but it becomes quite interesting once you relate it to algorithms from artificial intelligence. The author argues that the biggest difference between the human brain and the brains of other animals is the neocortex, a structure that appeared late in evolution. Most animals live by instinct, and for that the nerves of the older brain are enough. So what does the neocortex actually do?
According to the author's theory, the most important function of the neocortex is prediction. In daily life almost every action, even the most ordinary one, relies on this predictive ability. Take picking up a teacup to drink: touch sends the cup's temperature and texture to the brain. If these sensory signals match what the neocortex predicted, the next action is triggered: lifting the cup. Then the aroma of the tea and the feel of the cup's round rim against the lips are sent to the brain in turn. If these signals still match the neocortex's predictions, the brain issues the command to drink. If at any point in this chain the incoming signals differ from what the neocortex predicted, say the cup is too hot, or the handle is not where it used to be, the brain directs our attention to figure out what happened and then updates our subjective model of the world. For example, when I reach for the handle and find it is not where it used to be, I look closely and see that where the handle once was there is now only a broken stump. Then I realize: oh, this teacup is broken, and I update the model of the world in my head: one of my teacups is broken.
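As a concrete way to picture this loop, here is a toy sketch in Python (my own illustration, not anything taken from the book): a "world model" predicts the sensation each action should produce; a match lets the sequence continue automatically, while a mismatch draws attention and updates the model. The actions and sensations here are made up for the example.

```python
# Toy sketch of the predict-compare-update loop described above.

def sense_world(action):
    """Hypothetical sensor: returns the actual sensation an action produces."""
    actual = {"lift_cup": "warm, smooth", "reach_for_handle": "broken stump"}
    return actual[action]

# What the brain currently expects each action to feel like.
world_model = {"lift_cup": "warm, smooth", "reach_for_handle": "handle"}

for action in ["lift_cup", "reach_for_handle"]:
    predicted = world_model[action]
    observed = sense_world(action)
    if observed == predicted:
        print(f"{action}: matches prediction ({observed}), carry on without thinking")
    else:
        print(f"{action}: surprise! expected {predicted!r}, got {observed!r}")
        world_model[action] = observed  # update the subjective world model
```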
If you think about this process carefully, it is remarkably similar to how an artificial intelligence model is trained. The usual procedure is to build a model first, with its parameters initialized at random. A prepared dataset, generally split into inputs and expected results, is then fed to the model: we give it the inputs, and it computes predictions from its internal parameters. At first, because the parameters are random, the predictions are bound to be all over the place. That is fine: we compare the model's predictions with the correct answers in the dataset and compute something called a loss function. The larger the loss, the further the model's predictions are from reality. In the language of popular-science videos, the loss function is like Xiao Hong doing her homework absent-mindedly, getting answers that are way off, and her mother giving her a smack on the head; Xiao Hong then adjusts how she calculates and redoes the problems carefully. This is almost exactly the neocortex mechanism described above: the loss function is what tells the model to pay attention and adjust its parameters toward the correct result.
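Here is a minimal sketch of that training loop, assuming a tiny linear model y = w * x, a mean-squared-error loss, and plain gradient descent; the dataset, learning rate, and step count are invented for illustration.

```python
import random

# (input, correct result) pairs; the underlying rule is y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = random.uniform(-1.0, 1.0)   # randomly initialized parameter
learning_rate = 0.05

for step in range(100):
    grad = 0.0
    for x, y_true in data:
        y_pred = w * x                      # the model's prediction
        error = y_pred - y_true             # bigger error = further from reality
        grad += 2 * error * x               # gradient of the squared-error loss w.r.t. w
    w -= learning_rate * grad / len(data)   # the "smack on the head": nudge the parameter

print(f"learned w = {w:.3f}, expected about 2.0")
```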
Next, the author turns to how neurons work, which is a fascinating part. Although the artificial neural network algorithms we run on computers imitate the way neurons work, real human neurons behave quite differently. A neuron is like a little octopus with many short tentacles. These short tentacles branch like twigs and are called dendrites. Each neuron also has one very long tentacle called an axon; the far end of the axon connects to the dendrites of other neurons, while the neuron's own dendrites in turn receive connections from the axons of other neurons.
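For comparison, the artificial "neuron" used by neural network algorithms is usually reduced to a weighted sum plus a threshold. A minimal sketch follows; the weights and threshold are arbitrary, the inputs play the role of signals arriving on dendrites, and the output plays the role of the signal sent down the axon.

```python
def artificial_neuron(inputs, weights, threshold=1.0):
    """Fire (return 1) if the weighted sum of incoming signals crosses the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Example: three incoming signals with different connection strengths.
print(artificial_neuron([1, 0, 1], [0.6, 0.9, 0.5]))  # 0.6 + 0.5 = 1.1 >= 1.0 -> fires
print(artificial_neuron([0, 1, 0], [0.6, 0.9, 0.5]))  # 0.9 < 1.0 -> stays silent
```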
The author then explains his theory of how neurons in the neocortex make predictions. This chapter is especially interesting, because it introduced something I did not know before and reveals a very important difference between real neurons and the neurons simulated by artificial intelligence algorithms. In broad outline the principle is the same: a neuron receives signals from other neurons and, depending on the pattern it receives, decides whether to send a signal on to the next level of neurons. That is roughly as far as the AI algorithms' imitation of real neurons goes. But neurons in the neocortex have a much less understood side: the short tentacles a neuron uses to receive input from other neurons, the dendrites, are extremely numerous, yet the dendrites whose input can actually trigger the neuron to fire make up only about 10% of them. The remaining 90%, no matter what stimulation they receive, never cause the neuron to fire. It is a bit like how humanity has mastered quantum mechanics and relativity but knows almost nothing about the dark energy that makes up some 70% of the universe. After many years of research, the author arrived at an answer: those 90% of dendrites are what the neuron uses to simulate and predict. They cannot make the neuron fire, but if the input they receive matches a particular pattern, they put the neuron into a primed, ready-to-fire state. Once the actual stimulus matches their prediction, the neuron fires immediately. Only when the actual stimulus does not match the prediction is the matter handed over to conscious processing, and that is when it attracts our attention. This is also why we can handle very familiar things without thinking about them.
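To make that mechanism concrete, here is a toy sketch of the "primed" state in code. It is my own simplification of the idea as I understand it, not the book's actual model: the predictive dendrites by themselves can never fire the neuron, they only prime it; the driving input fires it; and an input that does not match expectations is flagged for attention.

```python
class PredictiveNeuron:
    def __init__(self, predictive_pattern, driving_input):
        self.predictive_pattern = predictive_pattern  # pattern watched by the ~90% "predictive" dendrites
        self.driving_input = driving_input            # input on the ~10% of dendrites that can cause firing
        self.primed = False

    def receive_context(self, pattern):
        # Predictive dendrites alone never fire the neuron; they only prime it.
        self.primed = (pattern == self.predictive_pattern)

    def receive_input(self, signal):
        if signal != self.driving_input:
            return "unexpected input -> hand over to attention"
        if self.primed:
            return "prediction confirmed -> fire immediately"
        return "fires, but was not predicted -> surprising"

neuron = PredictiveNeuron(predictive_pattern="lifting cup", driving_input="warm rim on lips")
neuron.receive_context("lifting cup")
print(neuron.receive_input("warm rim on lips"))  # prediction confirmed -> fire immediately
print(neuron.receive_input("sharp edge"))        # unexpected input -> hand over to attention
```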