"Scientists Discover That the Brain Can Process Clickbait in 11 Dimensions !" - An example of approximative vulgarisation


So a few days ago, I was scrolling Facebook and saw an article shared by a popular science-popularisation page, titled Scientists Discover That Our Brains Can Process the World in 11 Dimensions [3]. Obviously, this sounded a bit clickbaity, but I was revising for a computational neuroscience class and the paper it mentioned was written at my university, EPFL (in Lausanne, Switzerland). So what better to do than actually read the paper (which apparently the author did not)?

tl;dr: The article is really bad and misleading; the brain does not encode 11 dimensions. What the paper actually describes is the ability to find the depth and distribution of millions of subnetworks of neurons within the brain.

The article starts off with an impressive claim:

"What they’ve discovered is that the brain is full of multi-dimensional geometrical structures operating in as many as 11 dimensions.".

This idea of "dimensions" are geometrical dimensions is replayed throughout the article. However, right in the introduction of the paper [2], the authors explain what they really mean by "dimensions" :

Networks are often analyzed in terms of groups of nodes that are all-to-all connected, known as cliques. The number of neurons in a clique determines its size, or more formally, its dimension.

Evidently, these cliques do not relate to or encode some set of hidden geometrical dimensions that the brain can process but that we cannot perceive.

So now onto what the paper actually talks about... We know that in the brain, neurons connect to one another and send each other impulses (spikes) based on stimuli or general network activity (which can be triggered by trying to recall a memory, recognising a visual stimulus, processing logic, etc.). This connection behaviour can be modelled as a graph: each neuron is a node, which can be connected to an arbitrary number of other nodes. Information is transmitted through this graph to accomplish any of the tasks mentioned earlier.
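To make the graph picture concrete, here is a minimal sketch using Python's networkx library; the neurons and synapses are invented purely for illustration:

```python
# A toy neuron graph: each node is a neuron, each directed edge means
# "sends spikes to". This connectivity is made up for illustration.
import networkx as nx

brain = nx.DiGraph()
brain.add_edges_from([
    ("n1", "n2"), ("n1", "n3"), ("n2", "n3"),
    ("n2", "n4"), ("n3", "n4"), ("n4", "n5"),
])

# Information travels along directed paths through the network.
for path in nx.all_simple_paths(brain, source="n1", target="n5"):
    print(" -> ".join(path))
```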

What Henry Markram's team was able to do is use topological arguments to find the depth and distribution of millions of these cliques in the brain. This is quite impressive, because it is very difficult to isolate logical groups of neurons within such a dense network; with this mathematical framework, they can achieve the isolation mathematically rather than physiologically. They are also able to model gaps (cavities), i.e. groups of neurons that don't communicate together, hence revealing the larger structure as well! They can even compute each clique's depth, which is thought to be an important characteristic of its ability to encode more abstract information: the deeper the clique, the more abstract the features it can represent, similarly to the layers of Artificial Neural Networks [4, 5].
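Sticking with the toy graph from above, here is what the clique-and-dimension bookkeeping from the quoted definition looks like. Keep in mind this is only a sketch: the paper works with directed cliques and full-blown algebraic topology, far beyond counting undirected cliques.

```python
# Count the maximal cliques (all-to-all connected groups) in the toy
# graph by size. Per the quoted definition, a clique's number of
# neurons gives its size, or more formally its dimension.
from collections import Counter
import networkx as nx

brain = nx.Graph([("n1", "n2"), ("n1", "n3"), ("n2", "n3"),
                  ("n2", "n4"), ("n3", "n4"), ("n4", "n5")])

sizes = Counter(len(clique) for clique in nx.find_cliques(brain))
for size, count in sorted(sizes.items()):
    print(f"{count} maximal clique(s) of {size} neurons")
# 1 maximal clique(s) of 2 neurons: {n4, n5}
# 2 maximal clique(s) of 3 neurons: {n1, n2, n3} and {n2, n3, n4}
```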

Once again, we have here an example of shaky science popularisation. Though it is extremely important that scientific research be communicated to the masses, it is important to do so with scientific rigour and method: don't interpret what you don't fully understand, don't exaggerate facts, and state the hypotheses within which you make your claims.

It is crucial that people be interested in scientific advances, but it is just as important that they not be misled. This is especially true nowadays, when myths about many domains (from AI to climate change) are widespread and could do with a little measure and a sprinkle of extra facts...

Disclaimer: I'm not claiming to be an expert on the subject, just a student, and I only want to express my frustration when I see approximate renditions of cool scientific advancements. Nevertheless, I'm happy to answer your questions about neuroscience to the best of my abilities or to spark a discussion about how science is communicated to the public. Do you have examples of good (or bad) science communication? Please share them or start a discussion in the comments!

[1] https://blog.keras.io/how-convolutional-neural-networks-see-the-world.html
[2] http://journal.frontiersin.org/article/10.3389/fncom.2017.00048/full
[3] https://futurism.com/scientists-discover-that-our-brains-can-process-the-world-in-11-dimensions/
[4] http://kvfrans.com/visualizing-features-from-a-convolutional-neural-network/
[5] https://cs231n.github.io/understanding-cnn/


Thanks for a nice article :) I have some views on neural nets. It's true that some features of neural nets are inspired by neuroscience, yet modern research on neural nets is mostly grounded in computer science, math and statistics; some of it builds on neuroscience, but not much. I would say a neural net is a universal function approximator rather than a model of the human brain. So looking at the intermediate layers of a neural net can only help us understand how the neural net perceives the input, not how our brain does. But for sure, understanding more about our brain can help shape the models we are using :)

That's very true: the design of artificial neural networks draws on biological inspiration. This is what McCulloch & Pitts tried to do when they formalised the first version of an artificial neuron [1]. Like you mentioned, though, current advances in NNs are driven by trying to better understand the statistical properties of these function approximators in order to make them better (and faster / more efficient), because we still need to run these things :D
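As a side note, one common textbook formalisation of their neuron fits in a few lines of Python; the weights and threshold below are picked so that it computes a logical AND, purely as an example:

```python
# A McCulloch & Pitts-style neuron: it fires (outputs 1) iff the
# weighted sum of its binary inputs reaches a fixed threshold.
def mcp_neuron(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return int(activation >= threshold)

# With unit weights and a threshold of 2, the neuron computes AND.
for a in (0, 1):
    for b in (0, 1):
        print(f"AND({a}, {b}) = {mcp_neuron((a, b), weights=(1, 1), threshold=2)}")
```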

Nevertheless, we do in fact suppose that some of the behaviour observed in neural networks' intermediate representations can help us understand the way our brain models information at different levels of abstraction. The paper mentions this at the end of the discussion section:

We conjecture that a stimulus may be processed by binding neurons into cliques of increasingly higher dimension, as a specific class of cell assemblies, possibly to represent features of the stimulus (Hebb, 1949 [2]; Braitenberg, 1978 [3]), and by binding these cliques into cavities of increasing complexity, possibly to represent the associations between the features (Willshaw et al., 1969 [4]; Engel and Singer, 2001 [5]; Knoblauch et al., 2009 [6]).

This is very similar to what is observed in ANNs, most notably CNNs (because images are fun to look at :p). I've linked the appropriate papers below if you're curious to check this out!
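If you want to poke at those intermediate representations yourself, here is a minimal Keras sketch; the tiny architecture is made up for illustration and is not the network from any of the linked posts:

```python
# Build a toy CNN, then expose the activations of an intermediate
# convolutional layer as the output of a second model.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(28, 28, 1)),
    keras.layers.Conv2D(8, 3, activation="relu", name="conv1"),
    keras.layers.Conv2D(16, 3, activation="relu", name="conv2"),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),
])

features = keras.Model(model.input, model.get_layer("conv2").output)
maps = features(np.random.rand(1, 28, 28, 1).astype("float32"))
print(maps.shape)  # (1, 24, 24, 16): sixteen feature maps for one image
```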

Thanks for the comments @manfredcml!

[1] http://www.mind.ilstu.edu/curriculum/mcp_neurons/mcp_neuron_1.php
[2] Hebb, D. (1949). The Organization of Behavior. New York, NY: Wiley & Sons.
[3] Braitenberg, V. (1978). “Cell assemblies in the cerebral cortex,” in Theoretical Approaches to Complex Systems, eds R. Heim and G. Palm (Berlin; Heidelberg: Springer), 171–188. Available online at: http://www.springer.com/cn/book/9783540087571
[4] Willshaw, D. J., Buneman, O. P., and Longuet-Higgins, H. C. (1969). Non-holographic associative memory. Nature 222, 960–962.
[5] Engel, A. K., and Singer, W. (2001). Temporal binding and the neural correlates of sensory awareness. Trends Cogn. Sci. 5, 16–25. doi: 10.1016/S1364-6613(00)01568-0
[6] Knoblauch, A., Palm, G., and Sommer, F. T. (2009). Memory capacities for synaptic and structural plasticity. Neural Comput. 22, 289–341. doi: 10.1162/neco.2009.08-07-588

Nice article!
When I hear "scientific clickbait" I can only think of the "Futurism" Facebook page, which has mastered the art of clickbait in the scientific domain.
I think I also came across their article about the 11 dimensions the brain can encode...
I think this miscommunication between the scientific world and the public is a major problem. Making people believe sometimes-false information like "we only use 10% of our brain" is not very serious in general, but sometimes this wrong comprehension of our progress can lead to fear.
I'm referring to all these discussions about A.I., when for now AIs are in general simple regression algorithms or basic "artificial neuron" networks (then again, an artificial neuron is often just a matrix multiplication) and still very far from the Terminator-like super robots people are imagining.

Unfortunately, this is a general issue in reporting, whether scientific or not. It is difficult to get people interested in nuance and incremental changes, so flashy headlines are the norm. This is a difficult problem to solve though...

The vocabulary we use to talk about these issues is also very important: when you hear "Artificial Intelligence learns how to read", there is an implicit tendency to associate all the cognitive functions linked to "intelligence" and "reading" with the system, leading people to believe that some basic sentient being has actually learned to read. This is of course completely wrong: a machine learning algorithm was able to make sense of (find correlations in) a big fat set of numbers. Very cool indeed, but there is no sense of meaning or purpose to it.
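To make that last point concrete, here is about all the "understanding" a simple learner has, sketched with numpy on made-up data: it recovers a linear correlation in a pile of numbers via least squares, nothing more.

```python
# A toy learner: least-squares regression on synthetic data. The
# "learning" is just finding the correlation in a set of numbers.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 1.0 + rng.normal(0, 0.5, size=100)  # hidden rule: y = 3x + 1

# Fit y ~ a*x + b by stacking a column of ones for the intercept.
A = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"learned: y = {a:.2f}x + {b:.2f}")  # close to y = 3.00x + 1.00
```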