We live in a chaotic world where life-saving information rapidly dissipates, and with it our chances of surviving in such a harsh environment. Nonetheless, the nervous system features an astonishing degree of behavioral and neural plasticity, allowing us [i] to respond rapidly to changes in the environment and [ii] to transform labile events into persistent memory traces.
The central role of memory processes
Memory is a core pillar of every cognitive system, and it is the scaffolding that allows human beings to develop a self-consistent life. For this reason, understanding how neural tissue builds and stores memories is considered one of the most fundamental goals of neuroscience research.
The untold assumptions about memory research
The majority of us doing research in the memory field work under some pretty strong (sometimes implicit) assumptions about how memory works. Let me guide you through some examples that crossed my mind recently:
The information processing perspective
The brain transforms both external and internal stimuli into information. This assumption is very convenient: it frames the brain and its activity as a "tractable" problem, decomposing brain activity into functions specialized in the processing of bits of information. It allows us to use the scientific method to study brain functions as separate entities (attention, memory, perception …). It is the basis of cognitive psychology and the dominant approach in the animal literature investigating … well … cognitive functions.
Attention parses the stream of information that flows into our brain and selects relevant bits for further processing
The brain is flooded with a constant stream of information. The joint action of top-down and bottom-up processes allows the selection and extraction of 'meaningful' chunks of information. In analogy with restriction enzymes cutting DNA strands, attention cuts out a 'self-consistent' piece of information (a sequence of bits) that is the best approximation of what happens outside of the brain.
Memory is responsible for the preservation of selected chunks of information
This is where things become interesting. If memories are equated to well-defined chunks of information with (almost) physical boundaries then some spurious questions emerge:
[A] Capacity: How many of these finite elements can we store in our finite brain?
[B] Location: Where do we store them?
[C] Implementation: How do neurons/circuits support these memories?
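To make the framing concrete, here is a deliberately naive caricature (my own illustration, not anything from the memory literature): if memories really were discrete stored chunks, questions [A]–[C] would map directly onto ordinary data-structure questions. The class and keys below are hypothetical.

```python
class ChunkMemory:
    """A toy 'memory as stored chunks' model, sketching questions [A]-[C]."""

    def __init__(self, capacity):
        self.capacity = capacity  # [A] Capacity: how many chunks fit?
        self.store = {}           # [B] Location: where is each chunk kept?

    def encode(self, key, chunk):
        # [C] Implementation: what mechanism writes the chunk?
        if len(self.store) >= self.capacity:
            # A finite store forces forgetting: evict the oldest chunk.
            self.store.pop(next(iter(self.store)))
        self.store[key] = chunk

    def recall(self, key):
        return self.store.get(key)  # None if the chunk was never stored or was evicted


m = ChunkMemory(capacity=2)
m.encode("breakfast", "coffee")
m.encode("lunch", "pasta")
m.encode("dinner", "soup")       # exceeds capacity, so "breakfast" is forgotten
print(m.recall("breakfast"))     # None
print(m.recall("dinner"))        # soup
```

The point of the caricature is that capacity, location, and implementation are perfectly natural questions for a data structure, and arguably only inherited by neuroscience once the chunk-storage framing is adopted.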
A disappointing description of how the brain works
At this point you may be horrified by this description of 'putative' (information-based) events happening in the brain. Or maybe you are just nodding (you somehow agree with the above description) and you'll keep reading this post.
However, this is exactly the point I want to make: neuroscientists do not agree on some of the core concepts and frameworks we implicitly use in our research. I believe that many scientists fail to recognize the implications of adopting an information processing perspective in neuroscience research.
I'm not saying that the information processing approach is wrong. All I'm saying is that there should be more discussion in neuroscience about the basic concepts/frameworks we adopt to study the brain.
Do try this at home!
Pick a bunch of your neuroscientist colleagues. Find a fairly recent paper describing some cool results about memory (preferably one with optogenetics, one that uses complex viral techniques and has been published in a high impact-factor journal … 99.99% of these papers are implicitly based on the information processing approach). It's critical that you all agree on the interpretation of the results.
Then ask yourself and your colleagues some basic questions about how the brain works:
Are external (and internal) events translated into information?
Do you know what information theory says?
Is the formal definition of information useful for the study of brain functions?
If yes, at what scale (dendrites, cells, circuits, cortical columns, cognitive processes …)?
Does a single memory have an informational representation?
What is “A memory” ?
Will we be able to decode to the last bit how a memory is stored in neural circuits?
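If the formal definition of information feels abstract, here is a minimal sketch of Shannon's measure (my illustration, not something from the post): information quantifies the reduction of uncertainty over a set of outcomes, measured in bits.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits, over a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))  # a fair coin carries exactly 1 bit per flip
print(shannon_entropy([0.9, 0.1]))  # a biased coin is less uncertain: fewer bits
print(shannon_entropy([1.0]))       # a certain event carries zero information
```

Note what this definition does and does not give you: it assigns a number of bits to a probability distribution over outcomes, but it says nothing by itself about where those distributions come from in a brain, or at what scale (synapse, cell, circuit) they should be defined. That gap is precisely what the questions above probe.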
Chances are that even if you all agree on the paper's conclusions, you may have divergent opinions on the core framework on which the paper is loosely based (i.e. the information processing approach).
What is a memory?
I am not criticizing the information processing approach at all. All I'm saying is that many of the questions we are unsuccessfully addressing in memory research may be the spurious product of a conceptual framework (the information processing approach) that we IMPLICITLY use in our everyday work: Does it make sense to search for the neural basis of memory? What is a memory trace? What is an engram? Engram technologies? Do we need a pseudo-formal definition of what an engram is? Is this useful?
Maybe. However, as you’ll see in my next post there is VERY LITTLE agreement among scientists on what ENGRAM and MEMORY TRACE mean and how they are used in the literature!
BTW, a search for "memory trace" leads to the Wikipedia page about the engram. A pretty strong assumption, and not everyone agrees with this overlap 🙂