Tuesday, September 11, 2012

Exploring memory's penumbra


In Matter and Memory, philosopher Henri Bergson offered an interesting argument about the importance of memory for perception. Briefly stated, he suggested that memory must be involved in every act of perception, for if it weren’t, every time we perceived something it would be as if we were perceiving it for the first time. But if that were the case, then every perception would amount to a new learning experience, and we clearly don’t learn afresh each time we perceive. Of course, this argument does not work for all sorts of reasons—including, for instance, the fact that perceiving something for the first time and learning are two different things, or the fact that learning often involves the repeated perception of encoded material.

Nonetheless, there is an important kernel of truth to Bergson’s reasoning. Somehow, when we perceive, memory must be able to tell whether what we are perceiving is new or old, for if it is new chances are we need to encode it, and if it is old, we may need to integrate it with information already stored. Following Marr’s (1971) characterization, computational neuroscientists call the first process “pattern separation”, referring to memory’s capacity to encode information in a way that does not overlap with previously stored representations, thereby preventing interference and overwriting. In contrast, the second process, known as “pattern completion”, refers to memory’s capacity to map incoming information onto existing representations, reinstating a stored trace from a partial or similar cue. What is remarkable about these two processes is that both appear to depend on the same brain structure: the hippocampus. How does the hippocampus manage to switch between them? One of the most promising hypotheses suggests that the hippocampus can bias processing toward either pattern completion or pattern separation via neuromodulatory mechanisms (e.g., Hasselmo et al., 1995) that are highly dependent on the encoding context.
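
For readers who think better in code, here is a toy sketch of the idea in Python. It is mine, not a model from any of the papers discussed: a cue gets “completed” to the most similar stored pattern if the match clears a threshold, and “separated” into a new trace otherwise, with a bias parameter standing in for slow neuromodulatory input that nudges the threshold one way or the other.

```python
import numpy as np

def process_cue(cue, stored, bias=0.0, base_threshold=0.8):
    """Complete to the closest stored pattern if it is similar enough;
    otherwise encode the cue as a new, separated trace. A positive bias
    (a stand-in for lingering neuromodulatory input) favors completion,
    a negative bias favors separation."""
    threshold = base_threshold - bias
    sims = [np.dot(cue, p) / (np.linalg.norm(cue) * np.linalg.norm(p))
            for p in stored]
    best = int(np.argmax(sims))
    if sims[best] >= threshold:
        return "complete", best        # reinstate an existing representation
    stored.append(cue)                 # lay down a new, separated trace
    return "separate", len(stored) - 1

# One stored pattern and a similar-but-not-identical cue (two entries flipped).
pattern = np.array([1.0] * 8 + [-1.0] * 8)
lure = pattern.copy()
lure[:2] *= -1   # cosine similarity with the stored pattern is 0.75

print(process_cue(lure, [pattern], bias=+0.15))   # -> ('complete', 0)
print(process_cue(lure, [pattern], bias=-0.15))   # -> ('separate', 1)
```

Notice that the very same ambiguous cue flips from being absorbed into an old representation to being encoded as new purely as a function of the bias, which is exactly the kind of lingering influence Duncan et al. went looking for.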

In a recent paper published in Science, Duncan, Sadanand and Davachi (2012) tested a prediction that follows from this hypothesis. But first, full disclosure: I look forward to Lila Davachi’s papers with almost the same exhilarating anticipation with which, I presume, fans of J.K. Rowling must have awaited each volume in the Harry Potter series. Her studies are usually methodologically impeccable, and her questions are, well, pretty great. Which explains why I was so eager to read the Duncan et al. paper—and it didn’t disappoint. This paper reports three experiments that capitalize on the fact that the neuromodulators allegedly responsible for biasing hippocampal processing are relatively slow (Hasselmo and Fehlau, 2001). As a result, Duncan et al. hypothesized that “if switching between pattern completion and separation biases is, in fact, mediated by hippocampal neuromodulatory input, it follows that a processing bias should linger in time and, thus, influence subsequent mnemonic processing” (p. 485). In other words, they conjectured that if such neuromodulators are indeed responsible for switching between pattern completion and pattern separation, then right after each switch takes place, the hippocampus should remain slightly biased toward the computational process it was just engaged in.

So, in the first experiment, participants studied a bunch of pictures of objects. Then, during retrieval, they were presented with three kinds of pictures: studied objects, non-studied objects, and objects that were similar but not identical to those studied. Participants were asked to respond “old” if the object had been studied, “new” if the object hadn’t been studied and was NOT similar to a studied object, and “similar” if the object wasn’t studied but was similar to a studied object. Accordingly, Duncan et al. hypothesized that if the memory system were biased toward pattern completion, similar but non-identical pictures would be more likely to be wrongly identified as old, since the memory system would be in the “mood” for mapping similar information onto already stored memory representations. However, if the memory system were biased toward pattern separation, similar items would be more likely to be judged as new, as the memory system would be in the “mood” for highlighting differences between the perceived stimulus and stored representations. To test this idea, Duncan and collaborators compared similar trials that were preceded by “new” responses versus “old” responses, the idea being that during “old” responses the memory system is biased toward pattern completion, whereas during “new” responses it is biased toward pattern separation. And lo and behold, they found that when similar trials were preceded by “new” responses there were fewer false alarms than when they were preceded by “old” responses—which is really cool. Notice that I said "responses" and not "trials" because the effect was evident not only when participants correctly identified old objects as old and new objects as new, but also when they false alarmed, suggesting that the subjective memory decision, rather than response accuracy, was responsible for the processing bias—which is even cooler.
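
If it helps, here is a minimal sketch of that trial-sorting analysis. The tiny data table and the column names are made up for illustration; this is just the logic of the comparison, not the authors’ actual pipeline.

```python
import pandas as pd

# A tiny synthetic trial log standing in for one participant's test phase.
# Column names and values are hypothetical, not taken from the paper.
trials = pd.DataFrame({
    "item_type": ["old", "similar", "new", "similar", "old", "similar",
                  "new", "similar"],
    "response":  ["old", "old",     "new", "new",     "old", "old",
                  "new", "similar"],
})

# What response did the participant give on the trial immediately before?
trials["prev_response"] = trials["response"].shift(1)

# Keep similar-lure trials that directly followed an "old" or "new" response.
lures = trials[(trials["item_type"] == "similar")
               & trials["prev_response"].isin(["old", "new"])]

# A false alarm here means calling a similar lure "old".
fa_rate = (lures.assign(false_alarm=lures["response"] == "old")
                .groupby("prev_response")["false_alarm"].mean())
print(fa_rate)  # the reported effect: lower FA rate after "new" than after "old"
```

The key move is the shift(1): every similar-lure trial is classified by the response given on the trial just before it, which is what lets you ask whether the previous memory decision leaks into the current one.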

The second experiment was identical to the first, except that this time Duncan and collaborators wanted to see how long this bias—this “memory penumbra”, as they called it—would last. So they varied the time between stimuli—the “interstimulus interval” or ISI—between 0.5, 1.5 and 2.5 seconds, and found that the effect was time dependent, being apparent at short but not at long ISIs—which is really the coolest. Remember how I mentioned that one of the most promising hypotheses about the way in which the hippocampus shifts between pattern completion and pattern separation rests on a handful of possible neuromodulators? Well, as you may imagine, there is some controversy as to which precise neuromodulator may be responsible for the shifting, and one of the reasons for this controversy is the varying temporal scales of the possible candidates. So what I really like about this result is that the decay time of the effect coincides with the temporal scale of acetylcholine, one of the most likely candidates for this sort of neuromodulation! So here you have it: a beautiful piece of behavioral evidence, in humans, that simultaneously speaks to a computational hypothesis and a neurobiological hypothesis about memory retrieval. Pretty cool, huh?
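
Just to fix intuitions about what “lingering and decaying” means here, a throwaway sketch: treat the bias as fading exponentially after each memory decision. The time constant below is an arbitrary assumption for illustration, not a measured cholinergic parameter.

```python
import numpy as np

# Purely illustrative: a decision-induced bias that fades exponentially.
# tau is an assumed time constant, not an empirical estimate.
def residual_bias(isi_seconds, initial_bias=0.15, tau=1.0):
    return initial_bias * np.exp(-isi_seconds / tau)

for isi in (0.5, 1.5, 2.5):   # the ISIs Duncan et al. used
    print(f"ISI {isi:.1f} s -> residual bias {residual_bias(isi):.3f}")
```

On this picture, a clear behavioral effect at 0.5 s that has vanished by 2.5 s is what you would expect if the bias rides on a neuromodulator whose influence decays on the order of a second or two.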

The third experiment is slightly different, and it requires a bit more background, so I’ll talk about it in a subsequent post.






Wednesday, August 22, 2012

Just a little bit of hippocampus, thanks.


Fifty-nine years ago, Henry Gustav Molaison—better known as H.M.—underwent a bilateral medial temporal lobe (MTL) resection that included the hippocampus. Ever since, and thanks also to several other studies in human and non-human animals, the hippocampus has been tightly associated with memory. You lose your hippocampus, you lose your capacity to consolidate new memories. End of story. And if recent reviews reassessing the role of the hippocampus and adjacent regions during episodic memory retrieval are correct (e.g., Nadel and Moscovitch, 1997), extensive damage to the MTL may also result in a complete incapacity to recollect events from one’s past. You lose your MTL, you lose your capacity to remember the past. End of story. So it is not surprising that, for the last half century, the hippocampus has been as closely tied to memory as Broca’s area is to language production.

Perhaps for this reason, the recent observation that the MTL (especially the hippocampus) plays a critical role in future simulation constitutes such a surprising discovery. In 1985, Endel Tulving observed that patients with amnesia due to hippocampal damage had trouble coming up with vivid simulations of possible personal future events (Tulving, 1985). It’s not that they were unable to answer a question like “can you tell me how your next Christmas is going to be?” It is rather that their answers were formulaic, devoid of detail, kind of semantic: “I guess there will be a tree, and presents, and maybe family”, and not what one would expect from an individual who can run, as it were, a detailed episodic simulation of a possible future Christmas in her mind: “I guess I’ll see aunt Annie with her loud voice, drinking her gin and tonic, as always, and going about singing Christmas carols, because this year we have new nieces, you know? And she’s going to be all over them…” No such mental movie. When asked about specific details of their mental simulations of future events, individuals with amnesia are unable to articulate detailed descriptions of what they are imagining, limiting their answers to “that’s all I see” or “nothing else comes to mind”. The observation that the hippocampus plays a pivotal role in personal future thinking has since been corroborated by numerous behavioral and neuroimaging studies.

However, in 2010, Larry Squire and collaborators tested a handful of individuals with hippocampal damage on their ability to construct mental simulations of possible future events. Surprisingly, they found that these patients’ capacity to think about possible personal futures did not differ from that of controls. What gives? In response to this report, Maguire and Hassabis (2011) observed that the patients tested by Squire and colleagues had some remnant hippocampal tissue that may have been sufficient to support the construction of future simulations. But this was merely a conjecture. Empirical evidence was needed.

In a recent paper, Mullally, Hassabis and Maguire (2012) report such evidence. As it turns out, one of the patients (P01) studied by Hassabis and collaborators in 2007, despite having extensive hippocampal damage, was nonetheless able to produce relatively detailed descriptions of mental future simulations. Unlike the other patients, though, P01 had some remnant hippocampal tissue. Could this difference really make such a difference? To answer this question, Mullally et al. (2012) asked P01 to visualize possible future scenes while undergoing fMRI. The results were astonishing: the little bit of preserved right hippocampal tissue was very much engaged during future scene construction. In addition, all other regions engaged in episodic future thinking overlapped with those engaged by the control group. Moreover, the activity of these regions coupled with that of the hippocampus when successful episodic future simulation was achieved. Again: you lose your hippocampus, you lose your capacity to vividly think about your personal future in a detailed manner. But if some hippocampal tissue is spared, maybe you won’t.

What I really like about Mullally et al.’s (2012) study, besides the very interesting result, is the underlying structure of the argument with which the data are put forth as evidence for the hypothesis. Normally, in neuropsychology, it is assumed that the ultimate source of evidence for a particular brain region being necessary for a certain cognitive process is double dissociation. Initially introduced by Teuber in 1955, the notion of double dissociation was supposed to help elucidate whether two processes were orthogonal to one another, provided the researcher could show that two experimental manipulations differentially affected two dependent variables. In cognitive neuroscience, this principle is implemented by showing that damage to brain region A impairs process X but not Y, whereas damage to brain region B impairs process Y but not X. But double dissociation cannot show sufficiency. Claims about sufficiency are much harder to come by, and when they do appear, they are usually questionable and pretty local. What I find so impressive about the Mullally et al. (2012) study is that they conjectured that a specific brain region, in this case the very posterior tissue of the right hippocampus, was sufficient for engaging the brain network required for future simulation. Thus, unlike demonstrations of double dissociation, in which the relevant brain area is shown to be necessary but not sufficient for a particular cognitive process to occur, Mullally and collaborators (2012) managed to demonstrate that a portion of a certain brain region, which is normally functionally connected to other neural areas to support future thinking, is sufficient to engage such a network. A highly recommended paper, which orchestrates careful neuropsychological assessment and skillful neuroimaging analysis.
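
In case the inference pattern is unfamiliar, here is the bare logical skeleton of a double dissociation as a toy snippet. The numbers are schematic placeholders, not data from any study.

```python
# Schematic double dissociation: 1 = performance intact, 0 = impaired.
# The values are illustrative placeholders, not real patient data.
performance = {
    ("lesion_A", "task_X"): 0,  # damage to region A impairs process X...
    ("lesion_A", "task_Y"): 1,  # ...but spares process Y
    ("lesion_B", "task_X"): 1,  # damage to region B spares process X...
    ("lesion_B", "task_Y"): 0,  # ...but impairs process Y
}

def is_double_dissociation(p):
    """True when each lesion selectively impairs a different task."""
    return (p[("lesion_A", "task_X")] < p[("lesion_B", "task_X")]
            and p[("lesion_B", "task_Y")] < p[("lesion_A", "task_Y")])

print(is_double_dissociation(performance))  # True
```

Note what this skeleton licenses: necessity claims about region A for process X and region B for process Y, and nothing whatsoever about sufficiency, which is precisely the gap the Mullally et al. result addresses.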







Saturday, August 11, 2012

Let the blogging begin


Not that the world needs another neuro-psycho-philo-blog. But there is a surge of wonderful work in the philosophy, psychology and neuroscience of memory and imagination that may be of interest to some readers, who might be totally or partially unaware of such developments. So I’ve decided to overcome my fear of making grammatical and stylistic mistakes in the public arena of the English-speaking blogosphere and start this blog. Mind you, though: I have an agenda. I believe research in the psychology and cognitive science of memory and imagination is lending strong credence to the view that these aren’t single faculties, that the cognitive processes that compose them are intertwined, multifarious and complex, and—more importantly—that the best way to understand the functional roles of specific brain regions requires moving away from the view that there is a clear correspondence between brain functions and psychological functions. In fact, I believe that trying to find neural correlates for X, where X is a folk psychological term, is almost always the wrong way to go. The rules that govern the functional structure of the brain are not the same rules that dictate the meaning and uses of folk psychological terms. Sorry, Professor Armstrong, but even if you had all the platitudes of folk psychology pinned down, the job of understanding the mechanisms that fulfill the functional roles corresponding to those platitudes wouldn’t have even started. And maybe, just maybe, psychological readings of the massive modularity hypothesis will be jettisoned, and people will again pay more careful attention to Lashley. But I’m getting ahead of myself.