Neuwrite Philly

Scientists & writers writing about science


Neurocritics vs. Cognitive Neuroscience: Promises and Perils of fMRI

When it comes to understanding our minds, the role of science is new—making it both exciting and, at times, perilous. Can we trust this new science of the mind? Scrutiny of cognitive neuroscience, in particular, has intensified in recent years through the industry of “neurocriticism.” It began as a response to the extraordinary hype given to neuroimaging findings—in both general-audience media and scientific publications. Blogs like Neuroskeptic and Neurocritic, articles in the Guardian, the Globe and Mail, and the New York Times, and entire books like Neuromania, lament extravagant claims made on the basis of imaging the brain. Have we really found the neural basis of “love”, they ask? Can we really say that dogs feel emotions just like we do? But the strongest neuro-critical opinion is not just that findings are misused, but that the very data are flawed: that we can’t find the neural basis of anything with neuroimaging tools like fMRI.

How seriously should we take the neurocritics? Why, exactly, do so many neuroimaging studies fail to support their conclusions? Gary Marcus, a cognitive scientist at NYU, writes of the neurocritics that they “are half right: our early-twenty-first-century world truly is filled with brain porn, with sloppy reductionist thinking and an unseemly lust for neuroscientific explanations. But the right solution is not to abandon neuroscience altogether, it’s to better understand what neuroscience can and cannot tell us, and why.” This is probably right, but it requires understanding what really is wrong, what is right, and how we can fix it.

It’s not even neurons, or—termites inside a log

A take on everything wrong with fMRI was published in Vox last year; among its list of complaints, it notes that fMRI does not even measure neural activity, but rather “follows oxygenated blood as it flows through the brain.” Let’s be very clear: fMRI is not fast enough to “follow” this flow. Rather, it takes a one-to-two-second average snapshot of the state of oxygenation of the brain—because that’s how long it takes to acquire the measurements. Oxygenated blood flows much faster than this, so we can’t very well follow it along. OK—but is either of these actually a problem?

No. Blood oxygenation turns out to be a very good proxy for the concerted firing of neurons. It largely reflects the operation of the brain in these kinds of experiments. So in order to relate cognitive abilities to the functioning of the brain, there’s really nothing wrong here. Recording individual neurons is very invasive in humans, and cannot be done except in certain clinical cases, so this is often the best we have.

However, it is true that the spatial resolution of fMRI is much worse than neural recordings. Each measurement with fMRI reflects the aggregate activity of many thousands of neurons. What are the consequences of this? Let’s make an analogy. Imagine we are interested in the activity of termites inside a log. We can’t see inside the log to identify the termites, but we can measure the holes and dust piles on its surface. More termite activity means more and bigger holes, so if we want to know how termites respond to, say, air humidity, we can ask whether drier air means more holes or fewer. Of course, under each location on the log there may be hundreds of termites deep down, and we don’t know what they are individually doing. This means we can miss things: maybe some termites are working very hard while others fight them and stop their work, so we might falsely conclude that none of the termites are working. But we still expect our experiment on air humidity to tell us whether termites tend to work more or less when the air is dry.
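To make the point concrete, here is a toy numerical sketch (made-up numbers, not real recordings): each measurement pools many units, so opposing changes within the population can cancel and go unseen, while a net change in overall activity still shows up in the pooled signal.

```python
# Toy sketch, not real data: each measurement pools many units ("termites"),
# so opposing changes inside the population can cancel, but a net change in
# overall activity still shows up in the pooled signal.
import numpy as np

rng = np.random.default_rng(0)
n_units = 10_000  # units pooled into one surface measurement

def pooled_activity(mean_up, mean_down):
    """Summed activity when half the units are driven to mean_up, half to mean_down."""
    half = n_units // 2
    return rng.normal(mean_up, 1.0, half).sum() + rng.normal(mean_down, 1.0, half).sum()

baseline   = pooled_activity(1.0, 1.0)   # everyone at baseline
cancelling = pooled_activity(1.5, 0.5)   # strong opposing changes: pooled signal barely moves
net_change = pooled_activity(1.5, 1.0)   # net increase: clearly visible in the pooled signal
print(f"baseline: {baseline:.0f}  cancelling: {cancelling:.0f}  net change: {net_change:.0f}")
```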

Likewise, to the extent that any cognitive phenomena—vision, memory, reasoning—take place at large scales (involve many neurons), we can detect them with fMRI. Researchers have used this method to relate vision to the occipital lobe, and memory to the hippocampus—relations we know are true based on work with neurological patients, who lose vision or memory functions when those areas are damaged. In other words: there are large enough functional areas that we can use fMRI to study them.

We don’t know in advance which mental operations will be detectable at the large scale, but the worst outcome is that we can’t see some of them. That just isn’t a reason to mistrust fMRI. Vox does describe the deeper issue: fMRI lets you “learn what broad areas of the brain are working. But figuring out what, exactly, those brain areas are doing [functionally] is a totally different problem.” A different problem it is, but it isn’t one that comes from a lack of spatial resolution: rather, from a lack of cognitive resolution.

The cognitive resolution 

It is tempting to imagine an experiment going like this: a subject enters an MRI scanner and the readout shows a quiet, gray brain. Then, some cognitive task begins: the subject is asked to count the number of dots on a screen. All of a sudden, a bright orange blob emerges in the parietal lobe. Aha!

The truth is just the opposite. Signals from the brain look more like TV static—at all times, in all regions. The ‘pictures’ are made through careful statistical analyses that separate signal from noise. This point is made by neurocritics (in SciAm, Statnews, Quartz), but the reason it’s a problem warrants exposition. Since the fMRI signal reflects the relative oxygenation of the blood, and blood is always oxygenated to some degree, there is always signal—even when a subject lies quietly in an MRI scanner. And it fluctuates with breathing, heart rate, attention, mind-wandering, and scanner noise—all factors unrelated to the experimental task. When the participant begins to count the number of dots on the screen, the changes related to this task are a small fraction of those larger fluctuations. To separate them out, we have to use statistical comparison. And what stops us from figuring out what brain areas are doing is what enters into that comparison.
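As a rough illustration of what that statistical comparison looks like, here is a minimal simulated sketch (arbitrary numbers, not a real fMRI pipeline): a voxel’s time series is mostly baseline, slow drift, and noise, and the small task-related change is estimated by regressing the series against the task design.

```python
# Minimal simulated sketch (arbitrary numbers, not a real fMRI pipeline):
# the voxel signal is mostly baseline, slow drift, and noise; the small
# task-related change is recovered by regression against the task design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_scans = 200                                             # one volume every ~2 s
task = np.tile(np.r_[np.zeros(10), np.ones(10)], 10)      # alternating rest/count blocks

baseline = 100.0                                          # there is always signal
drift = np.linspace(0.0, 10.0, n_scans)                   # slow scanner/physiological drift
noise = rng.normal(scale=2.0, size=n_scans)               # the "TV static"
effect = 1.0                                              # task change of ~1% of baseline
y = baseline + drift + noise + effect * task              # what gets recorded

# Regress the time series on an intercept, a drift term, and the task regressor,
# then test whether the task coefficient differs from zero.
X = np.column_stack([np.ones(n_scans), drift, task])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
dof = n_scans - X.shape[1]
se = np.sqrt(resid @ resid / dof * np.linalg.inv(X.T @ X)[2, 2])
t_stat = beta[2] / se
p_val = 2 * stats.t.sf(abs(t_stat), dof)
print(f"estimated task effect: {beta[2]:.2f} (t = {t_stat:.1f}, p = {p_val:.3g})")
```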

In the termite example, we wanted to know how humidity affects termite activity, so we had to be certain that when we changed humidity, humidity was the only thing that changed—not temperature, not air quality. The same goes for cognitive variables: does a brain area do memory? Isolate memory. Does a brain area do counting? Isolate counting. This is cognitive resolution—and it’s much harder than issues of statistics or noise.

Take the task of counting. It involves not just what we think is special about it (incrementing numerical quantities) but also much that isn’t special to it: seeing and distinguishing the objects we want to count, retrieving the names of numbers, and articulating those names. Giving subjects this task to perform in the scanner requires us to compare it to closely matched tasks—tasks without numerical quantities, but with naming, articulation, and object perception. This is a major challenge, yet cognitive neuroscience has succeeded in discovering specialized areas for face recognition, navigation, and reasoning about others’ minds; these have all been products of clever experimental design. (See Nancy Kanwisher’s TED talk for an inspiring review.)
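The logic of such matched comparisons can be sketched with made-up numbers: if both tasks engage perception and naming, and only counting adds a numerical component, then the counting-minus-control contrast isolates that component (assuming, of course, that the components simply add up).

```python
# Made-up numbers illustrating the subtraction logic: both tasks engage
# perception and naming; only counting adds a numerical component, so the
# counting-minus-control contrast isolates it (assuming the parts add up).
import numpy as np

rng = np.random.default_rng(2)
n_trials = 50

perception, naming, numerical = 2.0, 1.5, 0.7   # hypothetical response components

counting = perception + naming + numerical + rng.normal(0, 0.5, n_trials)
matched_control = perception + naming + rng.normal(0, 0.5, n_trials)

contrast = counting.mean() - matched_control.mean()
print(f"counting minus matched control: {contrast:.2f} (true numerical component: {numerical})")
```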

The center for love and other impossibilities

According to the argument above, the reason we can’t find the “love” center of the brain is because love is difficult to isolate from other variables (say, heart rate, positive emotion, familiarity, and so on). But this is not usually the explanation given by neurocritics, who argue against the possibility of such centers in principle.

Vaughan Bell describes the critical flaw of fMRI this way: “All of our experiences and abilities rely on a distributed brain network and nothing relies on a single ‘centre’”. It’s not possible, he argues, to relate any specific cognitive variable to any specific location in the brain, because all brain areas do many things, and each cognitive variable relies on many brain areas. Others advocate giving up on localization entirely: they applaud research which shows how the entire brain responds to different pictures or words, without any attempt to distinguish individual areas or relate them to specific cognitive variables. Such views generally suggest that cognitive neuroscience, as described above, is hopeless.

It’s not clear that they are right. Some cognitive variables are linked to specific brain areas: face recognition, spatial navigation, audition, motor control. But not every category that we think of is. If we find that ‘love’ engages many areas, none of them specific to it—we probably have the wrong category. Not the wrong method.

Gary Marcus, quoting Dave Poeppel, argues along these lines, saying that what we need is “the meticulous dissection of some elementary brain functions, not ambitious but vague notions like brain-based aesthetics, when we still don’t understand how the brain recognizes something as basic as a straight line.” Until we get to the right elementary functions, it will seem like function mapping is impossible. While we can throw out the idea that “love” is one of them, it isn’t worth tossing the rest of cognitive neuroscience along with it.

Dead salmon and fishy software

So let’s imagine that you’ve found the right cognitive function to target and have designed the perfect experiment to isolate it. No more roadblocks, right? Unfortunately, some of the most recent and toughest criticisms of neuroimaging are about something else entirely: the statistical analyses of the data. For example, a paper last year led to a barrage of news articles suggesting that a software flaw could “invalidate up to fifteen years worth of neuroscientific research”; this was hardly new, as Neuroskeptic noted. Such problems are seen as inherent to fMRI.

Let’s start with an old scandal: that of the dead salmon. Craig Bennett and a team of researchers put a dead fish into the scanner, showed it different pictures, and measured its brain with fMRI. They ran the data through very commonly used analyses, and, lo and behold: they found a spot in the salmon’s brain which was more active for some pictures than others. But the salmon was dead and hadn’t seen any pictures: this result was impossible. The analysis must have been wrong.

The flaw in these analyses was a proliferation of “false positives”: seemingly significant results that are actually due to chance. Although the salmon’s brain wasn’t functioning, the fMRI scanner was still picking up data—except this data was pure noise, the “TV static” caused by random magnetic fluctuations in the environment. Normally, random noise should not lead to significant results; tests are designed so that this happens less than 5% of the time. But if you run enough tests on enough noise, some will by chance turn out “significant”. Avoiding this requires a correction: setting a higher bar for calling something significant. In fMRI data there are many tests—one at each spatial location—but the correction wasn’t done. This problem is hardly unique to neuroscience: it similarly plagued genome-wide association studies until they, too, recognized the extent of the problem.
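A quick simulation makes the point (pure noise, no brain; Bonferroni is used here only as the simplest example of a correction, while real fMRI analyses typically use more sophisticated ones): test tens of thousands of noise-only “voxels” and chance alone produces thousands of hits at p < 0.05, whereas the corrected threshold produces essentially none.

```python
# Pure-noise simulation of the multiple-comparisons problem: with tens of
# thousands of voxel-wise tests, chance alone yields many "significant" hits
# at p < 0.05; Bonferroni (the simplest correction) raises the bar.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_voxels, n_scans = 50_000, 100
task = np.tile([0, 1], n_scans // 2)                      # made-up on/off design

noise = rng.normal(size=(n_voxels, n_scans))              # nothing is happening anywhere
t_stat, p_val = stats.ttest_ind(noise[:, task == 1], noise[:, task == 0], axis=1)

print("uncorrected 'active' voxels:", int(np.sum(p_val < 0.05)))             # roughly 2,500
print("Bonferroni-corrected 'active' voxels:", int(np.sum(p_val < 0.05 / n_voxels)))
```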

This old issue resurfaced recently when it turned out that these corrections weren’t implemented correctly in some common software packages. The conclusion in the news: fMRI is broken.

Take this account, in Quartz: “fMRI scanners rely heavily on software and statistical tests to eliminate background noise […] these software packages and statistical tests have to make a lot of assumptions, and sometimes use shortcuts.” Or this account of the problem: “Researchers […] need to learn how to use the software, but most neuroscientists are not software engineers and so they have to trust that the software works as advertised.” Either the scanners or the software engineers are to blame, it is suggested; the scientists can’t fix their analyses. Not only is this belittling and false, it also makes the problems seem “built-in” to fMRI scanners themselves. That is far from the truth: the fixes are already being made, by the scientists themselves.

So what is wrong with fMRI? Fingers point to the scanners, the statistical software, the inherent impossibility of mapping the mind to the brain, the limitations of spatial scale. But these are not the real culprits. Rather, the problems are the growing pains of a young science ironing out its flaws—flaws that are not inherent to the method, and that, with a healthy dose of skepticism, can be circumvented.



What Aliens tell us About Science in the Media

Recently, news from outer space shocked the world. A mysterious radio signal from over 95 light-years away had scientists in a frenzy. The Russian Academy of Sciences detected the signal, which originated from a 6.3-billion-year-old star with a single, Neptune-sized planet. Scientists from the Search for Extraterrestrial Intelligence (SETI) indicate that the signal, if artificial, suggests the presence of an alien civilization far superior to, yet very close to, ours!

Sensationalized news like this is far too common. The radio signal is nothing special, no more than a small drop of water in a monotonous sea of noise. Curious chance signals have happened in the past, will continue to happen in the future, and leave us no closer to discovering intelligent life. But you wouldn’t get that impression from reading popular news articles about the recent finding. The Observer wrote that the “implications are extraordinary.” CNN discussed advanced Kardashev Type II civilizations with Dyson spheres that harness energy from stars, all of which are purely science fiction ideas. In reality, the signal is likely to be naturally occurring, or even human-generated. Douglas Vakoch of SETI stated that “a putative signal from extraterrestrials doesn’t have a lot of credibility.”

Such fanfare over this rare finding isn’t evidence of our enthusiasm for aliens; rather, it points to a larger issue of how science is reported in popular media. Too frequently, I hear tentative scientific discoveries reported as facts and taken to absurd ends whose “implications are extraordinary.” Media outlets often misconstrue studies to produce headlines like “cell phones give you cancer” or “the dangers of gluten,” when in reality such ideas are at best scantily supported and often refuted by scientific studies. But you don’t need to tune in to Dr. Oz to find such flagrant abuse of science. I need go no further than my Facebook feed to be blasted by articles with profound implications only loosely based on science.

I wish I could say that this type of reporting is always as benign as overzealous claims of alien discovery. But this practice of using isolated scientific studies to promote faulty claims is at the heart of many consequential issues. Former President Obama recognizes how easy it is to spread misinformation on social media, and the dangers it presents with respect to climate change. He elaborates in an interview with The New Yorker: “an explanation of climate change from a Nobel Prize-winning physicist looks exactly the same on your Facebook page as the denial of climate change by somebody on the Koch brothers’ payroll.” Such misinformation impedes progress on a matter on which there is clear scientific consensus, and one that poses a significant threat to humans and global ecosystems. Yet, swayed by cherry-picked data that undermines its significance, people still debate the human impact on global warming and what actions should be taken.

The same goes for anti-vaxxers and autism. A single 1998 study, which has since been deemed fraudulent, is still being used to spur fear and opposition among people uninformed about the research. Healthy people who refuse vaccinations present a major health concern for those with compromised immune systems, such as patients with AIDS, cancer, or autoimmune disorders. Simply using phrases of anonymous authority, such as “studies show” or “scientists say,” seems to be license enough to promote one’s own agenda or ideology, whether scientifically founded or not.

The frequency of such headlines is unsurprising, since they are effective in generating viewership. We are naturally drawn to flashy headlines and bold claims. I admit I clicked on the link “Not a drill: SETI is investigating a possible extraterrestrial signal from deep space.” Would I have clicked on a link titled “Scientists not intrigued by minuscule, fleeting radio wave from star HD164595”? Probably not. But a quick read revealed that the news was likely insignificant, and I moved on. Other findings are not as clear-cut, however. Studies can be very technical, confusing, and nuanced, even to those familiar with the field. Entire careers are spent sharpening the skills needed to effectively read scientific publications, so it would be unfair to ask the same of the general public.

Additionally, not all studies are created equal. Competition for tenure, funding, and publishing opportunities promotes p-hacking and disincentivizes replication studies. Altogether, science is a slow process in which a single isolated study is rarely decisive. Readers have limited options and few tools to find reliable science news, navigate the information presented, and parse fact from fiction in science journalism.

So how can we reduce the frequency and impact of misleading headlines, and increase general scientific literacy? Well, not all scientific reporting is bad. Take, for instance, The Late Show with Stephen Colbert, which often runs segments featuring Dr. Brian Greene to explain recent findings in the realm of physics. I saw a segment earlier this year where he explained the discovery of gravitational waves, not too long after news of the finding made a splash in popular media. As a professor of physics and mathematics at Columbia University, Dr. Greene is a proven source of reliable information on those subjects. He uses simple analogies, graphic displays, and model experiments to explain otherwise unintelligibly complicated concepts. This skill of effective communication is something that many scientists strive for but rarely perfect. Encouraging scientists to hone those skills as part of basic research training could lead to better and more direct communication from scientists to the public, improving the public’s scientific literacy.

Other examples of popular shows conveying scientific ideas are Neil deGrasse Tyson’s revamping of Carl Sagan’s Cosmos, or even John Oliver’s segment on this exact topic. These social platforms and likeable TV personalities interest and engage the public. Combining engaging media journalism with effective scientific communication helps reduce the mystery of, and perhaps opposition to, the scientific process.

The same media that makes faulty scientific reporting so prevalent can also make reliable scientific reporting more accessible and entertaining. If scientists and media alike made it a priority to communicate effectively with the public, science itself would not be as much of an enigma and there would be less room for fantastical interpretation. Additionally, scientists could use social media and popular news outlets to guide discussion and move forward in areas where there is a clear scientific consensus. This would lead to a more scientifically informed public, less vulnerable to the economic motives of climate change opposition or the unfounded fears of anti-vaxxers. Not only that, but a scientifically literate community is 50% less likely to fall for online clickbait and flashy headlines…or so “studies” show.

Article by: Aaron Williams

Aaron earned his B.A. in Human Biology from Stanford University. He then moved to Philadelphia to earn his combined MD-PhD degree from the University of Pennsylvania, where he is interested in neuroscience. In his research, Aaron is curious about how the connectivity of neurons can be related to network function and behavior, and clinically he is interested in either neurology or neurosurgery. Aaron joined Neuwrite Philly because he believes science writing and communication are important aspects of any scientist’s career, and hopes to develop his own skills and those of others. Outside of science, Aaron enjoys playing basketball, reading, and watching movies.

