Better Motion Detection Improves fNIRS Brain Imaging
Understanding the brain is one of the great challenges of modern science. The brain is relatively inaccessible, making direct studies rather invasive. Instead, we tend to rely on a view from the outside. Using brain imaging tools like MRI, scientists can assign different aspects of information processing to different brain regions.
MRI can indirectly detect brain activity by measuring blood flow to the brain. The brain is a very hungry beast and it regulates itself very strictly: if a neuron isn't doing anything useful, it gets just enough food and oxygen to prevent it from starving to death. When a neuron needs to activate, it requires oxygen immediately, so a change in blood flow and (de)oxygenated hemoglobin is observed. Hence, blood is a proxy for brain activity.
But, because these measurements are always made from outside, they are subject to motion artifacts—signals due to motion that are falsely attributed to brain activity. These artifacts are especially problematic for optical systems and have been addressed—at least in the case of speech-induced motion—by Novi and colleagues, as reported in Neurophotonics.
Measuring in comfort
It is difficult to imagine a less congenial setting for brain experiments than inside the claustrophobic click and whir of an MRI machine. This is where functional near-infrared spectroscopy (fNIRS) offers, in some ways, a better alternative. The concentration of hemoglobin (oxygenated or not) can be measured optically through the skin using near-infrared light-emitting diodes. fNIRS allows researchers to measure brain activity in more natural settings, where subjects are free to move around. But, that makes motion artifacts an even bigger problem.
Motion artifacts are generated because motion changes the connection between the light source and the skin, which changes how the delivered light scatters through the tissue (or whether it even enters the tissue). Likewise, motion changes the efficiency of light transfer from the tissue to the detector. Because the detectors are located at different distances from the center of motion, they will all display a different signal.
Searching for artifacts
To address this problem, Novi and colleagues set about cataloging the different artifacts due to jaw motion during speech. They observed that jaw movement introduced two different types of artifact: a rapid oscillation in the signal, and a shift in the average (or baseline) optical signal. Some detectors are only subject to one or the other artifact, while others are subject to a mixture of both. This nonuniformity reduces the effectiveness of traditional artifact detection algorithms.
It is common to remove baseline artifacts by fitting a spline (a smooth, piecewise mathematical function) to the data, so that slow shifts in the baseline are captured by the spline and filtered out. But the spline also interprets the oscillation artifact as signal, so it cannot remove both.
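To make the idea concrete, here is a minimal sketch of spline-based baseline correction on a synthetic signal. This is an illustration of the general technique, not the authors' exact implementation; the smoothing parameter and the synthetic data are assumptions for the example.

```python
# Sketch of spline baseline correction: fit a smoothing spline to the
# time series and subtract it, removing slow baseline shifts while
# leaving faster fluctuations in place.
import numpy as np
from scipy.interpolate import UnivariateSpline

def spline_detrend(signal, t, smoothing=None):
    """Fit a smoothing spline to (t, signal) and subtract it."""
    spline = UnivariateSpline(t, signal, s=smoothing)
    return signal - spline(t)

# Synthetic example: a slow baseline drift plus a faster oscillation.
t = np.linspace(0, 10, 500)
drift = 0.5 * t                                  # slow baseline shift
signal = drift + 0.1 * np.sin(2 * np.pi * 3 * t)  # drift + oscillation
corrected = spline_detrend(signal, t, smoothing=len(t) * 0.01)
# The drift is gone from `corrected`; the oscillation remains --
# which is exactly why a spline alone cannot remove oscillation artifacts.
```

The key limitation the article describes shows up here directly: the spline absorbs the slow drift but passes the oscillation straight through.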
A second technique, the wavelet transform, is well suited to detecting oscillations and was used to remove the oscillation artifact. Wavelet transforms are designed to pick out repeating signals of different shapes that persist for different lengths of time.
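As a rough illustration of how wavelet-based artifact suppression works, the sketch below hand-rolls a Haar wavelet transform, zeroes out detail coefficients that look like artifact spikes, and reconstructs the signal. The wavelet choice, the number of levels, and the thresholding rule are all assumptions for the example; the paper's actual algorithm may differ.

```python
# Sketch of wavelet-based artifact suppression with a Haar transform:
# unusually large detail coefficients are treated as artifacts and zeroed.
import numpy as np

def haar_forward(x, levels):
    """Multi-level Haar decomposition (len(x) must be divisible by 2**levels)."""
    details, approx = [], x.astype(float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2))  # fast fluctuations
        approx = (even + odd) / np.sqrt(2)         # smoothed trend
    return approx, details

def haar_inverse(approx, details):
    """Invert haar_forward, reassembling the signal level by level."""
    for d in reversed(details):
        x = np.empty(2 * len(approx))
        x[0::2] = (approx + d) / np.sqrt(2)
        x[1::2] = (approx - d) / np.sqrt(2)
        approx = x
    return approx

def suppress_spikes(signal, levels=4, k=3.0):
    """Zero detail coefficients more than k robust std devs from zero."""
    approx, details = haar_forward(signal, levels)
    cleaned = []
    for d in details:
        sigma = np.median(np.abs(d)) / 0.6745 + 1e-12  # robust noise scale
        cleaned.append(np.where(np.abs(d) > k * sigma, 0.0, d))
    return haar_inverse(approx, cleaned)
```

A brief artifact injected into an otherwise smooth signal produces outsized detail coefficients at several scales; zeroing them shrinks the artifact while the slow trend, stored in the approximation coefficients, passes through untouched.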
By combining the two artifact removal techniques, both types of artifact are removed with high accuracy. The researchers showed this by comparing their fNIRS results for subjects who first read silently and then read aloud. In the first case, the motion artifacts should be minimal, while in the second the jaw is in constant motion. Beyond that, there should be genuine differences in brain activation, since reading aloud and reading silently are different activities.
What the brain imaging study nicely shows is a more detailed image of brain activation. Going from uncorrected to wavelet-only correction and to a combination of spline and wavelet corrections, the amount of apparently active brain decreases. Essentially, a more accurate picture of brain activity emerges as the motion artifacts are removed.
In this case, the researchers targeted only those parts of the brain associated with speech. And, for now, the technique only corrects for jaw movement. A more extensive system that can remove artifacts due to a wider range of activities is something to look forward to.
Read the original research article in the peer-reviewed, open-access journal Neurophotonics: S. L. Novi et al., "Functional near-infrared spectroscopy for speech protocols: characterization of motion artifacts and guidelines for improving data analysis," Neurophoton. 7(1), 015001 (2020). doi: 10.1117/1.NPh.7.1.015001