Temporal-Spatial Decomposition in ERP Analysis: A General Introduction

Most of my work focuses on EEG-related measures. While I do conduct some single-trial, time-frequency, electrotomographic, and connectivity-based analyses, it’s hard to disregard the rich body of work involving classic ERP components. I suppose the analogue to “Ahh, but does it run Crysis” in the EEG world might be “Ahh, but does it P300”? (The overlap in the Venn diagram of folks who get both of those jokes might be fairly small, but I think it’s funny so it stays!)

Let’s focus on the simple case of finding a P300, say in an auditory oddball paradigm. Imagine you went all out on equipment and collected some awesome 256-electrode high-density datasets.

What next? How do you decide When and Where that P300 component arises? No problem! I’ll just look at the literature and use what other folks use.

A few hours later, you’re surrounded by papers proclaiming the P300 is 250-550 ms post-stimulus. No wait, make that 400-700. No no, 300-500. Do I hear 250-500? You know what, 385-600 ms sounds good (mad specific, that one). Then you have the problem of deciding the spatial region to window down to. Do we go with your favorite electrode? Do you pick a region? How big a region? Going once, going twice.

Long story short, there are simply way too many degrees of freedom and issues present in these choices. Single-electrode measures squander the value of high-density recordings and are vulnerable to noisy data. Regions are slightly better, but apart from huge, clear components like the P300, it is difficult to objectively decide how big one should be. The temptation to muck around with subjective time and spatial windows until a significant effect emerges is a dangerous thing to have available if objectivity and replicability are desirable end goals (which they have to be if any field of research is to thrive). Furthermore, to get as much signal as possible from the data, something every researcher working with limited trials and participants wants, it seems like such a waste to “give up” signal from electrodes outside whatever arbitrary peak electrode cluster one declares. Wouldn’t it be great if we could just pull out all the “stuff” (i.e. variability) that belonged to the component we wanted to look at and quantify it in a single neat score?

I struggled with these issues from the very start of my EEG career, when I was simply told, “use these 18 regions and run an ANOVA on the voltage in them by condition”. Barring all the previously discussed issues, on the surface that might seem somewhat reasonable, but if you want to do anything advanced, it gets messy fast. Here we have 18 regions (i.e. an 18-level within-subject factor, or some combination like 3 * 6). Naturally these regions aren’t independent, so their dependency has to be modeled. Compound symmetry is a pretty questionable assumption given that neighboring regions are more likely to be interrelated, so unstructured seems a good way to go. So before anything else is even entered into the model, we’ve got a massive 18-level repeated-measures variable with an unstructured covariance matrix to estimate. Needless to say, post hoc tests become messy as hell and p-value control is a nightmare.

In my view, temporal-spatial decomposition is one of the most reasonable solutions I’ve seen to the issue of ERP component extraction at the averaged-trial level. The approach is delightfully straightforward: first, stack all participants, conditions, and recording sites as observations with time points as variables, and run PCA on that matrix.
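To make that concrete, here’s a minimal Python sketch of the stacking and temporal PCA step. The array names and sizes are made up for illustration, random numbers stand in for real averaged data, and I’m using plain scikit-learn PCA (in practice you’d also rotate the loadings, e.g. with Promax, which I’m leaving out to keep the sketch short):

```python
import numpy as np
from sklearn.decomposition import PCA

# Averaged ERP data: participants x conditions x electrodes x time points.
n_part, n_cond, n_elec, n_time = 20, 2, 256, 250
erp = np.random.randn(n_part, n_cond, n_elec, n_time)  # stand-in for real data

# Stack participants, conditions, and electrodes as observations (rows),
# leaving time points as the variables (columns).
X = erp.reshape(-1, n_time)               # (20 * 2 * 256, 250)

# Temporal PCA; the retention count (6 here) comes from a selection
# criterion applied separately (more on that below).
pca = PCA(n_components=6)
scores = pca.fit_transform(X)             # (observations, 6) temporal factor scores
loadings = pca.components_                # (6, 250) time-point loadings
```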

Just as you would when running PCA on a bunch of questionnaire item scores, the result of this process is a set of temporal factors onto which the items (in this case, time points) load. You can then take the loadings from any one of those factors and multiply them by the actual voltage to reconstruct how that temporal component would look if it were pulled out from the full raw voltage fluctuations. So, for instance, taking a single participant’s averaged data from an oddball task, running PCA on it, and reconstructing the waveforms of the extracted temporal factors would give output like the following (windowed to a single electrode for now; we’ll get to the spatial aspect in a bit).
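In code terms (continuing the hypothetical arrays from the sketch above), that reconstruction is just the outer product of one factor’s scores with its loadings:

```python
# Voltage attributable to temporal factor k alone: the outer product of
# that factor's scores with its loadings, back on the original timebase.
k = 0
factor_wave = np.outer(scores[:, k], loadings[k])   # (observations, 250)

# e.g. the factor-k waveform for participant 0, condition 0, electrode 10,
# given the row ordering produced by the reshape above:
row = (0 * n_cond + 0) * n_elec + 10
single_wave = factor_wave[row]                      # one 250-sample waveform
```

(Strictly speaking, scikit-learn’s PCA centers each time point first, so this is the factor’s contribution around the mean waveform.)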

Here we can see that we now have a set of 6 temporal factor scores for each of the electrodes entered into the analysis. The specific number depends on the selection criterion used. Parallel analysis is a decent one (i.e. run another PCA on a similar-sized dataset effectively filled with random numbers and plot the variance-accounted-for values obtained from that against the values from the actual dataset), but personally I find a 95% variance criterion for temporal factors tends to work better. EEG is pretty noisy and, at this stage, allowing some of that noise to be taken out in noise factors instead of being forced into the signal seems to work pretty well. What this step has effectively done is reduce the massive number of time points (in this dataset’s case, 250) to a significantly smaller set of 6 temporal factors onto which those time points load in varying degrees.
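Here’s a rough sketch of both retention criteria, again continuing the arrays from above (the exact flavor of parallel analysis varies; this is the simplest version):

```python
# Parallel analysis: compare the variance each factor accounts for in the
# real data against a same-sized dataset of pure noise.
pca_real = PCA().fit(X)
pca_rand = PCA().fit(np.random.randn(*X.shape))

keep = pca_real.explained_variance_ratio_ > pca_rand.explained_variance_ratio_
n_parallel = int(keep.argmin()) if not keep.all() else keep.size  # first failure

# The 95% variance-accounted-for criterion I tend to prefer here:
cum = np.cumsum(pca_real.explained_variance_ratio_)
n_95 = int(np.searchsorted(cum, 0.95)) + 1
```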

Pretty good progress so far. A keen eye would already have spotted a nice P300 response and what looks like a late slow wave. I favor a multi-pronged selection process which starts with how much variance a factor accounts for, then considers cohesion in scalp topography and temporal profile, and finally how well those factors map onto prior work involving the component. Using just the first criterion in this case (setting 10% as the minimum), those two factors are the only ones that make the cut.

We can then plot these temporal factors by condition and see some sweet, empirically extracted component waveform differences (noting, of course, that they’re unlikely to correspond 100% with the “true” component; there’s a bunch of nuance to this which I’m skipping for now. I recommend reading Dien (2010, 2012), Dien, Beal, & Berg (2005), and Dien, Khoe, & Mangun (2007) for loads of fun data on this).

Dien, J. (2010). Evaluating two-step PCA of ERP data with Geomin, Infomax, Oblimin, Promax, and Varimax rotations. Psychophysiology, 47(1), 170-183. doi: 10.1111/j.1469-8986.2009.00885.x
Dien, J. (2012). Applying principal components analysis to event-related potentials: A tutorial. Developmental Neuropsychology, 37(6), 497-517. doi: 10.1080/87565641.2012.697503
Dien, J., Beal, D. J., & Berg, P. (2005). Optimizing principal components analysis of event-related potentials: Matrix type, factor loading weighting, extraction, and rotations. Clinical Neurophysiology, 116(8), 1808-1825. doi: 10.1016/j.clinph.2004.11.025
Dien, J., Khoe, W., & Mangun, G. R. (2007). Evaluation of PCA and ICA of simulated ERPs: Promax vs. infomax rotations. Human Brain Mapping, 28(8), 742-763.

This is great, but we’re still left with the problem of the spatial dimension; note that the waveforms above are from a “peak” electrode, and in my case I have 256 electrodes. As discussed previously, how do we determine the “best” electrode or electrode cluster to use to represent each of these extracted temporal components? The answer: all of them, with weights on their contribution varying based on how much of their signal contributes to that component. Revisiting the data structure, we just do a lil’ flip of the matrix and end up with:

Then we can run PCA on that to extract a set of spatial factors from the solution. One can think of this as taking the full electrode structure and representing it with a smaller set of virtual electrodes that do a pretty decent job of capturing the spatial variability observed on the scalp. So, just like the temporal solution, our set of 256 electrodes gets reduced to a smaller set of “virtual” electrodes (in this case 4) onto which the 6 previously extracted temporal factors load in varying degrees (note: we do have to move to the full set of participants to do both the spatial and temporal rotations one after another).
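Continuing the sketch, the spatial step just rearranges the temporal factor scores so that electrodes become the variables, then runs PCA again (names and the retention count of 4 are again made up for illustration):

```python
# Unstack the temporal factor scores, then flip the structure so that
# electrodes are the variables (columns) and participant x condition x
# temporal-factor combinations are the observations (rows).
S = scores.reshape(n_part, n_cond, n_elec, 6)
S = S.transpose(0, 1, 3, 2).reshape(-1, n_elec)     # (20 * 2 * 6, 256)

# Spatial PCA: 256 real electrodes -> 4 "virtual" electrodes.
spca = PCA(n_components=4)
spatial_scores = spca.fit_transform(S)              # (20 * 2 * 6, 4)
spatial_loadings = spca.components_                 # (4, 256) electrode weights
```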

The result of this process is a set of temporal-spatial factors. Each of these corresponds to a single temporal factor (which captures the temporal dynamics of a single component) expressed on a single virtual electrode (which captures the spatial dynamics of the full set of electrodes, based on how the spatial patterns of the temporal factors express themselves). In other words, we’ve empirically extracted a set of components with unique temporal and spatial characteristics that we can plot and evaluate, with each component represented by a single analyzable number. Better yet, each participant has a personal score (remember, we never collapsed that) for each condition (again, not collapsed in our analysis). You can then run any sort of analysis on those numbers (with the usual caveats of making sure it appropriately treats the properties of component scores) and head off into the sunset with an awesome paper.
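As a final sketch of where this lands: each participant-by-condition cell now holds one score per temporal-spatial factor pairing, which you can feed into whatever analysis you like. A simple paired comparison shown here, with hypothetical factor indices:

```python
from scipy import stats

# Rearrange the spatial scores back into
# participant x condition x temporal factor x virtual electrode.
ts = spatial_scores.reshape(n_part, n_cond, 6, 4)

# Pick one temporal-spatial pairing (e.g. the P300-like factor on its
# dominant virtual electrode) and compare conditions.
tf, ve = 0, 0                                       # hypothetical indices
factor_scores = ts[:, :, tf, ve]                    # (participants, conditions)
t, p = stats.ttest_rel(factor_scores[:, 0], factor_scores[:, 1])
```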

At the end of this process, we’ve taken a 250 (time points) * 256 (electrodes) * 2 (conditions) matrix and reduced it to a 24 (temporal-spatial factors) * 2 (conditions) solution. Apply a variance-accounted-for criterion to this set (say 10%) and we’re down to just 2 temporal-spatial factor scores * 2 conditions. Something like:

Beautiful innit…  =)
