Wallerius J., Trejo L.J., Matthews R., Rosipal R., and Caldwell J.A.
Robust feature extraction and classification of EEG spectra for real-time classification of cognitive state
Proceedings of the 11th International Conference on Human-Computer Interaction, Las Vegas, NV, 2005.
We developed an algorithm to extract and combine EEG spectral features that effectively classifies cognitive states and is robust in the presence of sensor noise. The algorithm uses partial least squares (PLS) to decompose multi-sensor EEG spectra into a small set of components. These components are chosen to be mutually orthogonal and to maximize the covariance between the EEG input variables and discrete output variables, such as different cognitive states. A second stage of the algorithm uses robust cross-validation methods to select the optimal number of components for classification. The algorithm can process practically unlimited input channels and spectral resolutions, and no a priori information about the spatial or spectral distributions of the sources is required. A final stage uses robust cross-validation methods to reduce the set of electrodes to the minimum set that does not sacrifice classification accuracy.

We tested the algorithm with simulated EEG data in which mental fatigue was represented by increased frontal theta and occipital alpha band power. We synthesized EEG from bilateral pairs of frontal theta sources and occipital alpha sources generated by second-order autoregressive processes. We then excited the sources with white noise and mixed the source signals into a 19-channel sensor array (10-20 system) with the three-sphere head model of the BESA Dipole Simulator. We generated synthetic EEG for 60 2-second epochs. Separate EEG series represented the alert and fatigued states, between which alpha and theta amplitudes differed on average by a factor of two. We then corrupted the data with broadband white noise to yield signal-to-noise ratios (SNR) between 10 dB and -15 dB. We used half of the segments for training and cross-validation of the classifier and the other half for testing.
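The simulation procedure above can be sketched as follows. This is a minimal illustration, not the authors' BESA-based pipeline: the sampling rate, AR pole radius, and the random mixing matrix standing in for the three-sphere head model are all assumptions for demonstration.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
fs = 128                       # Hz; assumed sampling rate
n_epochs, epoch_len = 60, 2 * fs  # 60 two-second epochs
n = n_epochs * epoch_len

def ar2_source(f0, r, n_samples):
    """Second-order autoregressive process resonant near f0 Hz,
    driven by white noise (pole radius r controls bandwidth)."""
    a = [1.0, -2 * r * np.cos(2 * np.pi * f0 / fs), r ** 2]
    return lfilter([1.0], a, rng.standard_normal(n_samples))

# Bilateral frontal theta (~6 Hz) and occipital alpha (~10 Hz) sources
sources = np.vstack([ar2_source(6, 0.95, n),  ar2_source(6, 0.95, n),
                     ar2_source(10, 0.95, n), ar2_source(10, 0.95, n)])

# Hypothetical 19x4 mixing matrix; the paper uses the BESA
# three-sphere head model to project sources to a 10-20 montage.
A = rng.standard_normal((19, 4))
eeg = A @ sources  # 19 channels x n samples

def add_noise(x, snr_db):
    """Corrupt the signal with broadband white noise at a target SNR."""
    sig_pow = np.mean(x ** 2)
    noise_pow = sig_pow / (10 ** (snr_db / 10))
    return x + rng.standard_normal(x.shape) * np.sqrt(noise_pow)

noisy = add_noise(eeg, 0)  # e.g. 0 dB SNR
```

Sweeping `add_noise` over 10 dB to -15 dB reproduces the noise-robustness conditions described in the text.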
Over this range of SNRs, classifier performance remained high down to -5 dB and then degraded, with test proportions correct (TPC) of 94%, 95%, 96%, 97%, 84%, and 53% for SNRs of 10 dB, 5 dB, 0 dB, -5 dB, -10 dB, and -15 dB, respectively. We will discuss the practical implications of this algorithm for real-time state classification and an off-line application to EEG data taken from pilots who performed cognitive and flight tests over a 37-hour period of extended wakefulness.