This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication.

Real-Time Brain Oscillation Detection and Phase-Locked Stimulation Using Autoregressive Spectral Estimation and Time-Series Forward Prediction

L. Leon Chen, Radhika Madhavan, Benjamin I. Rapoport, Student Member, IEEE, and William S. Anderson*

Abstract—Neural oscillations are important features in a working central nervous system, facilitating efficient communication across large networks of neurons. They are implicated in a diverse range of processes, such as synchronization and synaptic plasticity, and can be seen in a variety of cognitive processes. For example, hippocampal theta oscillations are thought to be a crucial component of memory encoding and retrieval. To better study the role of these oscillations in various cognitive processes, and to be able to build clinical applications around them, accurate and precise estimates of the instantaneous frequency and phase are required. Here, we present methodology based on autoregressive modeling to accomplish this in real time. This allows the targeting of stimulation to a specific phase of a detected oscillation. We first assess the performance of the algorithm on two signals where the exact phase and frequency are known. Then, using intracranial EEG recorded from two patients performing a Sternberg memory task, we characterize our algorithm's phase-locking performance on physiologic theta oscillations: optimizing algorithm parameters on the first patient using a genetic algorithm, we carried out cross-validation procedures on subsequent trials and electrodes within the same patient, as well as on data recorded from the second patient.

Index Terms—intracranial EEG, neural oscillations, theta rhythm, closed-loop stimulation, phase-locking, real time, autoregressive model, genetic algorithm.

I. INTRODUCTION

Neural oscillations are fundamental to the normal functioning of a working central nervous system. They can be observed in single neurons as rhythmic changes of either the subthreshold membrane potential or of cellular spiking behavior. Large populations of such neurons can give rise to synchronous activity, which may correspond to rhythmic oscillations in the local field potential (LFP). These oscillations can in turn modulate the excitability of other individual neurons. Therefore, a key function of these oscillations is to facilitate efficient communication across large neuronal networks, as the synchronous excitation of groups of neurons allows them to form functional networks [1]. Additionally, network oscillations bias input selection, temporally link neurons into assemblies, and facilitate synaptic plasticity, mechanisms that all support the long-term consolidation of information [2].

There are distinct oscillators in various brain regions that are governed by different physiological mechanisms. We are only beginning to uncover the various roles these oscillators play in different aspects of cognition. Numerous EEG, MEG, ECoG, and single-unit recording studies have shown that oscillations at certain frequencies can be elicited or modulated by specific task demands, and that their amplitude or power correlates with the outcome of those tasks [3], [4].

L. L. Chen, R. Madhavan, and W. S. Anderson* are with the Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115 USA. *email: [email protected]

B. I. Rapoport is with the Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139 USA.

Copyright (c) 2010 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending an email to [email protected].
For example, prominent oscillations in the theta frequency range can be detected in the hippocampus and entorhinal cortex of rats during locomotion, orienting, conditioning, or while they are performing learning or memory tasks [5], as well as in humans performing various memory and spatial navigation tasks [6], [7], [8], [9], [10], [11], [12], [13], [14]. Because of the role of hippocampal theta oscillations in modulating long-term potentiation (LTP), they are thought to be an important component of memory encoding [15], [16], [17], [18], [19], [20]. Synchronization and coherence of theta oscillations between the hippocampus and other brain regions, such as the prefrontal cortex, have also been shown to be an important factor in successful learning and memory [21], [22], [23], [24].

The phase of these neural oscillations can possibly be used to store and carry information [25], [2], as well as to modulate physiological activity such as LTP. For example, stimulation applied to the perforant pathway at the peak of hippocampal theta rhythms induced LTP, while stimulation applied at the trough induced long-term depression [17]. Theta also serves to temporally organize the firing activity of single neurons involved in memory encoding [26], [27], such that the degree to which single spikes are phase-locked to the theta-frequency field oscillations is predictive of how well the corresponding memory item is transferred to long-term memory [14]. Such temporal patterns of neural activity are potentially important considerations in the design of future neural interface systems.

In most studies, phase relationships are characterized through post hoc analysis, as accurate measures of frequency and phase, and their complex relationships with other phenomena, require analysis in the time-frequency domain. Real-time systems that could potentially utilize oscillation phase information, for example brain-computer interfaces [28] or responsive closed-loop stimulator devices that combine neural ensemble decoding with simultaneous electrical stimulation feedback [29], [30], [31], would require precise and accurate measurements of the instantaneous phase. Phase-specific stimulation could also aid experimental research on the temporal patterns of neural ensemble activity and their correlations with cognitive processes and behavior. A few studies have performed such phase-specific electrical stimulation in animals. Pavlides et al. [15] and Hölscher et al. [16] built analog circuits that triggered stimulation pulses at the peak, zero-crossing, and trough of the hippocampal LFP signal. This approach assumes a bandwidth sufficiently narrow that the peak, zero-crossing, and trough of the input signal approximate those of the actual underlying oscillation. Hyman et al. used a dual-window discrimination method for peak detection, whereby two windows of variable time widths and heights were manually created to fit each individual animal's typical theta frequency and amplitude, and the stimulator was set to be delay-triggered if the input waveform successfully passed through both windows [17]. Because this approach requires manual calibration to a specific setting, real-time operation in the face of dynamic amplitude or frequency changes would not be possible. These systems would not be sufficient for neural interfaces operating in real time or for experiments requiring higher-resolution phase detection. As such, an alternative approach is needed.

Here, we present methods to accurately estimate the instantaneous frequency and phase of an intracranial EEG oscillation signal in real time. At the core of our methodology is an autoregressive model of the EEG signal, which we use both to optimize the bandwidth of the narrow-band signal using estimates of the power spectral density, and to perform time-series forward predictions. These two steps in conjunction allow us to make precise and accurate estimates of the instantaneous frequency and phase of an oscillation, which we then use to target output stimulation pulses to a specific phase of the oscillation.

Copyright (c) 2011 IEEE. Personal use is permitted. For any other purposes, permission must be obtained from the IEEE by emailing [email protected].

II. METHODS

A. Algorithm Overview

The ultimate goal of our algorithm is to calculate the instantaneous frequency and phase of a neurophysiological signal at a specific point in time, with the accuracy and precision necessary to deliver phase-locked stimulation pulses in real time. The algorithm comprises several sequential steps: 1) frequency band optimization within a predefined frequency band (for theta, we use 4–9 Hz), utilizing autoregressive spectral estimation; 2) zero-phase bandpass filtering, based on the results of the frequency band optimization procedure; 3) estimating the future signal by autoregressive time-series prediction; 4) calculating the instantaneous frequency and phase via the Hilbert-transform analytic signal; and 5) calculating the time lag until the desired phase for the output stimulation pulse. A graphical representation of these steps is depicted in Fig. 1. It must be noted, however, that there are several parameters used by the algorithm that are optimized offline prior to its online operation. The procedure for the offline optimization of these parameters using a genetic algorithm is discussed in Section II-I.

Fig. 1: Overview of algorithm. (a) Raw iEEG signal, where t0 represents the current time in a real-time acquisition process. (b) Analyze the last 1-second segment of iEEG signal. (c) Use autoregressive spectral estimation to calculate the power spectral density in the 1-second segment. The frequency band optimization procedure is carried out. (d) The 1-second segment is bandpass filtered in both the forward and backward directions, based on the optimized passband. (e) Using the bandpass-filtered signal from tstart to tstop, time-series forward predictions (shown in red) are made using the autoregressive model. (f) The instantaneous phase and frequency of this forward-predicted segment are calculated. (g) Using the instantaneous phase and frequency of the forward-predicted segment at t0, a time delay from t0 is calculated. Output stimulation is triggered after this time delay (shown in red). Overlaid is the raw iEEG signal from (b) plus some additional time.
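As a concrete illustration, the per-window computation of steps (d)–(g) in Fig. 1 can be sketched in Python. This is a minimal sketch under stated assumptions, not the authors' implementation: the sampling rate, filter design, AR order, and prediction horizon are illustrative choices, and the frequency-band optimization of step (c) is replaced here by a fixed 4–9 Hz theta band.

```python
# Sketch of Fig. 1 steps (d)-(g): zero-phase bandpass, AR forward
# prediction, Hilbert-transform phase, and delay-to-target-phase.
# FS, AR_ORDER, the band edges, and the horizon are assumed values.
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import butter, filtfilt, hilbert

FS = 500          # assumed sampling rate (Hz)
AR_ORDER = 20     # illustrative AR model order

def fit_ar(x, p):
    """Yule-Walker AR(p) fit; returns forward-prediction coefficients."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    r = np.correlate(x, x, "full")[n - 1:n + p] / n   # lags 0..p (biased)
    coef, *_ = np.linalg.lstsq(toeplitz(r[:p]), r[1:], rcond=None)
    return coef

def forward_predict(x, coef, n_ahead):
    """Iterate x[t] = sum_k coef[k] * x[t-1-k] beyond the end of x."""
    hist = list(x[-len(coef):])
    out = []
    for _ in range(n_ahead):
        nxt = float(np.dot(coef, hist[::-1]))  # coef[0] pairs with lag 1
        out.append(nxt)
        hist = hist[1:] + [nxt]
    return np.array(out)

def delay_to_phase(segment, target_phase, band=(4.0, 9.0), horizon=0.1):
    """Time (s) from the end of `segment` until the band-limited
    oscillation is predicted to reach `target_phase` (radians)."""
    # (d) zero-phase bandpass filter (forward and backward passes)
    b, a = butter(2, [band[0] / (FS / 2), band[1] / (FS / 2)], btype="band")
    filt = filtfilt(b, a, segment)
    # (e) AR forward prediction, so the phase at t0 is not an edge value
    coef = fit_ar(filt, AR_ORDER)
    ext = np.concatenate([filt, forward_predict(filt, coef, int(horizon * FS))])
    # (f) instantaneous phase and frequency from the analytic signal
    phase = np.unwrap(np.angle(hilbert(ext)))
    inst_f = np.diff(phase) * FS / (2 * np.pi)
    t0 = len(segment) - 1          # index of "now"
    # (g) delay until the desired phase, assuming locally constant frequency
    dphi = (target_phase - phase[t0]) % (2 * np.pi)
    return dphi / (2 * np.pi * inst_f[t0])
```

For a 6 Hz test oscillation, `delay_to_phase(segment, 0.0)` returns roughly the time remaining until the next peak, which a real-time loop would use to schedule the stimulation trigger.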
B. Autoregressive Model

Autoregressive (AR) modeling has been successfully applied to EEG signal analysis for diverse applications such as data compression, segmentation, classification, sharp transient detection, and rejection or canceling of artifacts [32], [33]. Although the processes underlying EEG signals may be non-linear, traditional linear AR modeling has been shown to be as good as, or even slightly better than, non-linear models in at least one study [34], in terms of the correlation coefficient between forecasted and real time series.
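As a toy illustration of scoring a linear AR forecast by the correlation between predicted and actual samples, the following self-contained sketch simulates an oscillatory AR(2) process, fits a linear model by least squares, and evaluates held-out one-step predictions. The pole location and noise level are arbitrary choices for illustration.

```python
# Fit a linear AR(2) model to a simulated oscillatory process and score
# one-step-ahead forecasts by their correlation with the true samples.
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(2) process with a resonant pole pair (theta-like peak)
f0, fs, r = 6.0, 200.0, 0.98
a1 = 2 * r * np.cos(2 * np.pi * f0 / fs)
a2 = -r ** 2
x = np.zeros(4000)
for t in range(2, len(x)):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.normal(scale=0.1)

train, test = x[:3000], x[3000:]

# Least-squares AR(2) fit on the training half
X = np.column_stack([train[1:-1], train[:-2]])
coef, *_ = np.linalg.lstsq(X, train[2:], rcond=None)

# One-step-ahead forecasts on held-out data
pred = coef[0] * test[1:-1] + coef[1] * test[:-2]
rho = np.corrcoef(pred, test[2:])[0, 1]
# rho is close to 1 here: a strongly resonant process is highly
# predictable one step ahead by a linear model.
```

The same scoring generalizes directly to multi-step forecasts by iterating the fitted recursion, which is how the forward-prediction step of the algorithm is evaluated.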

It was found that processes with spectra limited to certain frequencies plus white noise can be well described by an AR model, while processes with power spectra characterized by multiple very narrow peaks are described poorly by an AR model. Therefore, for brain oscillation detection, AR modeling is a natural choice. Furthermore, parameters can be updated in an adaptive manner using the Kalman filtering algorithm, or chosen to give the best fit to a segment of data samples using either the Levinson-Durbin algorithm or the Burg algorithm [33]. Particularly with strong, coherent oscillations, small segments of the EEG are presumed to be locally stationary, and thus a non-adaptive model is suitable. Autoregressive modeling provides a robust method of estimating the power spectrum for short (1–2 s) EEG segments, and is less susceptible to spurious results [33].

An autoregressive model AR(p) of order p is a random process defined as

    X_t = c + \sum_{k=1}^{p} \alpha_k X_{t-k} + \varepsilon_t,    (1)

where \alpha_1, ..., \alpha_p are the parameters of the model, c is a constant, and \varepsilon_t is white noise. If a is the vector of model parameters, then for a given time-series sequence x(t) and model output x̂(t, a), the forward prediction error is given by

    e(t, a) = x(t) - x̂(t, a),    (2)

where a is found by minimizing the mean squared error

    E(a) = (1/N) \sum_{t=1}^{N} e^2(t, a),    (3)

where N is the segment length of x(t). The autoregressive model can be constructed using one of several algorithms to calculate the model coefficients. They include the least-squares approach, which minimizes the prediction error in the least-squares sense (either the forward prediction error alone or both the forward and backward prediction errors); the Burg lattice method, which solves the lattice filter equations using the mean (either harmonic or geometric) of the forward and backward squared prediction errors; and the Yule-Walker method, which solves the Yule-Walker equations formed from sample covariances, minimizing the forward prediction error. The Burg and Yule-Walker methods both always produce stable models; because we are only interested in forward prediction, we use the Yule-Walker method here.

One issue of critical importance in the successful application of AR modeling is the selection of the model order [33], [32], [35], [36], [37], [38]. Many criteria have been formulated over the years for determining the optimal model order. The most well known of these is Akaike's Information Criterion (AIC) [39]. Other criteria that have been developed include the Bayesian Information Criterion, the Final Prediction Error, and the Minimum Description Length.
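A minimal sketch of Yule-Walker estimation, consistent with Eqs. (1)–(3), combined with AIC-based order selection might look as follows. This is an illustrative implementation under stated assumptions: the AIC variant used, N ln(σ²) + 2p, is one common form, and the test signal is an arbitrary AR(2) process.

```python
# Yule-Walker AR(p) estimation plus AIC-based model order selection.
import numpy as np
from scipy.linalg import toeplitz

def yule_walker(x, p):
    """Solve the Yule-Walker equations R a = r for the AR(p)
    coefficients; also return the innovation (prediction error) variance."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    r = np.correlate(x, x, "full")[n - 1:n + p] / n   # biased autocovariances
    a = np.linalg.solve(toeplitz(r[:p]), r[1:])
    sigma2 = r[0] - a @ r[1:]                          # residual power
    return a, sigma2

def pick_order_aic(x, max_p=30):
    """Return the order p minimizing AIC(p) = N*ln(sigma2(p)) + 2p."""
    n = len(x)
    aic = []
    for p in range(1, max_p + 1):
        _, s2 = yule_walker(x, p)
        aic.append(n * np.log(s2) + 2 * p)
    return int(np.argmin(aic)) + 1

# Example: for a true AR(2) process, AIC should favor a low order
rng = np.random.default_rng(1)
x = np.zeros(5000)
for t in range(2, len(x)):
    x[t] = 1.5 * x[t - 1] - 0.8 * x[t - 2] + rng.normal()
p_hat = pick_order_aic(x)
```

For iEEG segments the parsimony penalty matters: too low an order smooths away the spectral peak being tracked, while too high an order introduces spurious peaks into the AR spectral estimate.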