An algorithm for detecting events in video EEG monitoring data of patients with craniocerebral injuries

One of the problems solved by analyzing long-term Video EEG monitoring data is the differentiation of epileptic and artifact events. For this purpose, not only multichannel EEG signals but also video data are analyzed, since traditional methods based on EEG wavelet spectrograms cannot reliably distinguish an epileptic seizure from a chewing artifact. In this paper, we propose an algorithm for detecting artifact events based on a joint analysis of the optical flow level and the ridges of wavelet spectrograms. Preliminary results of the analysis of real clinical data are given; they demonstrate that non-epileptic events can, in principle, be reliably distinguished from epileptic seizures.


Introduction
The development of post-traumatic epilepsy is one of the most common consequences of traumatic brain injury. Video-electroencephalographic (Video EEG) monitoring is used to confirm epilepsy, to track the course of the disease and the effectiveness of therapy, and to diagnose convulsive and non-convulsive seizures. Synchronized recording of video of the patient's clinical condition and of the bioelectric activity of the brain (i.e., EEG) makes it possible to reliably diagnose epileptic seizures and to differentiate them from non-epileptic events. The authors' analysis of the periodical literature and monographs in this subject area showed that very few publications address the automatic detection of epileptic seizures in video sequences obtained during Video EEG monitoring. Several methods have been proposed for the automatic detection of seizures from EEG data [1-5]. In [6, 7], the authors proposed an algorithm for automatic seizure detection based on the analysis of quantitative characteristics of facial expressions in video sequences: using the magnitude of the optical flow, a group of frames with high scene dynamics is detected in the video sequence. The algorithm detects two types of diagnostic events. The first type is observed when patients are in a coma. The second type appears as fading for several seconds in active patients. The detected events coincided quite accurately with the events found by the analysis of wavelet spectrograms of the EEG channel proposed in [4] for Video EEG monitoring data. However, analysis of the video channel alone does not allow one to distinguish activity due to the patient's movement from activity generated by a seizure.
An important task of analyzing Video EEG data is to differentiate epileptiform activity from chewing artifacts. The method presented in [4] does not allow this.
In [5], a method for finding epileptic seizures and chewing artifacts in electroencephalographic signals was proposed, based on the analysis of their wavelet spectrograms and the parameters of the spectrogram ridges. It was found that, using the frequency of the spectral maximum and the arithmetic mean deviation of the frequency along fragments of the wavelet-spectrogram ridge, an event can be attributed either to an epileptic seizure or to a chewing artifact. It was shown that, at frequencies from 3.5 to 6 Hz in the Fourier spectra of sections of the wavelet spectrograms, the peak frequency for an epileptic seizure is almost three times higher than for chewing. The half-width of the Fourier spectra of sections of EEG wavelet spectrograms above a 3.5 Hz cutoff is 1.5 to 3 times greater for chewing artifacts than for epileptic seizures. These values are used as features for differentiating an epileptic seizure from a chewing artifact. However, this method cannot distinguish seizures from artifacts associated with patient movement. To increase the reliability of differentiation, a synchronous analysis of video sequences and EEG wavelet spectrograms is necessary.
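As an illustration of such spectral features, the sketch below (not the authors' implementation) computes the peak frequency and the half-width, taken here as the full width at half maximum, of a magnitude spectrum above a cutoff, using a plain DFT. The sampling rate, cutoff, and signal are illustrative assumptions.

```python
import cmath
import math

def peak_and_halfwidth(x, fs, f_cut=3.5):
    """Peak frequency and half-width (FWHM, in Hz) of the magnitude
    spectrum of `x` above the cutoff `f_cut`, via a direct DFT.
    Sketch only; a real implementation would use an FFT."""
    n = len(x)
    freqs, mags = [], []
    for k in range(n // 2 + 1):
        f = k * fs / n
        if f < f_cut:
            continue  # ignore bins below the cutoff frequency
        s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        freqs.append(f)
        mags.append(abs(s))
    kmax = max(range(len(mags)), key=mags.__getitem__)
    half = mags[kmax] / 2.0
    above = [f for f, m in zip(freqs, mags) if m >= half]
    width = above[-1] - above[0] if len(above) > 1 else 0.0
    return freqs[kmax], width
```

For a pure 10 Hz sine sampled at 100 Hz the peak lands on the 10 Hz bin, and since only one bin exceeds half the maximum, the half-width collapses to zero.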
In this paper, we propose an algorithm for synchronous analysis of video sequences and EEG signals, based on a combination of previously developed methods described in [4,6,7], which allows differentiating an epileptic seizure from artifacts caused by chewing and moving. The proposed algorithm is capable of detecting two types of diagnostic events in video EEG data taken from patients with brain injury.

Event detection in the video channel of video EEG monitoring data
The algorithm proposed in [6, 7] analyzes the dynamics of informative regions of interest associated with the patient's face, head, and neck. This study addresses a more general case in which the informative region contains the whole image of the patient. Frames of video sequences taken from Video EEG monitoring data have the following features: first, the aspect angle of the patient's video recording is arbitrary; second, medical equipment may partially occlude the patient; third, medical personnel or other patients may appear in the frame. When analyzing video sequences, the following events must be detected: (a) an epileptic seizure; (b) patient movement (e.g., changing posture, moving around the room); (c) chewing (facial movement typical, for example, of eating).
As a measure of activity J(i) in the region of interest, we use the total value of the optical flow calculated for each frame of the video sequence [8], where i is the frame number. Since a noise component is present in J(i) (see fig. 1), a smoothed value of the activity measure Ĵ(i) must be used when detecting events. For smoothing, a discrete version of the Kalman-Bucy filtering algorithm is used [9], since it provides an optimal estimate in the sense of minimum error variance. Each of the diagnostic and artifact events is characterized by a certain range of levels of the smoothed activity measure Ĵ(i), so the decision on the result of event recognition is made according to a threshold rule. To exclude false positives of the detector due to short-term jumps of the optical flow, a decision on the occurrence of an event is made only if Ĵ(i) exceeds a predetermined threshold over a sequence of at least M frames. Thus, the decision rule is formulated as

Event_1 = 1 if Ĵ(i) > T_1 for all i ∈ [i_0, i_0 + M − 1], and Event_1 = 0 otherwise, (1)

where Event_1 is the indicator of the event; T_1 is the threshold; i_0 is the number of the frame starting from which the inequality holds; and M is the length of the sequence of frames required for deciding that a diagnostic event is present. The threshold value is defined as

T_1 = J_0 + k_1 σ_1, (2)

where J_0 is the mean value of Ĵ(i) over a fragment of the video sequence with low scene dynamics, σ_1 is the standard deviation of Ĵ(i), and k_1 is a coefficient.
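As a minimal illustration of the smoothing step, the sketch below applies a scalar discrete Kalman filter with a slowly varying level model, a common discrete counterpart of Kalman-Bucy filtering, to a noisy activity series. The noise variances q and r are illustrative assumptions, not values from the paper.

```python
import random

def kalman_smooth(measurements, q=1e-4, r=0.05):
    """Scalar discrete Kalman filter assuming a slowly drifting level.

    q: process-noise variance (how fast the true activity may drift),
    r: measurement-noise variance. Both are illustrative choices.
    """
    x, p = measurements[0], 1.0       # initial state estimate and variance
    smoothed = []
    for z in measurements:
        p = p + q                     # predict: variance grows by process noise
        k = p / (p + r)               # Kalman gain
        x = x + k * (z - x)           # update with the new measurement
        p = (1.0 - k) * p
        smoothed.append(x)
    return smoothed

if __name__ == "__main__":
    random.seed(0)
    truth = [0.2] * 50 + [1.0] * 50   # a step in the activity level
    noisy = [v + random.gauss(0, 0.2) for v in truth]
    print(kalman_smooth(noisy)[-1])
```

With small q the filter averages heavily, suppressing the frame-to-frame noise while still tracking the step in activity after a short lag.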
Events of the second type are manifested in the behavior of active patients as fading for several seconds. These events are also detected using the activity measure; in contrast to the case considered above, the appearance of an event corresponds to a minimum of the activity measure. The decision rule takes the form

Event_2 = 1 if Ĵ(i) < T_2 for all i ∈ [i_0, i_0 + M − 1], and Event_2 = 0 otherwise, (3)

where Event_2 is the indicator of the event, and the threshold value is calculated as

T_2 = J_0 − k_2 σ_2, (4)

where σ_2 is the standard deviation of Ĵ(i) and k_2 is a coefficient. Thus, the algorithm for recording events in the video channel of Video EEG monitoring data includes the following operations.
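The two threshold rules for the video channel can be sketched as follows. The indicator is raised only once M consecutive frames satisfy the corresponding inequality; the values of M and of the coefficients k1 and k2 here are illustrative, not those used by the authors.

```python
from statistics import mean, stdev

def detect_events(j_hat, baseline, m=5, k1=2.0, k2=2.0):
    """Per-frame indicators: Event 1 (activity above the upper threshold)
    and Event 2 (fading below the lower threshold), raised only after
    m consecutive frames beyond the threshold.

    j_hat:    smoothed activity measure per frame,
    baseline: a fragment of j_hat with low scene dynamics.
    """
    j0, sigma = mean(baseline), stdev(baseline)
    t1 = j0 + k1 * sigma          # upper threshold (high-activity events)
    t2 = j0 - k2 * sigma          # lower threshold (fading events)
    event1 = [0] * len(j_hat)
    event2 = [0] * len(j_hat)
    run1 = run2 = 0               # lengths of current runs beyond threshold
    for i, v in enumerate(j_hat):
        run1 = run1 + 1 if v > t1 else 0
        run2 = run2 + 1 if v < t2 else 0
        if run1 >= m:
            event1[i] = 1
        if run2 >= m:
            event2[i] = 1
    return event1, event2
```

A short burst of fewer than m frames above the threshold leaves both indicators at zero, which is exactly the protection against short-term optical flow jumps described above.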
1. Read frame number i from the video sequence data.
2. Calculate the activity measure J(i) as the total optical flow in the region of interest.
3. Smooth J(i) with the discrete Kalman-Bucy filter to obtain Ĵ(i).
4. Apply the decision rules for Event 1 and Event 2 to Ĵ(i).
It should be noted that a moving artifact with a sufficiently high level of Ĵ(i) will be detected as a seizure. Therefore, to differentiate diagnostic and artifact events, a synchronous analysis of the video record and the EEG signals is necessary.

Event detection in EEG signals
In the appendix to [10] it was shown that for a signal S(t) = A(t) exp(iΦ(t)), when the amplitude A(t) > 0 exhibits relatively slow variations compared to the fast variations of the phase Φ(t) and complies with the asymptotic properties [11], the following expressions are valid:

A(t) ≈ 2|W(t, f_r(t))|,  Φ(t) ≈ arctan[Im W(t, f_r(t)) / Re W(t, f_r(t))], (5)

where W(t, f_r(t)) = max_f |W(t, f)| defines the ridge of the Morlet wavelet transform W, f_r is the ridge frequency, and t is time. EEG signals are pre-filtered by a 25 Hz notch filter and a second-order Butterworth filter with a passband from 0.5 to 22 Hz. Detection of specific events in EEG signals is carried out by analyzing the power spectral density PSD = |W(t, f_r)|² along the ridge of the wavelet spectrogram [4].
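A compact sketch of the ridge extraction, assuming a complex Morlet wavelet with dimensionless center frequency w0 = 6 (the exact wavelet parameters of [4, 10] are not stated here, so this value is an assumption). For each time sample the transform magnitude is maximized over a grid of candidate frequencies; the ridge PSD is the squared magnitude at that maximum.

```python
import cmath
import math

def morlet_cwt_ridge(signal, fs, freqs, w0=6.0):
    """Ridge of a complex Morlet wavelet transform.

    For each sample, evaluates W(t, f) over the candidate `freqs` and
    keeps the frequency of maximum magnitude (the ridge) together with
    the ridge PSD |W(t, f_r)|^2. Direct O(N^2) convolution: adequate
    for a sketch, too slow for long clinical records.
    """
    n = len(signal)
    ridge_f, ridge_psd = [0.0] * n, [0.0] * n
    for f in freqs:
        scale = w0 * fs / (2.0 * math.pi * f)   # samples per wavelet width
        half = int(4 * scale)                   # truncate the Gaussian tail
        wav = [cmath.exp(1j * w0 * k / scale) * math.exp(-0.5 * (k / scale) ** 2)
               for k in range(-half, half + 1)]
        norm = math.sqrt(scale)
        for t in range(n):
            acc = 0.0 + 0.0j
            for k, w in enumerate(wav):
                idx = t + k - half
                if 0 <= idx < n:                # zero-pad outside the record
                    acc += signal[idx] * w.conjugate()
            psd = abs(acc / norm) ** 2
            if psd > ridge_psd[t]:              # keep the ridge frequency
                ridge_psd[t], ridge_f[t] = psd, f
    return ridge_f, ridge_psd
```

For a pure 5 Hz tone the ridge frequency settles on the 5 Hz candidate away from the record edges, and the ridge PSD is what rule (6) below thresholds.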
The decision rule for fixing the event is

Event_3 = 1 if PSD(t) > T_3, and Event_3 = 0 otherwise, (6)

where Event_3 is the indicator of the event and T_3 is the threshold value of the wavelet-ridge PSD, which can be found from the PSD in time intervals without events. Epileptic seizures and the myographic artifact of chewing are both characterized by a comparatively high level of PSD. Therefore, to increase the accuracy of detection of diagnostic events, a synchronous analysis of the video channel is required.

Event detection using the synchronous analysis of video-EEG monitoring data
Each of the diagnostic and artifact events is characterized by a certain range of levels of the smoothed activity measure Ĵ(i) obtained from the video channel and of the power spectral density PSD of the ridge points. The decision rules can therefore be arranged in Table 1 according to the values of the indicators Event_j, j = 1, 2, 3, obtained from (1)-(6) during the synchronous analysis of Video EEG monitoring data. In the next section, we present the results of the experiment conducted to test the proposed algorithm.
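Table 1 itself is not reproduced in this text. Purely as an illustration, the mapping below shows one plausible arrangement of the three indicators that is consistent with the surrounding descriptions (high video activity together with EEG ridge activity for a seizure, video activity without EEG activity for movement); the actual table in the paper may differ.

```python
def classify(event1, event2, event3):
    """Hypothetical per-frame fusion of the three indicators; an
    illustrative stand-in for Table 1, which is not reproduced here."""
    if event1 and event3:
        return "epileptic seizure"            # video + EEG activity
    if event1 and not event3:
        return "movement artifact"            # video activity only
    if event2 and not event3:
        return "chewing artifact"             # fading without EEG activity
    if event3:
        return "EEG event without video activity"
    return "no event"
```

The point of the fusion is visible in the first two branches: the same high level of video activity is read as a seizure or as a movement artifact depending solely on the EEG indicator.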

Computational experiment
To confirm the effectiveness of the developed algorithm, a computational experiment was conducted using Video EEG monitoring data obtained in clinical conditions. The algorithm is implemented in the MatLab environment. The optical flow used as the measure of the patient's activity J(i) is calculated by the Lucas-Kanade algorithm [8], chosen for its highest performance in comparison with other techniques. The smoothed activity measure Ĵ(i) is determined using the discrete version of the Kalman-Bucy filtering algorithm [9]; the parameters of the filtering algorithm are selected from the analysis of test video sequences so as to provide the best error-speed ratio. The values of J(i) and Ĵ(i) are normalized by the area of the region of interest. In the experiment, we applied the detection algorithm to long-term Video EEG records of five patients. The video channel data were analyzed together with the data from three EEG channels selected at the preliminary analysis stage. We analyzed 43 events: ten correspond to epileptic seizures, thirteen are associated with food intake, and twenty with the patient's movement.
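For reference, a minimal single-window Lucas-Kanade step in pure Python (the paper relies on the implementation of [8]; this is only a sketch). It solves the 2x2 normal equations for one flow vector (u, v) over the whole window; the magnitude of such vectors, summed over the region of interest, is the kind of quantity that can serve as an activity measure.

```python
def lucas_kanade_flow(prev, curr):
    """Single-window Lucas-Kanade: least-squares flow (u, v) from spatial
    gradients (central differences on `prev`) and the temporal difference
    curr - prev. Both arguments are equal-sized 2-D lists of gray values."""
    h, w = len(prev), len(prev[0])
    sxx = sxy = syy = sxt = syt = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix = (prev[y][x + 1] - prev[y][x - 1]) / 2.0   # dI/dx
            iy = (prev[y + 1][x] - prev[y - 1][x]) / 2.0   # dI/dy
            it = curr[y][x] - prev[y][x]                   # dI/dt
            sxx += ix * ix; sxy += ix * iy; syy += iy * iy
            sxt += ix * it; syt += iy * it
    det = sxx * syy - sxy * sxy
    if abs(det) < 1e-12:
        return 0.0, 0.0               # untextured window: flow undefined
    # solve [sxx sxy; sxy syy] [u; v] = -[sxt; syt]
    u = (-syy * sxt + sxy * syt) / det
    v = (sxy * sxt - sxx * syt) / det
    return u, v
```

On a synthetic paraboloid image shifted by one pixel, the recovered flow is close to (1, 0), with a small bias from the finite-difference gradients.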
The following results were obtained. Depending on the selected EEG channel, from 35 to 37 events were correctly detected, which amounts to 81.4 to 86 percent. Seizures were correctly localized from 8 to 10 times, chewing artifacts from 11 to 13 times, and events caused by patient movement from 14 to 16 times. Fig. 1 shows graphs of the normalized activity measure, the normalized smoothed measure, and the indicators Event 1 and Event 2 for a fragment of video in which an epileptic seizure is recorded. Fig. 2 shows the projection of the ridge of the EEG wavelet spectrogram in the time-power spectral density axes for the same fragment of the video record; grey shading indicates the intervals in which the Event 3 indicator takes the value 1. From fig. 1 and fig. 2 it follows that the epileptic seizure is reliably detected in the video record and in the wavelet spectrogram of the EEG signal according to the rule presented in Table 1. Fig. 3 and fig. 4 show the results of the analysis of the video channel and the T6-O2 EEG channel for a fragment of the Video EEG data in which the patient's food intake is recorded. In the video channel, the Event 1 indicator takes the value zero over the whole fragment, and the Event 2 indicator takes the value 1 in the interval between 32 and 40 seconds. At the same time, the Event 3 indicator takes the value 1 between 0 and 8 seconds, as well as in several intervals after 120 seconds. In this case, following Table 1, the chewing artifact is fixed at the intervals where the Event 3 indicator is zero.

Conclusions
As part of the development of technology for detecting epileptic seizures and differentiating epileptic and artifact events in Video EEG monitoring data, an algorithm for automatic detection and recognition of events is proposed. The algorithm is based on the analysis of quantitative characteristics of video frames and EEG wavelet spectrograms. The analysis of video sequences is focused on identifying groups of frames with high and low scene dynamics according to a measure calculated as the magnitude of the optical flow. Preliminary results of the analysis of real clinical data are presented; they show the efficiency of the proposed algorithm for differentiating epileptic seizures from movement and chewing. Further research will be aimed at combining EEG channels and applying patient tracking techniques such as [12] to improve the reliability of the proposed algorithm.