ARTIFICIAL INTELLIGENCE-BASED NEUROFEEDBACK

A new approach to the design of neurofeedback systems based on Artificial Intelligence (AI) tools is proposed. The paper describes the concept of control models of biological neural networks, the setup (equipment and software tools) developed at IPME RAS to implement the proposed concept, and the AI methods and programs proposed for use.


Introduction
In recent years, there has been rapid growth in the application of methods and means of artificial intelligence (AI). For a number of problems, the results obtained in the AI paradigm are comparable or even superior to those obtained by humans [Silver et al., 2017]. Exploring the possibilities and improving the efficiency of interaction between natural and artificial intelligence is now on the agenda. The first steps in this direction were made long before the recent boom of artificial intelligence; they gave rise to the concepts of biofeedback and neurofeedback [Kropotov, 2008; Sitaram et al., 2017]. In systems with biofeedback, information about the state of the body (in particular, about the state of the nervous system, e.g. the cerebral cortex) is transmitted to a computer using special tools for further processing and classification. The results of the classification are shown to the person in such a way that they can assess the proximity of their state to a certain area corresponding to desirable or, conversely, undesirable states. Non-invasive means are usually used for collecting information: EEG, MEG, fMRI, etc.
There are many publications reporting practical applications of the approach. In [Ovod et al., 2012] the authors proposed to form the neurofeedback signal on the basis of an adaptive model of brain rhythms, adjusted by EEG signals using methods developed in cybernetics. Adaptive formation of neurofeedback signals makes it possible not only to take into account changes in the state of the subject, but also to adjust the recommendations of the system as the subject's state approaches or moves away from the target areas. The adaptive model used in [Ovod et al., 2012], like other adaptive models used to describe dynamics and control in neural network models [Plotnikov et al., 2017; Gorshkov et al., 2017], is highly aggregated and contains a relatively small number of parameters. The ability of such models to describe or predict complex signals is relatively limited. At the same time, increasing the informativeness of the EEG data requires increasing the number of leads and the duration of the samples.
A matter of interest is the use of more complex adaptive signal models based on deep neural networks and other models and methods of modern artificial intelligence. Until now, studies on the construction of AI-based neurofeedback systems have been almost absent worldwide. The use of neurofeedback based on AI methods and tools can significantly improve the efficiency of interaction between natural and artificial intelligence. In addition, it will provide new opportunities for improving brain-computer interfaces (BCI). This paper describes a new approach to the construction of neurofeedback systems based on AI. The following sections describe the concept of control models of biological neural networks developed at IPME RAS, the set of equipment and software tools created to implement the proposed concept, and the AI methods and programs proposed for use.

Neurofeedback
The goal of clinical and behavioral neuroscience is to observe and understand the mechanisms of the nervous system in order to control behavioral neural processes and restore these functions if they are disturbed. To solve this problem, a neurofeedback approach can be used: a psychophysiological procedure in which subjects are presented with measures of their own neural activity with the goal of regulating it online [Sitaram et al., 2017].
The best way to register the activity of neurons is to use invasive methods. Depending on the size and impedance of the electrodes, it is possible to register the activity of groups of neurons or even single neurons. These methods allow disabled people (with epilepsy, Parkinson's disease, essential tremor) to control an exoskeleton [Takasaki et al., 2018] or to type text [Arvaneh et al., 2018]. However, invasive methods have several problems. First, the procedure is expensive. Second, implanting electrodes into the cerebral cortex is a complex neurosurgical operation that damages the neural tissue around the implant. Third, over time the implant becomes encapsulated by glial tissue, which degrades signal recording.
Another way to register brain activity is to use noninvasive methods. The most common is electroencephalography (EEG), i.e. the recording of brain activity using electrodes placed along the scalp. It is the most widespread and cheapest method and has the highest temporal resolution among noninvasive methods; its disadvantages are low spatial resolution and a noisy signal. EEG measures voltage fluctuations resulting from ionic currents within the neurons of the brain [Niedermeyer and Lopes da Silva, 2005]. Various patterns of electrical activity, known as brain waves, can be recognized by their amplitudes and frequencies. The frequency shows how quickly the waves oscillate, measured by the number of waves per second (Hz), and the amplitude represents the power of these waves, measured in microvolts [Marzbani et al., 2016].
The various frequency components are divided into delta (less than 4 Hz), theta (4-8 Hz), alpha (8-13 Hz), beta (13-30 Hz) and gamma (30-100 Hz) rhythms, where each one represents a specific physiological function. Delta and theta rhythms are connected with the sleep state of a person. Alpha rhythm is represented mainly in the occipital areas. Its amplitude significantly increases while the eyes are closed; it is also suppressed by mental stress and increases with relaxation. This rhythm is produced when arousal circulates between the cortex and the thalamus. Beta and gamma rhythms increase with mental activity and alertness and have lower amplitude compared to the alpha rhythm.
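The rhythm bands above can be estimated from a raw EEG trace by summing spectral power inside each frequency range. The following Python sketch (the authors' actual software is written in MATLAB; all names here are illustrative) computes per-band power with a simple periodogram:

```python
import numpy as np

np.random.seed(0)

# EEG rhythm bands in Hz, as listed above
BANDS = {
    "delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
    "beta": (13, 30), "gamma": (30, 100),
}

def band_powers(signal, fs):
    """Return the power of each rhythm band from a simple periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# Example: a synthetic 10 Hz oscillation plus noise; alpha should dominate
fs = 250                      # assumed sampling rate, Hz
t = np.arange(0, 4, 1.0 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(len(t))
p = band_powers(x, fs)
assert max(p, key=p.get) == "alpha"
```

In the actual setup such band powers would be computed over a sliding window so that the feedback signal tracks the subject's state online.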
Our goal is to develop a complex that allows a person to control a vehicle using the neurofeedback paradigm, i.e. to control it by changing his brain activity. Delta and theta rhythms are not suitable for this problem because they are observed when the person is asleep. Beta and gamma rhythms have low amplitude compared to the noise, so they are difficult to use for control. The most appropriate rhythm for the posed problem is the alpha rhythm. However, a person cannot control a vehicle with his eyes closed. For this purpose one can use the so-called sensorimotor rhythm (mu rhythm): an EEG rhythm in the alpha frequency range, observed over the central and centroparietal regions of the cerebral cortex in a relaxed state. The mu rhythm is blocked by movement, observation of movement, and kinesthetic or visual imagery of movement [Neuper et al., 2005].

Experimental Setup
The development of an experimental setup is a key stage of brain-computer interface (BCI) design. Besides the software implementation of mathematical algorithms, the experimental setup should include high-quality hardware. In our case, we used the following components: a wireless electroencephalograph Mitsar-EEG-SmartBCI by Mitsar Company (32 analog channels) [Mitsar], a mobile robot with a programmable controller by TRIK Company [TRIK], and a PC with the necessary software developed by our group. The practical implementation and structure of the experimental setup are shown in Fig. 1 and 2. The software has been developed in the MATLAB environment and consists of five basic units: EEG signal acquisition, frequency filtering, spatial filtering, adaptive estimation of the model parameters, and robot connection (Fig. 2).
The EEG signal acquisition unit transfers the electrical potentials from the encephalograph electrodes to the PC. The received EEG signal is passed to the frequency filtering unit. This unit contains a Chebyshev type I bandpass filter, which increases the signal-to-noise ratio and selects the required frequency range of the EEG spectrum [Smetanin et al., 2018a; Smetanin et al., 2018b]. The spatial filtering unit is based on the CSP (common spatial patterns) algorithm and is applied to separate EEG signals that correspond to different states of the subject [Takasaki et al., 2018]. For example, let X_1 and X_2 be time series (of lengths T_1 and T_2, respectively) corresponding to two states of the subject; in our study, these states are right- and left-hand movements. The regularized class covariance matrices of the CSP algorithm can then be written as

C_i = X_i X_i^T / tr(X_i X_i^T) + λ I_n,  i = 1, 2,

where n is the number of EEG channels, I_n is the n × n identity matrix, and λ is a gain. From the solution of the generalized eigenvalue problem C_1 v = α C_2 v we find the eigenvalues α and eigenvectors v. The eigenvector corresponding to the largest eigenvalue is taken as the vector of CSP filter coefficients. The next unit, adaptive estimation of the model parameters, estimates the model parameters on the basis of the incoming signal. In our investigation, we use a vector autoregressive (VAR) model as the adaptive mathematical model [Ovod et al., 2012]. The control signal is then generated in the control generation unit by comparing the VAR model output with the results of the CSP algorithm. The obtained control signal is transmitted to the robot, which starts moving in either the left or the right direction (according to the subject's current state). Some units of the developed software have a certificate of state registration of software [Stepanenko et al., 2019].
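The CSP step above can be sketched in a few lines. This is a Python illustration rather than the authors' MATLAB implementation; the trace normalization of the covariances is a standard CSP convention assumed here, since the exact normalization is not spelled out in the text:

```python
import numpy as np

def csp_filter(X1, X2, lam=1e-6):
    """Compute a single CSP spatial filter from two classes of EEG data.

    X1, X2 : arrays of shape (n_channels, T_i), one per class.
    lam    : regularization gain (the λ on the identity matrix I_n).
    Returns the eigenvector of C1 v = α C2 v with the largest α.
    """
    n = X1.shape[0]
    # Trace-normalized covariance of each class, regularized with λ I_n
    C1 = X1 @ X1.T / np.trace(X1 @ X1.T) + lam * np.eye(n)
    C2 = X2 @ X2.T / np.trace(X2 @ X2.T) + lam * np.eye(n)
    # Generalized eigenproblem C1 v = α C2 v  <=>  (C2^-1 C1) v = α v
    alphas, V = np.linalg.eig(np.linalg.solve(C2, C1))
    w = np.real(V[:, np.argmax(np.real(alphas))])
    return w / np.linalg.norm(w)

# Usage: the filter maximizes class-1 variance relative to class-2 variance.
# Channel 0 is made three times louder in class 1, so the filter picks it out.
rng = np.random.default_rng(0)
X1 = rng.standard_normal((4, 500)) * np.array([[3.0], [1], [1], [1]])
X2 = rng.standard_normal((4, 500))
w = csp_filter(X1, X2)
assert np.argmax(np.abs(w)) == 0
```

Projecting incoming multichannel EEG onto w yields a one-dimensional signal whose variance discriminates between the two movement classes.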

Results and Discussion
The sensorimotor rhythm recognition experiment was performed on one trained test subject (male, 28 years old). The spatial filter was trained on alternating wrist rotations of the left and right hand; recognition was tested both on actual rotation and on imagined rotation. The results are consistent with the observations of Neuper [Neuper et al., 2005], namely that movement is recognized better than its imagination. In our case: rotation of the right hand, recognition 61%; imagined rotation of the right hand, 52%; rotation of the left hand, 82%; imagined rotation of the left hand, 68%.
The accuracy of motion recognition is not sufficient to solve the posed problem of controlling the robot. The accuracy depends significantly on the subject and on whether he is trained to control his brain activity. Another problem is that movements of the eyes and facial muscles affect the recorded signal. As a result, there is a real possibility of learning to control the vehicle with the muscles and eyes rather than with the brain. To overcome these difficulties, we propose to use modern methods of pattern recognition based on artificial intelligence.

Artificial Intelligence
Artificial Intelligence (AI) and machine learning (ML) technologies have been developing rapidly in recent years. The traditional machine learning approach consists of two steps: feature engineering and model fitting/training. Feature engineering is the process of selecting and transforming the key information contained in the raw data. The obtained features serve as the input for a machine learning model. The model has a number of parameters that are tuned ('trained'), typically by some iterative process, so that a given goal is achieved. For example, one might train a model that maps input features to a fixed set of labels; this task is called classification. In the case of our setup, these labels might be 'right hand rotation' and 'left hand rotation'.
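The two-step scheme above can be illustrated with a minimal sketch, assuming synthetic two-dimensional feature vectors standing in for engineered EEG features (e.g. mu-band power over the left and right motor cortex) and a logistic-regression classifier trained by gradient descent; all names and numbers here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 (feature engineering), faked with synthetic features:
# label 0 = 'left hand rotation', label 1 = 'right hand rotation'
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Step 2 (model fitting): logistic regression trained iteratively
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probability of class 1
    grad_w = X.T @ (p - y) / len(y)      # gradient of the log-loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w                    # gradient-descent update
    b -= 0.5 * grad_b

accuracy = np.mean(((X @ w + b) > 0) == y)
assert accuracy > 0.9
```

The quality of such a classifier is bounded by the quality of the engineered features, which is exactly the limitation discussed next.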
Feature engineering requires strong domain knowledge and high-quality raw data. EEG data have a low SNR (signal-to-noise ratio), making it challenging to extract reasonable features. Although several preprocessing methods have been developed to decrease the noise level, they are time-consuming, unsuitable for online control design, and may cause loss of useful information. Besides that, EEG signals are affected by eye movements and facial muscles, which may be reflected in the extracted features.
In recent years an alternative machine learning approach has gained popularity. This approach, called deep learning, does not split feature engineering and learning into two separate stages but trains the features and the classification model at the same time. In other words, deep learning models learn feature representations directly from the raw data.

The key point of our proposal is the extension of the adaptive model and adaptive parameter estimation in the scheme of Fig. 2 to an AI model and machine learning classification algorithms, see Fig. 3. The deep learning paradigm seems the most promising for this classification task because of the feature engineering challenges discussed above. One issue with the deep learning approach is its need for large volumes of raw data. To address this problem, we propose to use data from public sources [EEG-datasets; EEG/ERP data] to pretrain our model [Erhan et al., 2010]. This model will serve as the basis for classification models trained on our labeled data. We can start with binary classification ('right hand rotation', 'left hand rotation') and add new classes as more data are collected, without retraining the base model.
There are many architectures of deep learning models. So-called Convolutional Neural Networks (CNN) capture the spatial structure of the data, while Recurrent Neural Networks (RNN) are well suited to capturing temporal structure. Since EEG signals have both spatial and temporal components, one might consider hybrid models consisting of CNN and RNN layers.
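The data flow through such a hybrid model can be sketched without any deep learning framework. The following Python toy example (random, untrained weights; the layer sizes are assumptions for illustration only) shows a 1D convolution extracting spatial feature maps from multichannel EEG, followed by a simple recurrent layer summarizing their temporal dynamics:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid 1D convolution along time. x: (channels, T), kernels: (F, channels, K)."""
    F, C, K = kernels.shape
    T = x.shape[1] - K + 1
    out = np.zeros((F, T))
    for t in range(T):
        out[:, t] = np.tanh(np.tensordot(kernels, x[:, t:t + K],
                                         axes=([1, 2], [0, 1])))
    return out

def rnn_last_state(seq, Wx, Wh):
    """Simple tanh RNN over time; returns the final hidden state. seq: (F, T)."""
    h = np.zeros(Wh.shape[0])
    for t in range(seq.shape[1]):
        h = np.tanh(Wx @ seq[:, t] + Wh @ h)
    return h

# 8 EEG channels, 200 time samples; 4 conv feature maps, 6 recurrent units
x = rng.standard_normal((8, 200))
kernels = rng.standard_normal((4, 8, 5)) * 0.1
Wx = rng.standard_normal((6, 4)) * 0.1
Wh = rng.standard_normal((6, 6)) * 0.1

features = conv1d(x, kernels)         # CNN layer: spatial patterns over channels
h = rnn_last_state(features, Wx, Wh)  # RNN layer: temporal dynamics of the maps
logits = rng.standard_normal((2, 6)) @ h  # two classes: left / right rotation
```

In a real model the weights would, of course, be trained end to end; the sketch only shows how the spatial (CNN) and temporal (RNN) stages compose.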
Although deep learning models extract features by themselves and can be applied directly to the raw data [Sors et al., 2018; Tang et al., 2017], one can still use a preprocessed signal as the model input [Johansen et al., 2016; Tsiouris et al., 2018]. Deep learning models are also often used as feature extractors [Ansari et al., 2018; Wang et al., 2017]. Our intent is to try all of these main approaches and compare their performance on the task of vehicle control.

Conclusion
Modern AI-based technology seems suitable for processing large EEG datasets. We propose to use it in the neurofeedback environment. This opens the way to improving the interaction between human and machine and to extending the area of neurofeedback applications. The next stage of the research will be an experimental study of the proposed approach.