  • Publication
    Metadata only
    EEG-based emotion charting for Parkinson's disease patients using Convolutional recurrent neural networks and cross dataset learning
    (Elsevier, 2022)
    Muhammad Najam Dar; Muhammad Usman Akram; Khawaja Sajid Gul; Murugappan M.
    Electroencephalogram (EEG)-based emotion classification reflects the actual, intrinsic emotional state, enabling more reliable, natural, and meaningful human-computer interaction, with applications in entertainment consumption behavior, interactive brain-computer interfaces, and monitoring the psychological health of patients in e-healthcare. Key challenges of EEG-based emotion recognition in real-world applications are variations in experimental settings and cognitive health conditions. Parkinson's Disease (PD) is the second most common neurodegenerative disorder and impairs the recognition and expression of emotions. This deficit of emotional expression poses challenges for the healthcare services provided to PD patients. This study proposes the 1D-CRNN-ELM architecture, which combines a one-dimensional Convolutional Recurrent Neural Network (1D-CRNN) with an Extreme Learning Machine (ELM), is robust for emotion detection in PD patients, and supports cross-dataset learning across different emotions and experimental settings. In the proposed framework, after EEG preprocessing, the trained CRNN can be used as a feature extractor with ELM as the classifier, and the same trained CRNN can be used to learn new emotion sets by fine-tuning on other datasets. This paper also applies cross-dataset learning of emotions by training on PD patient datasets and fine-tuning on the publicly available AMIGOS and SEED-IV datasets, and vice versa. Random splitting of training and test data with an 80-20 ratio resulted in accuracies of 97.75% for AMIGOS, 83.20% for PD, and 86.00% for healthy controls (HC) with six basic emotion classes. Fine-tuning the trained architecture with the four emotions of the SEED-IV dataset results in 92.5% accuracy. To validate the generalization of our results, leave-one-subject (patient)-out cross-validation is also incorporated, achieving mean accuracies of 95.84% for AMIGOS, 75.09% for PD, 77.85% for HC, and 84.97% for SEED-IV. Only a 1-s segment of EEG from 14 channels is sufficient to detect emotions at this performance level. The proposed method outperforms state-of-the-art studies in classifying EEG-based emotions on publicly available datasets, provides cross-dataset learning, and validates the robustness of the deep learning framework for the real-world application of psychological healthcare monitoring of Parkinson's disease patients.
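    As a rough illustration of the ELM classification stage described in this abstract (not the authors' released code), the Python sketch below trains an extreme learning machine on feature vectors that are assumed to come from the trained 1D-CRNN applied to 1-s, 14-channel EEG segments; the hidden-layer size, tanh activation, and the random stand-in features in the demo are illustrative assumptions.

        import numpy as np

        class ELM:
            """Single-hidden-layer extreme learning machine: random, fixed input
            weights; output weights solved in closed form via the pseudoinverse."""
            def __init__(self, n_hidden=256, seed=0):
                self.n_hidden = n_hidden
                self.rng = np.random.default_rng(seed)

            def fit(self, X, y):
                n_classes = int(y.max()) + 1
                self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
                self.b = self.rng.normal(size=self.n_hidden)
                H = np.tanh(X @ self.W + self.b)      # hidden-layer activations
                T = np.eye(n_classes)[y]              # one-hot targets
                self.beta = np.linalg.pinv(H) @ T     # closed-form output weights
                return self

            def predict(self, X):
                return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)

        # Demo with random stand-ins for CRNN features and six emotion classes.
        rng = np.random.default_rng(1)
        X, y = rng.normal(size=(200, 64)), rng.integers(0, 6, size=200)
        model = ELM().fit(X[:160], y[:160])
        print("hold-out accuracy:", (model.predict(X[160:]) == y[160:]).mean())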
    WOS© Citations: 15 · Scopus© Citations: 41
  • Publication
    Metadata only
    Abnormal EEG detection using time-frequency images and convolutional neural network
    (Springer, 2023)
    Rishabh Bajpai; Prince, A. Amalin; Murugappan, M.
    In the process of diagnosing neurological disorders, neurologists often study the brain activity of the patient recorded in the form of an electroencephalogram (EEG). Identifying an abnormal EEG serves as a preliminary indicator before specialized testing to determine the neurological disorder. Traditional identification methods involve manual perusal of the EEG signals. This method is relatively slow and tedious, requires trained neurologists, and delays the treatment plan. Therefore, the development of an automated abnormal EEG detection system is essential. In this study, we propose a method based on the short-time Fourier transform (STFT), which is a time-frequency (TF) representation, and deep convolutional neural networks (CNNs) to detect abnormal EEGs. First, the filtered time-series EEG signals are converted into TF images by applying the STFT. Then, the images are fed to three popular configurable CNN structures, namely DenseNet, SeizureNet, and Inception-ResNet-V2, to extract deep learned features. Finally, an extreme learning machine (ELM)-based classifier classifies the input TF images. The proposed STFT-based CNN method is evaluated using the publicly available Temple University Hospital (TUH) abnormal EEG corpus. The experiments showed that the SeizureNet-ELM combination achieved an average (fivefold cross-validation) accuracy, specificity, sensitivity, and F1-score of 85.87%, 88.43%, 83.23%, and 0.858, respectively. The results demonstrate that the proposed framework may aid clinicians in abnormal EEG detection for early treatment planning.
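    A minimal Python sketch of the STFT time-frequency image step described above (not the authors' code); the 250 Hz sampling rate, window length, and overlap are illustrative assumptions, and the CNN feature extractor and ELM classifier are omitted.

        import numpy as np
        from scipy.signal import stft

        def eeg_to_tf_image(x, fs=250, nperseg=125, noverlap=62):
            """Convert one filtered EEG channel (1-D array) into a normalised
            log-magnitude time-frequency image via the short-time Fourier transform."""
            f, t, Zxx = stft(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
            tf = 20 * np.log10(np.abs(Zxx) + 1e-12)                 # dB scale
            tf = (tf - tf.min()) / (tf.max() - tf.min() + 1e-12)    # scale to [0, 1]
            return tf                                               # (freq_bins, time_frames)

        # Synthetic 10-s example; real use would feed TUH EEG channels and pass the
        # resulting images to the CNN feature extractor.
        img = eeg_to_tf_image(np.random.default_rng(0).normal(size=2500))
        print(img.shape)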
    Scopus© Citations: 3
  • Publication
    Open Access
    Comprehensive analysis of feature extraction methods for emotion recognition from multichannel EEG recordings
    (MDPI, 2023)
    Thangavel, Prasanth; Thomas, John; Fogarty, Jack S.
    Advances in signal processing and machine learning have expedited electroencephalogram (EEG)-based emotion recognition research, and numerous EEG signal features have been investigated to detect or characterize human emotions. However, most studies in this area have used relatively small monocentric data and focused on a limited range of EEG features, making it difficult to compare the utility of different sets of EEG features for emotion recognition. This study addressed that gap by comparing the classification accuracy (performance) of a comprehensive range of EEG feature sets for identifying emotional states, in terms of valence and arousal. The classification accuracy of five EEG feature sets was investigated, including statistical features, fractal dimension (FD), Hjorth parameters, higher order spectra (HOS), and those derived using wavelet analysis. Performance was evaluated using two classifier methods, support vector machine (SVM) and classification and regression tree (CART), across five independent and publicly available datasets linking EEG to emotional states: MAHNOB-HCI, DEAP, SEED, AMIGOS, and DREAMER. The FD-CART feature-classification method attained the best mean classification accuracy for valence (85.06%) and arousal (84.55%) across the five datasets. The stability of these findings across the five different datasets also indicates that FD features derived from EEG data are reliable for emotion recognition. The results may lead to the possible development of an online feature extraction framework, thereby enabling the development of a real-time EEG-based emotion recognition system.
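    As a hedged illustration of the best-performing FD-CART combination, the sketch below computes Higuchi's fractal dimension per EEG segment (one common FD estimator; the paper evaluates FD features more generally) and pairs it with scikit-learn's DecisionTreeClassifier standing in for CART; the kmax value and the commented pipeline are assumptions.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier   # CART-style classifier

        def higuchi_fd(x, kmax=10):
            """Higuchi fractal dimension of a 1-D EEG segment: slope of
            log(curve length L(k)) versus log(1/k)."""
            x, N, L = np.asarray(x, dtype=float), len(x), []
            for k in range(1, kmax + 1):
                Lm = []
                for m in range(k):
                    idx = np.arange(1, (N - m - 1) // k + 1)
                    if idx.size == 0:
                        continue
                    length = np.abs(x[m + idx * k] - x[m + (idx - 1) * k]).sum()
                    Lm.append(length * (N - 1) / (idx.size * k) / k)
                L.append(np.mean(Lm))
            slope, _ = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(L), 1)
            return slope

        # Hypothetical pipeline: one FD value per channel per segment, then CART.
        # X = np.array([[higuchi_fd(ch) for ch in segment] for segment in segments])
        # clf = DecisionTreeClassifier().fit(X_train, y_train)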
    WOS© Citations: 5 · Scopus© Citations: 26
  • Publication
    Metadata only
    Automated recognition of teacher and student activities in the classroom environment: A deep learning framework
    (IEEE, 2024)
    Prince, A. Amalin; Murugappan, M.
    Teacher and student behavior during class is often observed by education professionals to evaluate and develop a teacher’s skill, adapt lesson plans, or monitor and regulate student learning and other activities. Traditional methods rely on accurate manual techniques involving in-person field observations, questionnaires, or the subjective annotation of video recordings. These techniques are time-consuming and typically demand observation and coding by a trained professional. Thus, developing automated tools for detecting classroom behaviors using artificial intelligence could greatly reduce the resources needed to monitor teacher and student behaviors for research, practice, or professional development purposes. This paper presents an automated framework using a deep learning approach to recognize classroom activities encompassing both student and teacher behaviors from classroom videos. The proposed method utilizes a long-term recurrent convolutional network (LRCN), which captures the spatiotemporal features from the video frames. For evaluation purposes, experiments were carried out on a subset of EduNet and an independent dataset composed of classroom videos collected from the internet. The proposed LRCN system achieved a maximum average accuracy (ACC) of 93.17%, precision (PRE) of 94.21%, recall (REC) of 91.76%, and F1-Score (F1-S) of 92.60% on the EduNet dataset when estimated by 5-fold cross-validation. On independent testing, the system achieved ACC = 83.33%, PRE = 89.25%, REC = 83.32%, and F1-S = 82.14%, supporting its reliability. The study has significant methodological implications for the automated recognition of classroom activities and may assist in providing information about classroom behaviors that are worthy of inclusion in the evaluation of education quality.
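    A minimal Keras sketch of an LRCN of the kind described above (not the authors' network); the frame count, input resolution, layer sizes, and the seven-class output are illustrative assumptions.

        from tensorflow.keras import layers, models

        def build_lrcn(n_frames=20, height=64, width=64, channels=3, n_classes=7):
            """Small LRCN: a per-frame CNN applied with TimeDistributed, followed by
            an LSTM that models temporal dynamics across the frame sequence."""
            return models.Sequential([
                layers.Input(shape=(n_frames, height, width, channels)),
                layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu")),
                layers.TimeDistributed(layers.MaxPooling2D(2)),
                layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu")),
                layers.TimeDistributed(layers.MaxPooling2D(2)),
                layers.TimeDistributed(layers.Flatten()),
                layers.LSTM(64),
                layers.Dense(n_classes, activation="softmax"),
            ])

        model = build_lrcn()
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.summary()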
  • Publication
    Open Access
    Automated classification of student’s emotion through facial expressions using transfer learning
    (The International Academic Forum, 2023)
    Ratnavel Rajalakshmi; Venkata Dhanvanth; Fogarty, Jack S.
    Emotions play a critical role in learning. Having a good understanding of student emotions during class is important for students and teachers to improve their teaching and learning experiences. For instance, analyzing students’ emotions during learning can provide teachers with feedback regarding student engagement, enabling teachers to make pedagogical decisions to enhance student learning. This information may also provide students with valuable feedback for improved emotion regulation in learning contexts. In practice, it is not easy for teachers to monitor all students while teaching. In this paper, we propose an automated framework for classifying students’ emotions from their facial expressions, recognizing academic affective states including amusement, anger, boredom, confusion, engagement, interest, relief, sadness, and surprise. The methodology includes dataset construction, pre-processing, and a deep convolutional neural network (CNN) framework based on a pre-trained and configured VGG-19 as a feature extractor with a multi-layer perceptron (MLP) as the classifier. To evaluate the performance, we created a dataset of the aforementioned facial expressions from three publicly available datasets that include academic emotions, DAiSEE, Raf-DB, and EmotioNet, as well as classroom videos from the internet. The configured VGG-19 CNN system yields a mean classification accuracy, sensitivity, and specificity of 82.73% ± 2.26, 82.55% ± 2.14, and 97.67% ± 0.45, respectively, when estimated by 5-fold cross-validation. The results show that the proposed framework can effectively classify student emotions in class and may provide a useful tool to help teachers understand the emotional climate in their class, thus enabling them to make more informed pedagogical decisions to improve student learning experiences.
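    A minimal Python sketch of the VGG-19 feature extractor with an MLP head described above (not the authors' code); the ImageNet weights, 224 x 224 input size, global-average pooling, and MLP size are illustrative assumptions, and scikit-learn's MLPClassifier stands in for the paper's MLP.

        from tensorflow.keras.applications import VGG19
        from tensorflow.keras.applications.vgg19 import preprocess_input
        from sklearn.neural_network import MLPClassifier

        # Frozen ImageNet-pretrained VGG-19 (no top) with global average pooling,
        # used purely as a 512-D feature extractor for cropped face images.
        backbone = VGG19(weights="imagenet", include_top=False, pooling="avg")
        backbone.trainable = False

        def extract_features(faces):
            """faces: float array of shape (n, 224, 224, 3), pixel values 0-255."""
            return backbone.predict(preprocess_input(faces.copy()), verbose=0)

        # MLP head mapping the deep features to the nine academic affective states.
        clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500)
        # clf.fit(extract_features(train_faces), train_labels)
        # preds = clf.predict(extract_features(test_faces))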
  • Publication
    Metadata only
    Biomedical signals based computer-aided diagnosis for neurological disorders
    (Springer, 2022)
    Murugappan, M.
    Biomedical signals provide unprecedented insight into abnormal or anomalous neurological conditions. The computer-aided diagnosis (CAD) system plays a key role in detecting neurological abnormalities and improving diagnosis and treatment consistency in medicine. This book covers different aspects of biomedical signal-based systems used in the automatic detection/identification of neurological disorders. Several biomedical signals are introduced and analyzed, including electroencephalogram (EEG), electrocardiogram (ECG), heart rate (HR), magnetoencephalogram (MEG), and electromyogram (EMG). It explains the role of the CAD system in processing biomedical signals and its application to the diagnosis of neurological disorders. The book provides the basics of biomedical signal processing, optimization methods, and machine learning/deep learning techniques used in designing CAD systems for neurological disorders.
    Scopus© Citations: 1
  • Publication
    Metadata only
    Automated multi-class seizure-type classification system using EEG signals and machine learning algorithms
    (IEEE, 2024)
    Abirami, S.; Tikaram; Kathiravan, M.; Menon, Ramshekhar N.; Thomas, John; Karthick, P. A.; Prince, A. Amalin; Ronickom, Jac Fredo Agastinose
    Epilepsy is a chronic brain disorder characterized by recurrent unprovoked seizures. The treatment for epilepsy is influenced by the types of seizures. Therefore, developing a reliable, explainable, and automated system to identify seizure types is necessary. This study aims to automate the classification of five seizure types: focal non-specific, generalized, complex partial, absence, and tonic-clonic, using electroencephalogram (EEG) signals and machine learning algorithms. The EEG signals of 2933 seizures from 327 patients were obtained from the publicly available Temple University Hospital dataset. Initially, the signals were preprocessed using a standard pipeline, and 110 features from the time, frequency, and time-frequency domains were computed from each seizure. Further, the features were ranked using a statistical test and the eXtreme Gradient Boosting (XGBoost) algorithm to identify the significant features. We built binary and multiclass seizure-type classification systems using the identified features and machine learning algorithms. Our study revealed that the EEG band powers between 11–13 Hz and 27–29 Hz, the intrinsic mode function (IMF) band power between 19–21 Hz, and the delta band (1–4 Hz) played a crucial role in discriminating the seizures. We achieved average accuracies of 88.21% and 69.43% for binary and multiclass seizure-type classification, respectively, using the XGBoost classifier. We also found that the combination of features performed well compared to any single domain. This automated system has the potential to aid neurologists in the diagnosis of epileptic seizure types. The proposed methodology can be applied alongside the established clinical approach of visual evaluation for the classification of seizure types.
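    A hedged Python sketch of the band-power features highlighted above (not the authors' full 110-feature pipeline); the 250 Hz sampling rate and Welch window length are assumptions, and the commented XGBoost lines only indicate how ranking by feature importance could be done.

        import numpy as np
        from scipy.signal import welch
        from scipy.integrate import trapezoid

        def band_power(x, fs, lo, hi):
            """Power of signal x in the [lo, hi] Hz band, from Welch's PSD."""
            f, pxx = welch(x, fs=fs, nperseg=min(len(x), 2 * fs))
            mask = (f >= lo) & (f <= hi)
            return trapezoid(pxx[mask], f[mask])

        def seizure_band_features(eeg, fs=250):
            """eeg: (n_channels, n_samples). Returns channel-averaged band powers
            for three of the discriminative bands noted in the abstract."""
            bands = [(1, 4), (11, 13), (27, 29)]          # delta, 11-13 Hz, 27-29 Hz
            return np.array([np.mean([band_power(ch, fs, lo, hi) for ch in eeg])
                             for lo, hi in bands])

        # Ranking / classification with XGBoost, as named in the study:
        # from xgboost import XGBClassifier
        # clf = XGBClassifier().fit(X_train, y_train)
        # ranked = np.argsort(clf.feature_importances_)[::-1]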
  • Publication
    Metadata only
    Improving automated diagnosis of epilepsy from EEGs beyond IEDs
    (IOP Publishing, 2022)
    Thangavel, Prasanth; Thomas, John; Sinha, Nishant; Peh, Wei Yan; Cash, Sydney S.; Chaudhari, Rima; Karia, Sagar; Jin, Jing; Rathakrishnan, Rahul; Saini, Vinay; Nilesh Shah; Srivastava, Rohit; Tan, Yee-Leng; Westover, Brandon; Dauwels, Justin
    Objective: Clinical diagnosis of epilepsy relies partially on identifying Interictal Epileptiform Discharges (IEDs) in scalp electroencephalograms (EEGs). This process is expert-biased, tedious, and can delay the diagnosis procedure. Beyond automatically detecting IEDs, there are far fewer studies on automated methods to differentiate epileptic EEGs (potentially without IEDs) from normal EEGs. In addition, the diagnostic yield of a single EEG tends to be low. Consequently, there is a strong need for automated systems for EEG interpretation. Traditionally, epilepsy diagnosis relies heavily on IEDs. However, since not all epileptic EEGs exhibit IEDs, it is essential to explore IED-independent EEG measures for epilepsy diagnosis. The main objective is to develop an automated system for detecting epileptic EEGs, both with and without IEDs. To detect epileptic EEGs without IEDs, it is crucial to include EEG features in the algorithm that are not directly related to IEDs.

    Approach: In this study, we explore the background characteristics of interictal EEG for automated and more reliable diagnosis of epilepsy. Specifically, we investigate features based on univariate temporal measures (UTM), spectral, wavelet, Stockwell, connectivity, and graph metrics of EEGs, besides patient-related information (age and vigilance state). The evaluation is performed on a sizeable cohort of routine scalp EEGs (685 epileptic EEGs and 1229 normal EEGs) from five centers across Singapore, USA, and India.

    Main results: In comparison with the current literature, we obtained an improved Leave-One-Subject-Out (LOSO) cross-validation (CV) area under the curve (AUC) of 0.871 (Balanced Accuracy (BAC) of 80.9%) with a combination of 3 features (IED rate, and Daubechies and Morlet wavelets) for the classification of EEGs with IEDs vs. normal EEGs. The IED-independent feature UTM achieved a LOSO CV AUC of 0.809 (BAC of 74.4%). The inclusion of IED-independent features also helps to improve the EEG-level classification of epileptic EEGs with and without IEDs vs. normal EEGs, achieving an AUC of 0.822 (BAC of 77.6%) compared to 0.688 (BAC of 59.6%) for classification based only on the IED rate. Specifically, the addition of IED-independent features improved the BAC by 21% in detecting epileptic EEGs that do not contain IEDs.

    Significance: These results pave the way towards automated detection of epilepsy. We are one of the first to analyse epileptic EEGs without IEDs, thereby opening up an underexplored option in epilepsy diagnosis.
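    A minimal Python sketch of Leave-One-Subject-Out cross-validation with a pooled AUC, as used for the evaluation above; the logistic-regression classifier is a stand-in (the study's actual models and feature sets differ), and X, y, and subjects are placeholders.

        import numpy as np
        from sklearn.model_selection import LeaveOneGroupOut
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        def loso_auc(X, y, subjects):
            """Leave-One-Subject-Out CV: all EEGs from the held-out subject are kept
            out of training; pooled out-of-fold probabilities give a single AUC."""
            scores = np.zeros(len(y), dtype=float)
            for tr, te in LeaveOneGroupOut().split(X, y, groups=subjects):
                clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
                scores[te] = clf.predict_proba(X[te])[:, 1]
            return roc_auc_score(y, scores)

        # X: EEG-level feature matrix (e.g. UTM, spectral, wavelet features);
        # y: 1 = epileptic, 0 = normal; subjects: one subject ID per EEG.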
    WOS© Citations: 8 · Scopus© Citations: 11
  • Publication
    Open Access
    Emotion recognition from spatio-temporal representation of EEG signals via 3D-CNN with ensemble learning techniques
    (MDPI, 2023)
    Baranwal, Arapan; Prince, A. Amalin; Murugappan, M.; Javeed Shaikh Mohammed
    The recognition of emotions is one of the most challenging issues in human-computer interaction (HCI). EEG signals are widely adopted for recognizing emotions because of their ease of acquisition, mobility, and convenience. Deep neural networks (DNNs) have provided excellent results in emotion recognition studies. Most studies, however, use other methods to extract handcrafted features, such as the Pearson correlation coefficient (PCC), Principal Component Analysis (PCA), Higuchi Fractal Dimension (HFD), etc., even though DNNs are capable of generating meaningful features. Furthermore, most earlier studies largely ignored spatial information between the different channels, focusing mainly on time-domain and frequency-domain representations. This study utilizes a pre-trained 3D-CNN MobileNet model with transfer learning on the spatio-temporal representation of EEG signals to extract features for emotion recognition. In addition to fully connected layers, hybrid models were explored using other decision layers such as multilayer perceptron (MLP), k-nearest neighbor (KNN), extreme learning machine (ELM), XGBoost (XGB), random forest (RF), and support vector machine (SVM). Additionally, this study investigates the effects of post-processing, or filtering, of the output labels. Extensive experiments were conducted on the SJTU Emotion EEG Dataset (SEED) (three classes) and SEED-IV (four classes) datasets, and the results obtained were comparable to the state-of-the-art. Based on the conventional 3D-CNN with ELM classifier, the SEED and SEED-IV datasets showed maximum accuracies of 89.18% and 81.60%, respectively. Post-filtering improved the emotion classification performance of the hybrid 3D-CNN with ELM model to 90.85% and 83.71% for the SEED and SEED-IV datasets, respectively. Accordingly, spatio-temporal features extracted from the EEG, along with ensemble classifiers, were found to be the most effective in recognizing emotions compared to state-of-the-art methods.
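    A minimal Python sketch of one plausible form of the output-label post-filtering investigated above (the abstract does not specify the filter); a sliding-window majority vote with an assumed window length is shown.

        import numpy as np
        from collections import Counter

        def mode_filter(labels, window=5):
            """Temporal post-filtering of predicted emotion labels: each label is
            replaced by the majority label in a sliding window, which smooths
            isolated misclassifications between consecutive EEG segments."""
            labels = np.asarray(labels)
            half = window // 2
            out = labels.copy()
            for i in range(len(labels)):
                seg = labels[max(0, i - half): i + half + 1]
                out[i] = Counter(seg.tolist()).most_common(1)[0][0]
            return out

        print(mode_filter([0, 0, 2, 0, 0, 1, 1, 1, 0, 1], window=3))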
    WOS© Citations: 3 · Scopus© Citations: 10
  • Publication
    Metadata only
    A machine learning framework for classroom EEG recording classification: Unveiling learning-style patterns
    (MDPI, 2024)
    Chadha, Shivam; Prince, A. Amalin; Murugappan, M.; Md. Sakib Islam; Md. Shaheenur Islam Sumon; Chowdhury, Muhammad E. H.
    Classification of classroom EEG recordings has the capacity to significantly enhance comprehension and learning by revealing complex neural patterns linked to various cognitive processes. Electroencephalography (EEG) in academic settings allows researchers to study brain activity while students are in class, revealing learning preferences. The purpose of this study was to develop a machine learning framework to automatically classify different learning-style EEG patterns in real classroom environments. Method: In this study, a set of EEG features was investigated, including statistical features, fractal dimension, higher-order spectra, entropy, and a combination of all sets. Three machine learning classifiers, random forest (RF), K-nearest neighbor (KNN), and multilayer perceptron (MLP), were used to evaluate performance. The proposed framework was evaluated on a real classroom EEG dataset comprising recordings from different teaching blocks: reading, discussion, lecture, and video. Results: The findings revealed that statistical features are the most sensitive feature set for distinguishing learning patterns from EEG. The combination of statistical features and the RF classifier achieved the best overall average accuracy of 78.45% when estimated by fivefold cross-validation. Conclusions: Our results suggest that EEG time-domain statistics play a substantial role and are more reliable for internal state classification. This study highlights the importance of using EEG signals in the education context, opening a path for educational automation research and development.
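    A minimal Python sketch of the statistical-features-plus-RF approach that performed best above (not the authors' code); the particular statistics, number of trees, and the commented fivefold evaluation call are illustrative assumptions.

        import numpy as np
        from scipy.stats import skew, kurtosis
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        def statistical_features(segment):
            """segment: (n_channels, n_samples) EEG window. Returns per-channel
            time-domain statistics concatenated into one feature vector."""
            return np.concatenate([segment.mean(axis=1), segment.std(axis=1),
                                   skew(segment, axis=1), kurtosis(segment, axis=1),
                                   np.ptp(segment, axis=1)])

        # X: rows built with statistical_features(); y: teaching-block labels
        # (reading, discussion, lecture, video). Fivefold CV with a random forest:
        # acc = cross_val_score(RandomForestClassifier(n_estimators=200), X, y, cv=5).mean()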