Yuvaraj Rajamanickam
Preferred name: Yuvaraj Rajamanickam
Email: yuvaraj.rajamanickam@nie.edu.sg
Department: Office of Education Research (OER)
15 results (showing 1-10)
- Publication (Open Access): Comprehensive analysis of feature extraction methods for emotion recognition from multichannel EEG recordings
Advances in signal processing and machine learning have expedited electroencephalogram (EEG)-based emotion recognition research, and numerous EEG signal features have been investigated to detect or characterize human emotions. However, most studies in this area have used relatively small monocentric data and focused on a limited range of EEG features, making it difficult to compare the utility of different sets of EEG features for emotion recognition. This study addressed that gap by comparing the classification accuracy (performance) of a comprehensive range of EEG feature sets for identifying emotional states in terms of valence and arousal. The classification accuracy of five EEG feature sets was investigated: statistical features, fractal dimension (FD), Hjorth parameters, higher order spectra (HOS), and features derived using wavelet analysis. Performance was evaluated using two classifier methods, support vector machine (SVM) and classification and regression tree (CART), across five independent and publicly available datasets linking EEG to emotional states: MAHNOB-HCI, DEAP, SEED, AMIGOS, and DREAMER. The FD-CART feature-classification method attained the best mean classification accuracy for valence (85.06%) and arousal (84.55%) across the five datasets. The stability of these findings across the five datasets also indicates that FD features derived from EEG data are reliable for emotion recognition. The results may enable the development of an online feature extraction framework and, in turn, a real-time EEG-based emotion recognition system.
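Hjorth parameters are among the simpler feature sets compared in this study. As a minimal sketch (not the authors' implementation), the three parameters can be computed for a single EEG channel with NumPy:

```python
import numpy as np

def hjorth_parameters(x):
    """Hjorth activity, mobility, and complexity of a 1-D signal."""
    dx = np.diff(x)                      # first difference (discrete derivative)
    ddx = np.diff(dx)                    # second difference
    var_x, var_dx, var_ddx = np.var(x), np.var(dx), np.var(ddx)
    activity = var_x                     # signal power
    mobility = np.sqrt(var_dx / var_x)   # proxy for mean frequency
    complexity = np.sqrt(var_ddx / var_dx) / mobility  # proxy for bandwidth
    return activity, mobility, complexity

# A pure sinusoid has complexity close to 1; broadband noise scores higher.
t = np.linspace(0, 1, 512, endpoint=False)
sine = np.sin(2 * np.pi * 10 * t)
noise = np.random.default_rng(0).standard_normal(512)
print(hjorth_parameters(sine))
print(hjorth_parameters(noise))
```

Features like these are computed per channel (and often per epoch) before being fed to a classifier such as SVM or CART.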
WOS© Citations: 5 | Scopus© Citations: 26

- Publication (Metadata only): Assessing attentive monitoring levels in dynamic environments through visual neuro-assisted approach
Objective: This work aims to establish a framework for measuring the various attentional levels of a human operator in a real-time animated environment through a visual neuro-assisted approach.
Background: With the increasing trend of automation and remote operations, understanding human-machine interaction in dynamic environments can greatly help to improve performance and promote operational efficiency and safety.
Method: Two independent one-hour experiments were conducted with twenty participants, during which eye-tracking metrics and neural activity from the electroencephalogram (EEG) were recorded. Participants were required to exhibit attentive behaviour in one set and inattentive behaviour in the other. Two segments ("increasing flight numbers" and "relatively constant flight numbers") were also extracted to study differences in the participants' visual behaviour in relation to the number of aircraft.
Results: In the attentive behavioural study, participants showed higher fixation counts, fixation durations, numbers of aircraft spotted, and landing fixations, whereas those in the inattentive behavioural study showed higher zero-fixation frame counts. In the "increasing flight numbers" segments, a higher percentage of aircraft were spotted than in the "constant flight numbers" segments for both groups. Three parameters (number of aircraft spotted, landing fixations, and zero-fixation frame count) are newly established. As radar monitoring is a brain-engagement activity, positive EEG data were registered for all participants. A new Task Engagement Index (TEI) was also formulated to predict different attentional levels.
Conclusion: The results provide a refined, quantifiable tool to differentiate between attentive and inattentive monitoring behaviour in a real-time dynamic environment, which can be applied across various sectors.
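The abstract does not give the TEI formula. Purely as a hedged sketch of how such an index is typically built from EEG band powers, the widely used engagement ratio beta / (alpha + theta) is shown below; the band edges and the index itself are assumptions, not the paper's TEI:

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, fmin, fmax):
    """Approximate power of x in [fmin, fmax] Hz from the Welch PSD."""
    f, pxx = welch(x, fs=fs, nperseg=2 * fs)
    mask = (f >= fmin) & (f <= fmax)
    return pxx[mask].sum() * (f[1] - f[0])

def engagement_index(x, fs):
    """Classic beta/(alpha+theta) engagement ratio; a stand-in, not the TEI."""
    theta = band_power(x, fs, 4, 8)
    alpha = band_power(x, fs, 8, 13)
    beta = band_power(x, fs, 13, 30)
    return beta / (alpha + theta)

fs = 256
t = np.arange(0, 10, 1 / fs)
relaxed = np.sin(2 * np.pi * 10 * t) + 0.2 * np.sin(2 * np.pi * 20 * t)  # alpha-dominant
engaged = 0.2 * np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 20 * t)  # beta-dominant
print(engagement_index(relaxed, fs), engagement_index(engaged, fs))
```

An index of this kind rises as beta-band power dominates, which is the usual signature of higher task engagement.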
Recommendation: The quantitative TEI paves the way for future studies of attentional levels by region and over time, as well as eye-signature studies relating visual task engagement and management to expertise levels. Factors relating to fatigue could also be investigated using the proposed TEI approach.

WOS© Citations: 1 | Scopus© Citations: 3

- Publication (Open Access): Automated classification of student's emotion through facial expressions using transfer learning (The International Academic Forum, 2023)
Ratnavel Rajalakshmi; Venkata Dhanvanth; Fogarty, Jack S.
Emotions play a critical role in learning. A good understanding of student emotions during class is important for students and teachers to improve their teaching and learning experiences. For instance, analyzing students' emotions during learning can provide teachers with feedback regarding student engagement, enabling them to make pedagogical decisions that enhance student learning. This information may also give students valuable feedback for improved emotion regulation in learning contexts. In practice, it is not easy for teachers to monitor all students while teaching. In this paper, we propose an automated framework for classifying students' emotions through their facial expressions and recognizing academic affective states, including amusement, anger, boredom, confusion, engagement, interest, relief, sadness, and surprise. The methodology includes dataset construction, pre-processing, and a deep convolutional neural network (CNN) framework based on VGG-19 (pre-trained and configured) as a feature extractor and a multi-layer perceptron (MLP) as a classifier. To evaluate performance, we created a dataset of the aforementioned facial expressions from three publicly available datasets that include academic emotions (DAiSEE, RAF-DB, and EmotioNet), as well as classroom videos from the internet. The configured VGG-19 CNN system yields a mean classification accuracy, sensitivity, and specificity of 82.73% ± 2.26, 82.55% ± 2.14, and 97.67% ± 0.45, respectively, when estimated by 5-fold cross-validation.
The results show that the proposed framework can effectively classify student emotions in class and may provide a useful tool to help teachers understand the emotional climate in their class, enabling them to make more informed pedagogical decisions to improve student learning experiences.

- Publication (Metadata only): Abnormal EEG detection using time-frequency images and convolutional neural network
In the process of diagnosing neurological disorders, neurologists often study the brain activity of the patient recorded in the form of an electroencephalogram (EEG). Identifying an abnormal EEG serves as a preliminary indicator before specialized testing to determine the neurological disorder. Traditional identification methods involve manual perusal of the EEG signals. This method is relatively slow and tedious, requires trained neurologists, and delays the treatment plan. Therefore, the development of an automated abnormal EEG detection system is essential. In this study, we propose a method based on the short-time Fourier transform (STFT), a time-frequency (TF) representation, and deep convolutional neural networks (CNNs) to detect abnormal EEGs. First, the filtered time-series EEG signals are converted into TF images by applying the STFT. Then, the images are fed to three popular configurable CNN structures, namely DenseNet, SeizureNet, and Inception-ResNet-V2, to extract deep learned features. Finally, an extreme learning machine (ELM)-based classifier classifies the input TF images. The proposed STFT-based CNN method is evaluated using the Temple University Hospital (TUH) abnormal EEG corpus, which is available in the public domain. The experiments showed that the SeizureNet-ELM model achieved an average (fivefold cross-validation) accuracy, specificity, sensitivity, and F1-score of 85.87%, 88.43%, 83.23%, and 0.858, respectively.
The results demonstrate that the proposed framework may aid clinicians in abnormal EEG detection for early treatment planning.
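The first stage of this pipeline, converting a filtered EEG time series into a TF image, can be sketched with SciPy; the sampling rate and window settings below are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np
from scipy.signal import stft

fs = 250                                  # assumed sampling rate (Hz)
rng = np.random.default_rng(1)
eeg = rng.standard_normal(fs * 10)        # 10 s of synthetic single-channel "EEG"

# STFT with 1 s windows and 50% overlap yields the time-frequency matrix
f, t, Z = stft(eeg, fs=fs, nperseg=fs, noverlap=fs // 2)

# Log-magnitude spectrogram: the 2-D "image" fed to a CNN feature extractor
tf_image = 20 * np.log10(np.abs(Z) + 1e-12)
print(tf_image.shape)                     # (frequency bins, time frames)
```

Each such image is then passed through a CNN backbone, and the resulting deep features go to the downstream classifier.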
Scopus© Citations: 3

- Publication (Metadata only): Improving automated diagnosis of epilepsy from EEGs beyond IEDs (IOP Publishing, 2022)
Thangavel, Prasanth; Thomas, John; Sinha, Nishant; Peh, Wei Yan; Cash, Sydney S.; Chaudhari, Rima; Karia, Sagar; Jin, Jing; Rathakrishnan, Rahul; Saini, Vinay; Nilesh Shah; Srivastava, Rohit; Tan, Yee-Leng; Westover, Brandon; Dauwels, Justin
Objective: Clinical diagnosis of epilepsy relies partially on identifying interictal epileptiform discharges (IEDs) in scalp electroencephalograms (EEGs). This process is expert-biased and tedious, and it can delay the diagnosis. Beyond automatic IED detection, there are far fewer studies on automated methods to differentiate epileptic EEGs (potentially without IEDs) from normal EEGs. In addition, the diagnostic yield of a single EEG tends to be low. Consequently, there is a strong need for automated systems for EEG interpretation. Traditionally, epilepsy diagnosis relies heavily on IEDs; however, since not all epileptic EEGs exhibit IEDs, it is essential to explore IED-independent EEG measures for epilepsy diagnosis. The main objective is to develop an automated system for detecting epileptic EEGs, both with and without IEDs. To detect epileptic EEGs without IEDs, it is crucial to include EEG features in the algorithm that are not directly related to IEDs.
Approach: In this study, we explore the background characteristics of interictal EEG for automated and more reliable diagnosis of epilepsy. Specifically, we investigate features based on univariate temporal measures (UTM), spectral, wavelet, Stockwell, connectivity, and graph metrics of EEGs, in addition to patient-related information (age and vigilance state). The evaluation is performed on a sizeable cohort of routine scalp EEGs (685 epileptic EEGs and 1229 normal EEGs) from five centers across Singapore, the USA, and India.

Main results: In comparison with the current literature, we obtained an improved leave-one-subject-out (LOSO) cross-validation (CV) area under the curve (AUC) of 0.871 (balanced accuracy (BAC) of 80.9%) with a combination of 3 features (IED rate, and Daubechies and Morlet wavelets) for the classification of EEGs with IEDs vs. normal EEGs. The IED-independent feature UTM achieved a LOSO CV AUC of 0.809 (BAC of 74.4%). The inclusion of IED-independent features also helps to improve the EEG-level classification of epileptic EEGs with and without IEDs vs. normal EEGs, achieving an AUC of 0.822 (BAC of 77.6%) compared to 0.688 (BAC of 59.6%) for classification based only on the IED rate. Specifically, the addition of IED-independent features improved the BAC by 21% in detecting epileptic EEGs that do not contain IEDs.
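Leave-one-subject-out CV, as used for the reported AUCs, holds out all EEGs from one subject per fold so no subject's data leaks between train and test sets. A minimal scikit-learn sketch on synthetic features (the classifier and data here are placeholders, not the study's models):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_subjects, per_subject = 10, 20
X = rng.standard_normal((n_subjects * per_subject, 5))   # placeholder EEG features
y = (X[:, 0] + 0.5 * rng.standard_normal(len(X)) > 0).astype(int)
groups = np.repeat(np.arange(n_subjects), per_subject)   # subject ID for each EEG

# Each fold trains on 9 subjects and evaluates on the held-out subject's EEGs
scores = cross_val_score(LogisticRegression(), X, y,
                         cv=LeaveOneGroupOut(), groups=groups, scoring="roc_auc")
print(scores.mean())
```

Grouping by subject is what makes the reported AUCs an honest estimate of performance on unseen patients.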
Significance: These results pave the way towards automated detection of epilepsy. We are among the first to analyse epileptic EEGs without IEDs, thereby opening up an underexplored option in epilepsy diagnosis.

WOS© Citations: 8 | Scopus© Citations: 11

- Publication (Metadata only): Automated recognition of teacher and student activities in the classroom environment: A deep learning framework
Teacher and student behavior during class is often observed by education professionals to evaluate and develop a teacher's skill, adapt lesson plans, or monitor and regulate student learning and other activities. Traditional methods rely on manual techniques involving in-person field observations, questionnaires, or the subjective annotation of video recordings. These techniques are time-consuming and typically demand observation and coding by a trained professional. Thus, developing automated tools for detecting classroom behaviors using artificial intelligence could greatly reduce the resources needed to monitor teacher and student behaviors for research, practice, or professional development purposes. This paper presents an automated framework using a deep learning approach to recognize classroom activities encompassing both student and teacher behaviors from classroom videos. The proposed method utilizes a long-term recurrent convolutional network (LRCN), which captures spatiotemporal features from the video frames. For evaluation purposes, experiments were carried out on a subset of EduNet and an independent dataset composed of classroom videos collected from the internet. The proposed LRCN system achieved a maximum average accuracy (ACC) of 93.17%, precision (PRE) of 94.21%, recall (REC) of 91.76%, and F1-score (F1-S) of 92.60% on the EduNet dataset when estimated by 5-fold cross-validation. On independent testing, the system achieved ACC = 83.33%, PRE = 89.25%, REC = 83.32%, and F1-S = 82.14%, which supports its reliability.
The study has significant methodological implications for the automated recognition of classroom activities and may assist in providing information about classroom behaviors that are worthy of inclusion in the evaluation of education quality.
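The 5-fold estimates of accuracy, precision, recall, and F1 reported above can be reproduced in spirit with scikit-learn; the features and classifier below are placeholders standing in for the LRCN's video representations and decision layer:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

# Stand-in features; in the paper these would be LRCN video embeddings
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

scoring = ["accuracy", "precision_macro", "recall_macro", "f1_macro"]
results = cross_validate(RandomForestClassifier(random_state=0), X, y,
                         cv=5, scoring=scoring)
for metric in scoring:
    print(metric, round(results[f"test_{metric}"].mean(), 3))
```

Reporting the mean across folds, as the abstract does, reduces the variance of a single train/test split.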
- Publication (Metadata only): Affective computing for learning in education: A systematic review and bibliometric analysis
Affective computing is an emerging area of education research with the potential to enhance educational outcomes. Despite the growing body of literature, there are still deficiencies and gaps in the domain of affective computing in education. In this study, we systematically review affective computing in the education domain. We queried four well-known research databases, namely the Web of Science Core Collection, IEEE Xplore, ACM Digital Library, and PubMed, using specific keywords for papers published between January 2010 and July 2023. Various relevant data items were extracted and classified based on a set of 15 extensive research questions. Following the PRISMA 2020 guidelines, a total of 175 studies were selected and reviewed from among the 3102 articles screened. The data show an increasing trend in publications within this domain. The most common research purpose is designing emotion recognition/expression systems. Conventional textual questionnaires remain the most popular channels for affective measurement. Classrooms are identified as the primary research environments, and the largest research sample group is university students. Learning domains are mainly associated with science, technology, engineering, and mathematics (STEM) courses. The bibliometric analysis reveals that most publications are affiliated with the USA. The studies are primarily published in journals, with the majority appearing in Frontiers in Psychology. Research gaps, challenges, and potential directions for future research are explored. This review synthesizes current knowledge regarding the application of affective computing in the education sector.
This knowledge can help educational researchers, policymakers, and practitioners deploy affective computing technology to broaden educational practice.
- Publication (Metadata only): EmoRoom: Unveiling academic emotions through interactive visual analytics in classroom videos
Measuring emotions in educational settings can provide important information for predicting and explaining student learning outcomes. Knowledge of students' classroom emotions can also help teachers understand their students' learning behaviors, improve their teaching methods, and optimize students' learning and development. However, it can be highly challenging for teachers to monitor and accurately understand student emotions within classroom or group contexts, especially while they are actively teaching and attending to many students simultaneously. Video recording of classroom activity can address that issue: high-definition cameras can continuously record groups of students, enabling online or offline analyses of student emotions and supplementing teachers' monitoring, retrieval, and interpretation of those processes. Still, teachers find it difficult to use existing emotion recognition methods to analyze student behaviors in videos due to a lack of user-friendly interfaces that facilitate automatic analysis. To address this challenge, we developed EmoRoom, an open-source tool designed to simplify the annotation and analysis of videos from an emotional perspective. The EmoRoom tool offers a practical solution for quantifying and visualizing the frequency of emotions for individuals of interest. Furthermore, it can help teachers refine their teaching methods by offering a comprehensive overview of students' emotional experiences during learning.
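At its core, the quantification step described here, tallying how often each emotion appears for a person of interest, reduces to a frequency count over per-frame labels. A toy sketch (the labels and their counts are hypothetical, not EmoRoom output):

```python
from collections import Counter

# Hypothetical per-frame emotion labels for one student across a recording
frame_labels = ["engagement"] * 40 + ["confusion"] * 15 + ["boredom"] * 5

counts = Counter(frame_labels)
total = sum(counts.values())
for emotion, n in counts.most_common():
    print(f"{emotion}: {n}/{total} frames ({n / total:.0%})")
```

A visual analytics tool would render these proportions per student and over time rather than printing them.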
- Publication (Metadata only): Automated multi-class seizure-type classification system using EEG signals and machine learning algorithms (IEEE, 2024)
Abirami, S.; Tikaram; Kathiravan, M.; Menon, Ramshekhar N.; Thomas, John; Karthick, P. A.; Prince, A. Amalin; Ronickom, Jac Fredo Agastinose
Epilepsy is a chronic brain disorder characterized by recurrent unprovoked seizures. The treatment for epilepsy is influenced by the type of seizure. Therefore, developing a reliable, explainable, and automated system to identify seizure types is necessary. This study aims to automate the classification of five seizure types (focal non-specific, generalized, complex partial, absence, and tonic-clonic) using electroencephalogram (EEG) signals and machine learning algorithms. The EEG signals of 2933 seizures from 327 patients were obtained from the publicly available Temple University Hospital dataset. Initially, the signals were preprocessed using a standard pipeline, and 110 features from the time, frequency, and time-frequency domains were computed for each seizure. The features were then ranked using statistical tests and the extreme gradient boosting (XGBoost) algorithm to identify the most significant ones. We built binary and multiclass seizure-type classification systems using the identified features and machine learning algorithms. Our study revealed that the EEG band powers between 11–13 Hz and 27–29 Hz, the intrinsic mode function (IMF) band power between 19–21 Hz, and the delta band (1–4 Hz) played a crucial role in discriminating the seizures. We achieved average accuracies of 88.21% and 69.43% for binary and multiclass seizure-type classification, respectively, using the XGBoost classifier. We also found that combining features across domains performed better than any single domain. This automated system has the potential to aid neurologists in diagnosing epileptic seizure types.
The proposed methodology can be applied alongside the established clinical approach of visual evaluation for the classification of seizure types.

- Publication (Open Access): Emotion recognition from spatio-temporal representation of EEG signals via 3D-CNN with ensemble learning techniques (MDPI, 2023)
Baranwal, Arapan; Prince, A. Amalin; Murugappan, M.; Javeed Shaikh Mohammed
The recognition of emotions is one of the most challenging issues in human-computer interaction (HCI). EEG signals are widely adopted for recognizing emotions because of their ease of acquisition, mobility, and convenience. Deep neural networks (DNNs) have provided excellent results in emotion recognition studies. Most studies, however, use other methods to extract handcrafted features, such as the Pearson correlation coefficient (PCC), principal component analysis (PCA), and the Higuchi fractal dimension (HFD), even though DNNs are capable of generating meaningful features. Furthermore, most earlier studies largely ignored spatial information between the different channels, focusing mainly on time-domain and frequency-domain representations. This study utilizes a pre-trained 3D-CNN MobileNet model with transfer learning on a spatio-temporal representation of EEG signals to extract features for emotion recognition. In addition to fully connected layers, hybrid models were explored using other decision layers such as a multilayer perceptron (MLP), k-nearest neighbors (KNN), extreme learning machine (ELM), XGBoost (XGB), random forest (RF), and support vector machine (SVM). Additionally, this study investigates the effect of post-processing (filtering) the output labels. Extensive experiments were conducted on the SJTU Emotion EEG Dataset (SEED) (three classes) and SEED-IV (four classes), and the results obtained were comparable to the state of the art. With the conventional 3D-CNN and ELM classifier, the SEED and SEED-IV datasets showed maximum accuracies of 89.18% and 81.60%, respectively. Post-filtering improved the classification performance of the hybrid 3D-CNN with ELM model on SEED and SEED-IV to 90.85% and 83.71%, respectively.
Accordingly, spatio-temporal features extracted from the EEG, along with ensemble classifiers, were found to be the most effective for recognizing emotions compared with state-of-the-art methods.

WOS© Citations: 3 | Scopus© Citations: 10
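The ELM decision layer that recurs in several of the abstracts above has an unusually simple training rule: hidden-layer weights are random and fixed, and only the output weights are solved in closed form by least squares. A minimal NumPy sketch on toy data (sizes, seeds, and data are illustrative, not the papers' implementation):

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random fixed hidden layer,
    pseudoinverse (least-squares) output weights."""

    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # random nonlinear hidden features
        T = np.eye(n_classes)[y]              # one-hot targets
        self.beta = np.linalg.pinv(H) @ T     # closed-form least-squares readout
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return np.argmax(H @ self.beta, axis=1)

# Toy check on linearly separable data
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
acc = (ELM().fit(X, y).predict(X) == y).mean()
print(acc)
```

Because training is a single pseudoinverse rather than gradient descent, ELMs are fast to fit, which is one reason they are popular as the final classifier on top of CNN-extracted features.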