Yuvaraj Rajamanickam
Email: yuvaraj.rajamanickam@nie.edu.sg
Department: Office of Education Research (OER)
13 results (now showing 1-10 of 13)
- Publication (Metadata only): Assessing attentive monitoring levels in dynamic environments through visual neuro-assisted approach
Objective: This work aims to establish a framework for measuring the various attentional levels of a human operator in a real-time animated environment through a visual neuro-assisted approach.
Background: With the increasing trend of automation and remote operations, understanding human-machine interaction in dynamic environments can greatly help to improve performance and promote operational efficiency and safety.
Method: Two independent one-hour experiments were conducted with twenty participants, during which eye-tracking metrics and neural activity from electroencephalogram (EEG) recordings were captured. Participants were required to exhibit attentive behaviour in one session and inattentive behaviour in the other. Two segments (“increasing flight numbers” and “relatively constant flight numbers”) were also extracted to study differences in the participants’ visual behaviour in relation to the number of aircraft.
Results: In the attentive behavioral study, participants showed higher fixation counts, fixation durations, numbers of aircraft spotted, and landing fixations, whereas those in the inattentive behavioral study showed higher zero-fixation frame counts. In experiments involving ‘increasing flight numbers’, a higher percentage of aircraft was spotted than in those with ‘constant flight numbers’ in both groups. Three parameters (number of aircraft spotted, landing fixations, and zero-fixation frame count) are newly established. As radar monitoring is a brain-engaging activity, positive EEG data were registered for all participants. A new Task Engagement Index (TEI) was also formulated to predict different attentional levels.
Conclusion: The results provide a refined, quantifiable tool to differentiate between attentive and inattentive monitoring behavior in a real-time dynamic environment, which can be applied across various sectors.
Recommendation: With the quantitative TEI established, this paves the way for future studies of attentional levels by region and over time, as well as eye-signature studies in relation to visual task engagement and management and the determination of expertise levels. Factors relating to fatigue could also be investigated using the proposed TEI approach.
WOS© Citations: 1 | Scopus© Citations: 3
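The fixation-based parameters highlighted in this entry (fixation count, landing fixations, zero-fixation frame count) lend themselves to simple computation from eye-tracker output. The following is a minimal illustrative sketch assuming a hypothetical per-frame record with a fixation flag, duration, and landing-zone flag; it is not the authors' actual pipeline.

```python
# Minimal sketch with an assumed per-frame format, e.g.
#   {"fixation": True, "duration_ms": 180, "on_landing_zone": False}
# Illustrative only, not the study's actual processing pipeline.
from typing import Dict, List

def fixation_metrics(frames: List[Dict]) -> Dict[str, float]:
    fixations = [f for f in frames if f["fixation"]]
    return {
        "fixation_count": len(fixations),
        "total_fixation_duration_ms": sum(f["duration_ms"] for f in fixations),
        "zero_fixation_frame_count": sum(1 for f in frames if not f["fixation"]),
        "landing_fixations": sum(1 for f in fixations if f.get("on_landing_zone", False)),
    }
```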
- Publication (Metadata only): A comparative study on EEG features for neonatal seizure detection
Epileptic seizure is a common neurological disorder, and its clinical manifestation in neonates differs from that in adults because the neonatal brain is not yet fully developed. In clinical practice, manual observation of EEG recordings to diagnose and identify epileptic seizures is expensive and time-consuming. Computer-aided diagnosis (CAD) tools will enable clinicians to examine the EEG expeditiously and effectively. In this study, we investigate 37 statistical and time-frequency domain features to discriminate between seizure and non-seizure EEG segments. The analysis is performed on the publicly available Helsinki University database. The significant features were identified using the Wilcoxon rank-sum test and then ranked using a manual-threshold area under the curve (AUC) value and the eXtreme Gradient Boosting (XGBoost) feature-importance method. The performance of the features was analyzed using XGBoost and support vector machine (SVM) classifiers with fourfold cross-validation (a sketch of this ranking-and-evaluation step follows below). We found that entropy plays a significant role in discriminating seizure from non-seizure segments. We achieved average AUCs of 0.84 and 0.76 using the XGBoost and SVM classifiers, respectively. This study presents the significance of each extracted feature and will be beneficial to neurologists for the continuous monitoring and diagnosis of seizures in neonates.
Scopus© Citations: 16
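As a rough illustration of the ranking-and-evaluation step described above, the sketch below combines a Wilcoxon rank-sum screen with XGBoost feature importance and fourfold cross-validation. The feature matrix, labels, significance threshold, and classifier settings are assumptions, not the study's exact configuration.

```python
# Illustrative only: screen features with the Wilcoxon rank-sum test, rank the
# survivors by XGBoost importance, and evaluate with fourfold cross-validation.
# X is an assumed (segments x features) matrix; y holds 1 = seizure, 0 = non-seizure.
import numpy as np
from scipy.stats import ranksums
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

def rank_features(X, y, alpha=0.05):
    pvals = np.array([ranksums(X[y == 1, j], X[y == 0, j]).pvalue
                      for j in range(X.shape[1])])
    significant = np.flatnonzero(pvals < alpha)
    model = XGBClassifier(n_estimators=200).fit(X[:, significant], y)
    order = np.argsort(model.feature_importances_)[::-1]
    return significant[order]                      # best features first

def fourfold_auc(X, y, top_k=10):
    ranked = rank_features(X, y)
    clf = XGBClassifier(n_estimators=200)
    return cross_val_score(clf, X[:, ranked[:top_k]], y,
                           cv=4, scoring="roc_auc").mean()
```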
- Publication (Open Access): Comprehensive analysis of feature extraction methods for emotion recognition from multichannel EEG recordings
Advances in signal processing and machine learning have expedited electroencephalogram (EEG)-based emotion recognition research, and numerous EEG signal features have been investigated to detect or characterize human emotions. However, most studies in this area have used relatively small monocentric datasets and focused on a limited range of EEG features, making it difficult to compare the utility of different sets of EEG features for emotion recognition. This study addressed that gap by comparing the classification accuracy (performance) of a comprehensive range of EEG feature sets for identifying emotional states in terms of valence and arousal. The classification accuracy of five EEG feature sets was investigated: statistical features, fractal dimension (FD), Hjorth parameters, higher order spectra (HOS), and features derived using wavelet analysis. Performance was evaluated using two classifier methods, support vector machine (SVM) and classification and regression tree (CART), across five independent and publicly available datasets linking EEG to emotional states: MAHNOB-HCI, DEAP, SEED, AMIGOS, and DREAMER. The FD-CART feature-classification method attained the best mean classification accuracy for valence (85.06%) and arousal (84.55%) across the five datasets. The stability of these findings across the five datasets also indicates that FD features derived from EEG data are reliable for emotion recognition (a brief FD sketch follows this entry). The results may lead to the development of an online feature extraction framework, thereby enabling a real-time EEG-based emotion recognition system.
WOS© Citations: 5 | Scopus© Citations: 26
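Since fractal dimension (FD) features came out on top in this comparison, here is a hedged sketch of one common FD estimator, Higuchi's method, applied to a single EEG channel. The paper does not specify which FD algorithm it used, so this choice is an assumption for illustration.

```python
# Higuchi fractal dimension of a 1-D signal (assumed estimator; illustrative only).
import numpy as np

def higuchi_fd(x: np.ndarray, k_max: int = 10) -> float:
    n = len(x)                                    # x should be much longer than k_max
    curve_lengths = []
    for k in range(1, k_max + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)              # sub-sampled curve x[m], x[m+k], ...
            dist = np.abs(np.diff(x[idx])).sum()
            # Higuchi normalisation for the number of points and the step size
            lk.append(dist * (n - 1) / ((len(idx) - 1) * k) / k)
        curve_lengths.append(np.mean(lk))
    k_vals = np.arange(1, k_max + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(curve_lengths), 1)
    return slope                                  # slope of log L(k) vs. log(1/k)
```

As a quick sanity check, the estimate should come out close to 1 for a smooth sine wave and close to 2 for white noise.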
- Publication (Open Access): Automated classification of student’s emotion through facial expressions using transfer learning (The International Academic Forum, 2023)
; ; Ratnavel Rajalakshmi; Venkata Dhanvanth; Fogarty, Jack S.
Emotions play a critical role in learning. Having a good understanding of student emotions during class is important for students and teachers to improve their teaching and learning experiences. For instance, analyzing students’ emotions during learning can provide teachers with feedback regarding student engagement, enabling teachers to make pedagogical decisions to enhance student learning. This information may also provide students with valuable feedback for improved emotion regulation in learning contexts. In practice, it is not easy for teachers to monitor all students while teaching. In this paper, we propose an automated framework for classifying emotions from students’ facial expressions and recognizing academic affective states, including amusement, anger, boredom, confusion, engagement, interest, relief, sadness, and surprise. The methodology includes dataset construction, pre-processing, and a deep convolutional neural network (CNN) framework based on VGG-19 (pre-trained and configured) as a feature extractor and a multi-layer perceptron (MLP) as a classifier. To evaluate the performance, we created a dataset of the aforementioned facial expressions from three publicly available datasets that include academic emotions, DAiSEE, Raf-DB, and EmotioNet, as well as classroom videos from the internet. The configured VGG-19 CNN system yields a mean classification accuracy, sensitivity, and specificity of 82.73% ± 2.26, 82.55% ± 2.14, and 97.67% ± 0.45, respectively, when estimated by 5-fold cross-validation. The results show that the proposed framework can effectively classify student emotions in class and may provide a useful tool to help teachers understand the emotional climate in their classes, enabling them to make more informed pedagogical decisions to improve student learning experiences.
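A hedged sketch of the VGG-19-as-feature-extractor plus MLP-head arrangement described above, using Keras; the MLP layer sizes, input resolution, and training settings are assumptions rather than the authors' exact configuration.

```python
# Illustrative transfer-learning setup: frozen VGG-19 backbone + small MLP head.
# Layer sizes and hyperparameters are assumptions, not the paper's configuration.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

NUM_CLASSES = 9   # amusement, anger, boredom, confusion, engagement,
                  # interest, relief, sadness, surprise

base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False   # use the pre-trained network as a fixed feature extractor

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),   # illustrative MLP classifier head
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```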
- Publication (Metadata only): Biomedical signals based computer-aided diagnosis for neurological disorders
Biomedical signals provide unprecedented insight into abnormal or anomalous neurological conditions. Computer-aided diagnosis (CAD) systems play a key role in detecting neurological abnormalities and improving diagnostic and treatment consistency in medicine. This book covers different aspects of biomedical-signal-based systems used in the automatic detection and identification of neurological disorders. Several biomedical signals are introduced and analyzed, including the electroencephalogram (EEG), electrocardiogram (ECG), heart rate (HR), magnetoencephalogram (MEG), and electromyogram (EMG). It explains the role of the CAD system in processing biomedical signals and its application to the diagnosis of neurological disorders. The book also provides the basics of biomedical signal processing, optimization methods, and machine learning/deep learning techniques used in designing CAD systems for neurological disorders.
Scopus© Citations: 1
- Publication (Metadata only): Improving automated diagnosis of epilepsy from EEGs beyond IEDs (IOP, 2022)
;Prasanth Thangavel; Thomas, John; Nishant Sinha; Peh Wei Yan; ; Cash, Sydney S.; Rima Chaudhari; Sagar Karia; Jin, Jing; Rahul Rathakrishnan; Vinay Saini; Nilesh Shah; Rohit Srivastava; Tan, Yee-Leng; Westover, Brandon; Dauwels, Justin
Objective: Clinical diagnosis of epilepsy relies partially on identifying interictal epileptiform discharges (IEDs) in scalp electroencephalograms (EEGs). This process is expert-biased and tedious, and it can delay the diagnostic procedure. Beyond automatically detecting IEDs, there are far fewer studies on automated methods to differentiate epileptic EEGs (potentially without IEDs) from normal EEGs. In addition, the diagnostic yield of epilepsy based on a single EEG tends to be low. Consequently, there is a strong need for automated systems for EEG interpretation. Traditionally, epilepsy diagnosis relies heavily on IEDs. However, since not all epileptic EEGs exhibit IEDs, it is essential to explore IED-independent EEG measures for epilepsy diagnosis. The main objective is to develop an automated system for detecting epileptic EEGs, both with and without IEDs. In order to detect epileptic EEGs without IEDs, it is crucial to include EEG features in the algorithm that are not directly related to IEDs.
Approach: In this study, we explore the background characteristics of the interictal EEG for automated and more reliable diagnosis of epilepsy. Specifically, we investigate features based on univariate temporal measures (UTM), spectral, wavelet, Stockwell, connectivity, and graph metrics of EEGs, in addition to patient-related information (age and vigilance state). The evaluation is performed on a sizeable cohort of routine scalp EEGs (685 epileptic EEGs and 1229 normal EEGs) from five centers across Singapore, the USA, and India.
Main results: In comparison with the current literature, we obtained an improved Leave-One-Subject-Out (LOSO) cross-validation (CV) area under the curve (AUC) of 0.871 (balanced accuracy (BAC) of 80.9%) with a combination of three features (IED rate, and Daubechies and Morlet wavelets) for the classification of EEGs with IEDs vs. normal EEGs. The IED-independent feature set UTM achieved a LOSO CV AUC of 0.809 (BAC of 74.4%). The inclusion of IED-independent features also helps to improve the EEG-level classification of epileptic EEGs with and without IEDs vs. normal EEGs, achieving an AUC of 0.822 (BAC of 77.6%) compared with 0.688 (BAC of 59.6%) for classification based only on the IED rate. Specifically, the addition of IED-independent features improved the BAC by 21% in detecting epileptic EEGs that do not contain IEDs.
Significance: These results pave the way towards the automated detection of epilepsy. We are among the first to analyse epileptic EEGs without IEDs, thereby opening up an underexplored option in epilepsy diagnosis.
WOS© Citations: 8 | Scopus© Citations: 11
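The LOSO evaluation mentioned above generalises across patients by holding out all EEGs from one subject at a time. A minimal sketch follows, assuming a feature matrix X, binary labels y, and a subject-ID array, with a stand-in classifier rather than the authors' model.

```python
# Illustrative Leave-One-Subject-Out (LOSO) cross-validation; the classifier is
# a stand-in and X, y, subject_ids are assumed inputs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut

def loso_auc(X, y, subject_ids):
    aucs = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subject_ids):
        clf = RandomForestClassifier(n_estimators=300, random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        scores = clf.predict_proba(X[test_idx])[:, 1]
        if len(np.unique(y[test_idx])) == 2:   # AUC needs both classes in the held-out set
            aucs.append(roc_auc_score(y[test_idx], scores))
    return float(np.mean(aucs))
```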
- Publication (Metadata only): Automated multi-class seizure-type classification system using EEG signals and machine learning algorithms (IEEE, 2024)
;Abirami, S.; Tikaram; Kathiravan, M.; ; Menon, Ramshekhar N.; Thomas, John; Karthick, P. A.; Amalin Prince, A.; Agastinose Ronickom, Jac Fredo
Epilepsy is a chronic brain disorder characterized by recurrent unprovoked seizures. The treatment for epilepsy is influenced by the type of seizure. Therefore, developing a reliable, explainable, and automated system to identify seizure types is necessary. This study aims to automate the classification of five seizure types: focal non-specific, generalized, complex partial, absence, and tonic-clonic, using electroencephalogram (EEG) signals and machine learning algorithms. The EEG signals of 2933 seizures from 327 patients were obtained from the publicly available Temple University Hospital dataset. Initially, the signals were preprocessed using a standard pipeline, and 110 features from the time, frequency, and time-frequency domains were computed for each seizure. The features were then ranked using a statistical test and the eXtreme Gradient Boosting (XGBoost) algorithm to identify the significant features. We built binary and multiclass seizure-type classification systems using the identified features and machine learning algorithms. Our study revealed that the EEG band power between 11–13 Hz and 27–29 Hz, the intrinsic mode function (IMF) band power at 19–21 Hz, and the delta band (1–4 Hz) played a crucial role in discriminating the seizures (a band-power sketch follows below). We achieved average accuracies of 88.21% and 69.43% for binary and multiclass seizure-type classification, respectively, using the XGBoost classifier. We also found that combinations of features performed well compared with any single domain. This automated system has the potential to aid neurologists in diagnosing epileptic seizure types. The proposed methodology can be applied alongside the established clinical approach of visual evaluation for the classification of seizure types.
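The band-power features singled out above can be computed from Welch's power spectral density; a hedged sketch follows, with band edges taken from the abstract and everything else (sampling rate, window length) assumed for illustration.

```python
# Illustrative band-power extraction via Welch's PSD; band edges follow the
# abstract above, other settings are assumptions.
import numpy as np
from scipy.signal import welch

BANDS = {"delta_1_4": (1, 4), "band_11_13": (11, 13), "band_27_29": (27, 29)}

def band_powers(signal: np.ndarray, fs: float) -> dict:
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))   # 2-second windows
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs <= hi)
        powers[name] = np.trapz(psd[mask], freqs[mask])      # integrate PSD over the band
    return powers
```

Per-channel powers like these would then be stacked into a feature vector and passed to a binary or multiclass XGBoost model.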
- Publication (Metadata only): A machine learning framework for classroom EEG recording classification: Unveiling learning-style patterns (MDPI, 2024)
; ; Chadha, Shivam; Prince, A. Amalin; Murugappan, M.; Md. Sakib Islam; Md. Shaheenur Islam Sumon; Chowdhury, Muhammad E. H.
Classifying classroom EEG recordings has the capacity to significantly enhance comprehension and learning by revealing complex neural patterns linked to various cognitive processes. Electroencephalography (EEG) in academic settings allows researchers to study brain activity while students are in class, revealing learning preferences. The purpose of this study was to develop a machine learning framework to automatically classify different learning-style EEG patterns in real classroom environments. Method: In this study, a set of EEG features was investigated, including statistical features, fractal dimension, higher-order spectra, entropy, and a combination of all sets. Three machine learning classifiers, random forest (RF), K-nearest neighbor (KNN), and multilayer perceptron (MLP), were used to evaluate the performance. The proposed framework was evaluated on a real classroom EEG dataset comprising recordings from different teaching blocks: reading, discussion, lecture, and video. Results: The findings revealed that statistical features are the most sensitive feature metric for distinguishing learning patterns from EEG. The statistical-feature and RF classifier combination tested in this study achieved an overall best average accuracy of 78.45% when estimated by fivefold cross-validation (see the sketch following this entry). Conclusions: Our results suggest that EEG time-domain statistics play a substantial role and are more reliable for internal-state classification. This study highlights the importance of using EEG signals in the education context, opening the path for research and development in educational automation.
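A minimal sketch of the statistical-features-plus-random-forest pipeline reported above, assuming epochs shaped (n_epochs, n_channels, n_samples); the four statistics shown are illustrative, not the study's exact feature list.

```python
# Illustrative statistical features + random forest with fivefold cross-validation.
# The epoch layout and feature choice are assumptions.
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def statistical_features(epochs: np.ndarray) -> np.ndarray:
    feats = [epochs.mean(axis=2), epochs.std(axis=2),
             skew(epochs, axis=2), kurtosis(epochs, axis=2)]
    return np.concatenate(feats, axis=1)          # (n_epochs, n_channels * 4)

def learning_block_accuracy(epochs, labels):
    X = statistical_features(epochs)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(clf, X, labels, cv=5, scoring="accuracy").mean()
```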
- Publication (Open Access): Investigating the effects of microclimate on physiological stress and brain function with data science and wearables (MDPI, 2022)
; ; Nguyen, Duc Minh Anh; Nguyen, Thien Minh Tuan;
This paper reports a study conducted by students as an independent research project under the mentorship of a research scientist at the National Institute of Education, Singapore. The aim of the study was to explore the relationships between local environmental stressors and physiological responses from the perspective of citizen science. Starting from July 2021, data from EEG headsets were complemented by data obtained from smartwatches (namely heart rate and its variability, body temperature, and a stress score). Identical units of a wearable device containing environmental sensors (for ambient temperature, air pressure, infrared radiation, relative humidity, and other factors) were designed and worn by five adolescents over the same period. More than 100,000 data points of different types (neurological, physiological, and environmental) were eventually collected and processed through a random forest regression model and deep learning models. The results showed that the microclimatic factors with the greatest influence on the biometric indicators were noise and the concentrations of carbon dioxide and dust. More complex inferences were subsequently drawn from the Shapley-value interpretation of the regression models (a sketch follows below). These findings suggest implications for the design of living conditions with respect to the interaction of the microclimate with human health and comfort.
WOS© Citations: 1 | Scopus© Citations: 1
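The random-forest-plus-Shapley-value analysis described above might look roughly like the sketch below; the column names and target variable are hypothetical stand-ins for the study's sensors, not its actual schema.

```python
# Illustrative random-forest regression with SHAP interpretation.
# Column names (noise_db, co2_ppm, ...) are hypothetical placeholders.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

FEATURES = ["noise_db", "co2_ppm", "dust_ugm3", "ambient_temp_c", "humidity_pct"]

def explain_microclimate(df: pd.DataFrame):
    X, y = df[FEATURES], df["stress_score"]
    model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
    shap_values = shap.TreeExplainer(model).shap_values(X)   # per-sample feature contributions
    shap.summary_plot(shap_values, X)                        # ranks features by overall influence
    return model, shap_values
```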
- Publication (Metadata only): Abnormal EEG detection using time-frequency images and convolutional neural network
In the process of diagnosing neurological disorders, neurologists often study the patient’s brain activity recorded in the form of an electroencephalogram (EEG). Identifying an abnormal EEG serves as a preliminary indicator before specialized testing to determine the neurological disorder. Traditional identification methods involve manual perusal of the EEG signals. This approach is relatively slow and tedious, requires trained neurologists, and delays the treatment plan. Therefore, the development of an automated abnormal-EEG detection system is essential. In this study, we propose a method based on the short-time Fourier transform (STFT), a time-frequency (TF) representation, and deep convolutional neural networks (CNNs) to detect abnormal EEGs. First, the filtered time-series EEG signals are converted into TF images by applying the STFT (as sketched below). The images are then fed to three popular configurable CNN structures, namely DenseNet, SeizureNet, and Inception-ResNet-V2, to extract deep learned features. Finally, an extreme learning machine (ELM)-based classifier classifies the input TF images. The proposed STFT-based CNN method is evaluated using the Temple University Hospital (TUH) abnormal EEG corpus, which is available in the public domain. The experiments showed that the SeizureNet-ELM combination achieved an average (fivefold cross-validation) accuracy, specificity, sensitivity, and F1-score of 85.87%, 88.43%, 83.23%, and 0.858, respectively. The results demonstrate that the proposed framework may aid clinicians in abnormal EEG detection for early treatment planning.
Scopus© Citations: 3
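The STFT step described above (filtered EEG to time-frequency image) could be sketched as follows; the sampling rate, window length, and normalisation are assumptions, and the result would still need to be resized and stacked to match a given CNN's input.

```python
# Illustrative STFT time-frequency image for one EEG channel; settings are assumed.
import numpy as np
from scipy.signal import stft

def tf_image(signal: np.ndarray, fs: float = 250.0) -> np.ndarray:
    # 1-second windows with 50% overlap
    f, t, Z = stft(signal, fs=fs, nperseg=int(fs), noverlap=int(fs) // 2)
    spec = np.log1p(np.abs(Z))                                       # log-magnitude spectrogram
    spec = (spec - spec.min()) / (spec.max() - spec.min() + 1e-12)   # scale to [0, 1]
    return spec                                                      # 2-D image to resize/stack for a CNN
```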