Search Results (68)

Search Parameters:
Keywords = behaviour pattern classification

37 pages, 577 KB  
Article
Machine Learning Classification of Customer Perceptions of Public Passenger Transport with a Focus on Ecological and Economic Determinants
by Eva Kicova, Lucia Duricova, Lubica Gajanova and Juraj Fabus
Systems 2026, 14(2), 143; https://doi.org/10.3390/systems14020143 - 29 Jan 2026
Viewed by 198
Abstract
Public passenger transport systems increasingly face the challenge of balancing economic efficiency with ecological sustainability, reflecting both policy objectives and passenger expectations. This study examines passenger perceptions of the economic and environmental aspects of public transport services and the factors influencing these perceptions, primarily based on survey data collected in Slovakia. The Slovak dataset was analysed using contingency analysis, namely Chi-square tests of independence, contingency coefficients, and the sign scheme, together with C5.0 decision tree classification models, to identify key determinants of behavioural and attitudinal outcomes. In addition, descriptive comparisons with a complementary Polish sample illustrate potential differences in preference patterns across national contexts, without formal statistical inference. The results identify key socio-demographic and behavioural factors influencing passenger perceptions and usage patterns in Slovakia. The study enhances scientific understanding of public transport by exploring the interaction between the economic efficiency and ecological sustainability of transport services and provides practical recommendations for the strategic management of transport companies, especially in service modernisation, marketing communication, and support for sustainable mobility. The findings are relevant not only to Slovakia but also to broader European discussions on integrating economic and environmental dimensions into public transport development.
(This article belongs to the Section Systems Theory and Methodology)
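As a rough illustration of the contingency-analysis step named in this abstract (Chi-square test of independence plus contingency coefficient), the sketch below runs the test on a made-up two-variable survey table. The variable names and data are invented, and the C5.0 tree itself is not reproduced, since it has no standard Python implementation.

```python
# Contingency analysis on a synthetic survey-style table (illustrative only).
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age_group": rng.choice(["18-30", "31-50", "51+"], n),
    "uses_public_transport": rng.choice(["yes", "no"], n),
})

# Chi-square test of independence on the cross-tabulation.
table = pd.crosstab(df["age_group"], df["uses_public_transport"])
chi2, p, dof, _ = chi2_contingency(table)

# Pearson's contingency coefficient C = sqrt(chi2 / (chi2 + N)).
contingency_coef = np.sqrt(chi2 / (chi2 + table.values.sum()))
print(f"chi2={chi2:.2f}, p={p:.3f}, C={contingency_coef:.3f}")
```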
39 pages, 578 KB  
Article
Generational and Economic Differences in the Effectiveness of Product Placement: A Predictive Approach Using CART Analysis
by David Vrtana and Lucia Duricova
Adm. Sci. 2026, 16(2), 61; https://doi.org/10.3390/admsci16020061 - 23 Jan 2026
Viewed by 371
Abstract
Product placement has become an integral part of contemporary marketing communication, aiming to influence consumer attitudes and purchasing behaviour through subtle brand exposure in audiovisual media. Despite its growing prevalence, the effectiveness of product placement in shaping purchase intentions remains influenced by various demographic and behavioural factors. This study examines how demographic and economic factors jointly shape consumer responses to product placement and identifies the key determinants of consumers’ likelihood of purchasing products featured in audiovisual media. Data for the study were collected through a questionnaire survey and analysed using a combination of non-parametric subgroup tests, contingency-based association analysis, and machine-learning classification methods to assess both marginal group differences and multivariate interaction patterns. In addition to inferential testing, predictive models were developed using CART and alternative modelling techniques to verify the robustness of the identified predictors across analytical frameworks. The results reveal statistically significant generational and economic heterogeneity in awareness of product placement and purchase probability, highlighting the dominant role of age in shaping purchasing behaviour. The findings contribute to a deeper understanding of behavioural segmentation in audiovisual marketing and provide insights for optimising marketing communication strategies within audiovisual content.
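A minimal sketch of the CART step described above (scikit-learn's DecisionTreeClassifier implements CART), paired with a non-parametric subgroup test. The variables, effect direction, and data are invented for illustration and do not come from the study.

```python
# CART plus a Kruskal-Wallis subgroup test on made-up consumer data.
import numpy as np
from scipy.stats import kruskal
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 400
age = rng.integers(18, 70, n)
income = rng.normal(1500, 400, n)
# Toy outcome: younger respondents more likely to buy placed products.
buys = (rng.random(n) < 1 / (1 + np.exp(0.08 * (age - 35)))).astype(int)

# Non-parametric test for differences across age bands.
bands = np.digitize(age, [30, 45, 60])
print(kruskal(*[buys[bands == b] for b in np.unique(bands)]))

# CART model with cross-validated accuracy.
X = np.column_stack([age, income])
cart = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20)
print("CV accuracy:", cross_val_score(cart, X, buys, cv=5).mean())
```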
22 pages, 2885 KB  
Article
Classifying National Pathways of Sustainable Development Through Bayesian Probabilistic Modelling
by Oksana Liashenko, Kostiantyn Pavlov, Olena Pavlova, Robert Chmura, Aneta Czechowska-Kosacka, Tetiana Vlasenko and Anna Sabat
Sustainability 2026, 18(2), 601; https://doi.org/10.3390/su18020601 - 7 Jan 2026
Viewed by 293
Abstract
As global efforts to achieve the Sustainable Development Goals (SDGs) enter a critical phase, there is a growing need for analytical tools that reflect the complexity and heterogeneity of development pathways. This study introduces a probabilistic classification framework designed to uncover latent typologies of national performance across the seventeen Sustainable Development Goals. Unlike traditional ranking systems or composite indices, the proposed method uses raw, standardised goal-level indicators and accounts for both structural variation and classification uncertainty. The model integrates a Bayesian decision tree with penalised spline regressions and includes regional covariates to capture context-sensitive dynamics. Based on publicly available global datasets covering more than 150 countries, the analysis identifies three distinct development profiles: structurally vulnerable systems, transitional configurations, and consolidated performers. Posterior probabilities enable soft classification, highlighting ambiguous or hybrid country profiles that do not fit neatly into a single category. Results reveal both monotonic and non-monotonic indicator behaviours, including saturation effects in infrastructure-related goals and paradoxical patterns in climate performance. This typology-sensitive approach provides a transparent and interpretable alternative to aggregated indices, supporting more differentiated and evidence-based sustainability assessments. The findings provide a practical basis for tailoring national strategies to structural conditions and the multidimensional nature of sustainable development.
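The Bayesian decision tree with penalised splines used in this study is not reproduced here; the sketch below only illustrates the general idea of soft classification via posterior membership probabilities, using a Gaussian mixture on standardised, randomly generated stand-in scores for 150 countries across 17 goals.

```python
# Soft (posterior-probability) classification on stand-in SDG-style scores.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
scores = rng.normal(size=(150, 17))      # hypothetical 150 countries x 17 goals

X = StandardScaler().fit_transform(scores)
gmm = GaussianMixture(n_components=3, covariance_type="diag", random_state=0).fit(X)

posteriors = gmm.predict_proba(X)            # soft membership per country
hard_labels = posteriors.argmax(axis=1)      # hard assignment if needed
ambiguous = posteriors.max(axis=1) < 0.6     # hybrid profiles with no clear class
print("countries with ambiguous profiles:", int(ambiguous.sum()))
```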
32 pages, 2990 KB  
Article
Enhancing Classification Results of Slope Entropy Using Downsampling Schemes
by Vicent Moltó-Gallego, David Cuesta-Frau and Mahdy Kouka
Axioms 2025, 14(11), 797; https://doi.org/10.3390/axioms14110797 - 29 Oct 2025
Viewed by 527
Abstract
Entropy calculation provides meaningful insight into the dynamics and complexity of temporal signals, playing a crucial role in classification tasks. These measures describe intrinsic characteristics of time series, such as regularity, complexity, or predictability. Depending on the characteristics of the signal under study, the performance of entropy as a classification feature may vary, and not every entropy calculation technique is suitable for a given signal. We therefore aim to increase the classification accuracy of entropy features, especially Slope Entropy (SlpEn), by enhancing the information content of the patterns present in the data with downsampling techniques applied before the entropy is calculated. Specifically, we use both uniform downsampling (UDS) and a non-uniform scheme known as Trace Segmentation (TS), which enhances the most prominent patterns in a time series while discarding less relevant ones. SlpEn is a recently proposed method for time series entropy estimation that generally outperforms other methods in classification tasks, and we combine it with either TS or UDS. Since both techniques reduce the number of samples over which the entropy is calculated, they can also significantly decrease computation time. In this work, we apply TS or UDS to the data before calculating SlpEn to assess how downsampling affects SlpEn in terms of performance and computational cost, experimenting on different kinds of datasets, and we compare SlpEn with one of the most commonly used entropy methods, Permutation Entropy (PE). Results show that both uniform and non-uniform downsampling enhance the performance of SlpEn and PE when used as the only classification features, gaining up to 13% and 22% in accuracy, respectively, with TS, and up to 10% and 21% with UDS. In addition, when downsampling to 50% of the original data, we obtain a speedup of around ×2 for the entropy calculation alone; when the downsampling step itself is included in the timing, speedups with UDS range between ×1.2 and ×1.7, depending on the dataset, while speedups with TS remain above ×2, with accuracy levels similar to those obtained with 100% of the original data. Our findings suggest that most time series, especially medical ones, have been recorded at sampling frequencies above the optimal threshold, capturing information that is unnecessary for classification and is discarded by downsampling. Downsampling is potentially beneficial to any entropy calculation technique, not only those used in this paper: it can enhance classification performance while reducing computation time. We recommend downsampling to between 20% and 45% of the original data to obtain the best classification accuracy.
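To make the two downsampling schemes concrete, here is a small sketch: uniform downsampling keeps evenly spaced samples, while the trace-segmentation function follows the usual formulation (equal increments of cumulative absolute amplitude change) and is my reading of the technique, not the authors' code; the SlpEn computation itself is not included.

```python
# Uniform downsampling vs. trace segmentation (non-uniform), illustrative only.
import numpy as np

def uniform_downsample(x: np.ndarray, ratio: float) -> np.ndarray:
    """Keep round(ratio * len(x)) evenly spaced samples."""
    idx = np.linspace(0, len(x) - 1, int(round(ratio * len(x)))).astype(int)
    return x[idx]

def trace_segmentation(x: np.ndarray, ratio: float) -> np.ndarray:
    """Sample at equal steps of the cumulative absolute first difference,
    so fast-changing regions keep more points than flat ones."""
    n_out = int(round(ratio * len(x)))
    trace = np.concatenate([[0.0], np.cumsum(np.abs(np.diff(x)))])
    targets = np.linspace(0.0, trace[-1], n_out)
    idx = np.searchsorted(trace, targets)
    return x[np.clip(idx, 0, len(x) - 1)]

signal = np.sin(np.linspace(0, 20, 1000)) + 0.1 * np.random.default_rng(3).normal(size=1000)
print(len(uniform_downsample(signal, 0.3)), len(trace_segmentation(signal, 0.3)))
```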
50 pages, 4484 KB  
Systematic Review
Bridging Data and Diagnostics: A Systematic Review and Case Study on Integrating Trend Monitoring and Change Point Detection for Wind Turbines
by Abu Al Hassan and Phong Ba Dao
Energies 2025, 18(19), 5166; https://doi.org/10.3390/en18195166 - 28 Sep 2025
Cited by 5 | Viewed by 1299
Abstract
Wind turbines face significant operational challenges due to their complex electromechanical systems, exposure to harsh environmental conditions, and high maintenance costs. Reliable structural health monitoring and condition monitoring are therefore essential for early fault detection, minimizing downtime, and optimizing maintenance strategies. Traditional approaches typically rely on either Trend Monitoring (TM) or Change Point Detection (CPD). TM methods track the long-term behaviour of process parameters, using statistical analysis or machine learning (ML) to identify abnormal patterns that may indicate emerging faults. In contrast, CPD techniques focus on detecting abrupt changes in time-series data, identifying shifts in mean, variance, or distribution, and providing accurate fault onset detection. While each approach has strengths, they also face limitations: TM effectively identifies fault type but lacks precision in timing, while CPD excels at locating fault occurrence but lacks detailed fault classification. This review critically examines the integration of TM and CPD methods for wind turbine diagnostics, highlighting their complementary strengths and weaknesses through an analysis of widely used TM techniques (e.g., Fast Fourier Transform, Wavelet Transform, Hilbert–Huang Transform, Empirical Mode Decomposition) and CPD methods (e.g., Bayesian Online Change Point Detection, Kullback–Leibler Divergence, Cumulative Sum). By combining both approaches, diagnostic accuracy can be enhanced, leveraging TM’s detailed fault characterization with CPD’s precise fault timing. The effectiveness of this synthesis is demonstrated in a case study on wind turbine blade fault diagnosis. Results show that TM–CPD integration enhances early detection by coupling vibration and frequency trend analysis with robust statistical validation of fault onset.
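Of the CPD methods listed above, the cumulative sum (CUSUM) detector is simple enough to sketch; the version below is the textbook two-sided formulation, and the simulated temperature-like signal, drift, and threshold values are illustrative assumptions rather than settings from the reviewed studies.

```python
# Two-sided CUSUM change point detector on a simulated mean-shift signal.
import numpy as np

def cusum(x, target, drift, threshold):
    """Return the index of the first alarm, or None if no change is flagged."""
    pos, neg = 0.0, 0.0
    for i, v in enumerate(x):
        pos = max(0.0, pos + (v - target) - drift)
        neg = max(0.0, neg - (v - target) - drift)
        if pos > threshold or neg > threshold:
            return i
    return None

rng = np.random.default_rng(4)
healthy = rng.normal(60.0, 1.0, 500)   # e.g. bearing temperature before a fault
faulty = rng.normal(62.5, 1.0, 200)    # mean shift after fault onset
signal = np.concatenate([healthy, faulty])
print("alarm at sample:", cusum(signal, target=60.0, drift=0.5, threshold=5.0))
```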
18 pages, 574 KB  
Article
Cognitive Profiling of Children and Adolescents with ADHD Using the WISC-IV
by Megan Rosales-Gómez, Ignasi Navarro-Soria, Manuel Torrecillas, María Eugenia López and Beatriz Delgado
Behav. Sci. 2025, 15(9), 1279; https://doi.org/10.3390/bs15091279 - 18 Sep 2025
Viewed by 3702
Abstract
Attention Deficit Hyperactivity Disorder (ADHD) is a prevalent neurodevelopmental disorder characterised by cognitive and behavioural impairments. This study aimed to identify cognitive patterns associated with ADHD in a sample of 719 children and adolescents (363 with ADHD and 356 controls) assessed using the Wechsler Intelligence Scale for Children—Fourth Edition (WISC-IV). Compared to controls, the clinical group exhibited significantly lower scores in the Working Memory Index (WMI), Processing Speed Index (PSI), and Cognitive Proficiency Index (CPI). No significant group differences were found in Verbal Comprehension (VCI) or Perceptual Reasoning (PRI) after controlling for age and sex. Factorial MANOVA results revealed that WMI, PSI, and CPI deficits remained stable across age groups and were more pronounced in males. Females with ADHD outperformed males in PSI. A binary logistic regression model including the WISC-IV core indices (Nagelkerke R2 = 0.44) identified VCI, PRI, WMI, and PSI as significant predictors of group membership, indicating that lower scores in WMI and PSI, and higher scores in VCI and PRI, increased the likelihood of ADHD classification. These findings reinforce the use of the WISC-IV as a complementary tool in the cognitive characterisation and clinical assessment of ADHD in youth.
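A sketch of the kind of binary logistic regression reported above, using the four WISC-IV core indices as predictors; all data here are simulated and the coefficients are arbitrary, so only the index names and the general modelling setup come from the abstract.

```python
# Binary logistic regression over simulated WISC-IV core index scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 719
indices = ["VCI", "PRI", "WMI", "PSI"]
X = rng.normal(100, 15, size=(n, 4))
# Toy outcome: lower WMI/PSI and relatively higher VCI/PRI raise the odds.
logit = (0.04 * (X[:, 0] - 100) + 0.03 * (X[:, 1] - 100)
         - 0.06 * (X[:, 2] - 100) - 0.06 * (X[:, 3] - 100))
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(indices, model.coef_[0]):
    print(f"{name}: odds ratio per index point = {np.exp(coef):.3f}")
```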
32 pages, 5664 KB  
Article
Static and Dynamic Malware Analysis Using CycleGAN Data Augmentation and Deep Learning Techniques
by Moses Ashawa, Robert McGregor, Nsikak Pius Owoh, Jude Osamor and John Adejoh
Appl. Sci. 2025, 15(17), 9830; https://doi.org/10.3390/app15179830 - 8 Sep 2025
Viewed by 1428
Abstract
The increasing sophistication of malware and the use of evasive techniques such as obfuscation pose significant challenges to traditional detection methods. This paper presents a deep convolutional neural network (CNN) framework that integrates static and dynamic analysis for malware classification using RGB image representations. Binary and memory dump files are transformed into images to capture structural and behavioural patterns often missed in raw formats. The proposed system comprises two tailored CNN architectures: a static model with four convolutional blocks designed for binary-derived images and a dynamic model with three blocks optimised for noisy memory dump data. To enhance generalisation, we employed Cycle-Consistent Generative Adversarial Networks (CycleGANs) for cross-domain image augmentation, expanding the dataset to over 74,000 RGB images sourced from benchmark repositories (MaleVis and Dumpware10). The static model achieved 99.45% accuracy and perfect recall, demonstrating high sensitivity with minimal false positives. The dynamic model achieved 99.21% accuracy. Experimental results demonstrate that the fused approach effectively detects malware variants by learning discriminative visual patterns from both structural and runtime perspectives. This research contributes a scalable and robust solution for malware classification, in contrast to single-analysis approaches.
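As a rough, much-simplified stand-in for the static-analysis CNN described above (four convolutional blocks over RGB images), the sketch below builds a comparable model in Keras. The framework choice, layer widths, input resolution, and class count are all assumptions rather than the authors' architecture, and the CycleGAN augmentation stage is omitted.

```python
# Toy four-block CNN over RGB malware images (placeholder architecture).
import tensorflow as tf

def build_static_cnn(num_classes=25, input_shape=(224, 224, 3)):
    """Four conv blocks -> global pooling -> dense head; class count is a placeholder."""
    layers = [tf.keras.Input(shape=input_shape)]
    for filters in (32, 64, 128, 256):  # four convolutional blocks
        layers.append(tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu"))
        layers.append(tf.keras.layers.MaxPooling2D())
    layers += [
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ]
    model = tf.keras.Sequential(layers)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

build_static_cnn().summary()
```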
13 pages, 2010 KB  
Article
Electroencephalography Signatures Associated with Developmental Dyslexia Identified Using Principal Component Analysis
by Günet Eroğlu and Mhd Raja Abou Harb
Diagnostics 2025, 15(17), 2168; https://doi.org/10.3390/diagnostics15172168 - 27 Aug 2025
Viewed by 1036
Abstract
Background/Objectives: Developmental dyslexia is characterised by neuropsychological processing deficits and marked hemispheric functional asymmetries. To uncover latent neurophysiological features linked to reading impairment, we applied dimensionality reduction and clustering techniques to high-density electroencephalographic (EEG) recordings. We further examined the functional relevance of these features to reading performance under standardised test conditions. Methods: EEG data were collected from 200 children (100 with dyslexia and 100 age- and IQ-matched typically developing controls). Principal Component Analysis (PCA) was applied to high-dimensional EEG spectral power datasets to extract latent neurophysiological components. Twelve principal components, collectively accounting for 84.2% of the variance, were retained. K-means clustering was performed on the PCA-derived components to classify participants. Group differences in spectral power were evaluated, and correlations between principal component scores and reading fluency, measured by the TILLS Reading Fluency Subtest, were computed. Results: K-means clustering trained on PCA-derived features achieved a classification accuracy of 89.5% (silhouette coefficient = 0.67). Dyslexic participants exhibited significantly higher right parietal–occipital alpha (P8) power compared to controls (mean = 3.77 ± 0.61 vs. 2.74 ± 0.56; p < 0.001). Within the dyslexic group, PC1 scores were strongly negatively correlated with reading fluency (r = −0.61, p < 0.001), underscoring the functional relevance of EEG-derived components to behavioural reading performance. Conclusions: PCA-derived EEG patterns can distinguish between dyslexic and typically developing children with high accuracy, revealing spectral power differences consistent with atypical hemispheric specialisation. These results suggest that EEG-derived neurophysiological features hold promise for early dyslexia screening. However, before EEG can be firmly established as a reliable molecular biomarker, further multimodal research integrating EEG with immunological, neurochemical, and genetic measures is warranted.
(This article belongs to the Special Issue EEG Analysis in Diagnostics)
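A minimal sketch of the PCA-plus-k-means pipeline described above, run on random stand-in data shaped like per-channel spectral power features; the feature layout (19 channels × 5 bands) is an assumption, and only the choice of 12 components and two clusters mirrors the abstract.

```python
# PCA followed by k-means clustering on stand-in EEG spectral power features.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 19 * 5))   # 200 children, assumed 19 channels x 5 bands

Xs = StandardScaler().fit_transform(X)
pca = PCA(n_components=12).fit(Xs)   # 12 retained components, as in the study
scores = pca.transform(Xs)
print("explained variance:", pca.explained_variance_ratio_.sum())

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
print("silhouette:", silhouette_score(scores, labels))
```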
20 pages, 12036 KB  
Article
Spatiotemporal Mapping of Grazing Livestock Behaviours Using Machine Learning Algorithms
by Guo Ye and Rui Yu
Sensors 2025, 25(15), 4561; https://doi.org/10.3390/s25154561 - 23 Jul 2025
Cited by 1 | Viewed by 1221
Abstract
Grassland ecosystems are fundamentally shaped by the complex behaviours of livestock. While most previous studies have monitored grassland health using vegetation indices, such as NDVI and LAI, fewer have investigated livestock behaviours as direct drivers of grassland degradation. In particular, the spatial clustering and temporal concentration patterns of livestock behaviours are critical yet underexplored factors that significantly influence grassland ecosystems. This study investigated the spatiotemporal patterns of livestock behaviours under different grazing management systems and grazing-intensity gradients (GIGs) in Wenchang, China, using high-resolution GPS tracking data and machine learning classification. The K-Nearest Neighbours (KNN) model combined with SMOTE-ENN resampling achieved the highest accuracy, with F1-scores of 0.960 and 0.956 for the continuous and rotational grazing datasets, respectively. The results showed that the continuous grazing system failed to mitigate grazing pressure when grazing intensity was reduced, as the spatial clustering of livestock behaviours did not decrease accordingly, and the frequency of temporal peaks in grazing behaviour even showed an increasing trend. Conversely, the rotational grazing system responded more effectively, as reduced GIGs led to more evenly distributed temporal activity patterns and lower spatial clustering. These findings highlight the importance of incorporating livestock behavioural patterns into grassland monitoring and offer data-driven insights for sustainable grazing management.
(This article belongs to the Section Smart Agriculture)
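A sketch of the classification step named above: k-nearest neighbours after SMOTE-ENN resampling (via imbalanced-learn). The feature names, labels, and synthetic data are placeholders for the GPS-derived features used in the study.

```python
# KNN classification with SMOTE-ENN rebalancing on toy behaviour data.
import numpy as np
from imblearn.combine import SMOTEENN
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(7)
n = 2000
# Placeholder GPS-derived features, e.g. speed, turning angle, step length, ...
X = rng.normal(size=(n, 6))
# Toy, imbalanced behaviour labels (e.g. grazing / walking / resting) driven by feature 0.
y = np.where(X[:, 0] > 1.0, 2, np.where(X[:, 0] > 0.3, 1, 0))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Rebalance the training set with SMOTE oversampling followed by ENN cleaning.
X_res, y_res = SMOTEENN(random_state=0).fit_resample(X_tr, y_tr)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_res, y_res)
print("macro F1:", f1_score(y_te, knn.predict(X_te), average="macro"))
```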
25 pages, 9742 KB  
Article
Autism Spectrum Disorder Detection Using Skeleton-Based Body Movement Analysis via Dual-Stream Deep Learning
by Jungpil Shin, Abu Saleh Musa Miah, Manato Kakizaki, Najmul Hassan and Yoichi Tomioka
Electronics 2025, 14(11), 2231; https://doi.org/10.3390/electronics14112231 - 30 May 2025
Cited by 2 | Viewed by 2494
Abstract
Autism Spectrum Disorder (ASD) poses significant challenges in diagnosis due to its diverse symptomatology and the complexity of early detection. Atypical gait and gesture patterns, prominent behavioural markers of ASD, hold immense potential for facilitating early intervention and optimising treatment outcomes. These patterns can be efficiently and non-intrusively captured using modern computational techniques, making them valuable for ASD recognition. Various types of research have been conducted to detect ASD through deep learning, including facial feature analysis, eye gaze analysis, and movement and gesture analysis. In this study, we optimise a dual-stream architecture that combines image classification and skeleton recognition models to analyse video data for body motion analysis. The first stream processes Skepxels—spatial representations derived from skeleton data—using ConvNeXt-Base, a robust image recognition model that efficiently captures aggregated spatial embeddings. The second stream encodes angular features, embedding relative joint angles into the skeleton sequence and extracting spatiotemporal dynamics using the Multi-Scale Graph 3D Convolutional Network (MSG3D), a combination of Graph Convolutional Networks (GCNs) and Temporal Convolutional Networks (TCNs). We replace the ViT model from the original architecture with ConvNeXt-Base to evaluate the efficacy of CNN-based models in capturing gesture-related features for ASD detection. Additionally, we experimented with a Stack Transformer in the second stream instead of MSG3D but found it to result in lower performance accuracy, thus highlighting the importance of GCN-based models for motion analysis. The integration of these two streams ensures comprehensive feature extraction, capturing both global and detailed motion patterns. A pairwise Euclidean distance loss is employed during training to enhance the consistency and robustness of feature representations. The results from our experiments demonstrate that the two-stream approach, combining ConvNeXt-Base and MSG3D, offers a promising method for effective autism detection. This approach not only enhances accuracy but also contributes valuable insights into optimising deep learning models for gesture-based recognition. By integrating image classification and skeleton recognition, we can better capture both global and detailed motion patterns, which are crucial for improving early ASD diagnosis and intervention strategies.
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications, 4th Edition)
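To make the angular-feature idea concrete, the toy function below computes the angle at a joint from the 3D coordinates of three connected joints; the joint names and coordinates are invented, and this is only the kind of quantity the second stream embeds, not the authors' feature pipeline.

```python
# Relative joint angle from 3D skeleton coordinates (illustrative only).
import numpy as np

def joint_angle(parent: np.ndarray, joint: np.ndarray, child: np.ndarray) -> float:
    """Angle in degrees at `joint`, formed by the segments to parent and child."""
    u = parent - joint
    v = child - joint
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

shoulder = np.array([0.0, 1.4, 0.0])
elbow = np.array([0.3, 1.1, 0.0])
wrist = np.array([0.5, 1.3, 0.1])
print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} degrees")
```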
15 pages, 278 KB  
Article
Hepatocellular Carcinoma in Delta Hepatitis Versus HBV Monoinfection: Spot the Differences
by Razvan Cerban, Mirela Chitul, Speranta Iacob, Daria Gheorghe, Diana Georgiana Stan and Liana Gheorghe
Livers 2025, 5(2), 23; https://doi.org/10.3390/livers5020023 - 23 May 2025
Viewed by 1804
Abstract
Background: Hepatitis delta virus (HDV) was recently proven to be directly carcinogenic to hepatocytes via different mechanisms compared to hepatitis B virus (HBV). Our study evaluated the differences in hepatocellular carcinoma (HCC) behaviour between the two settings. Methods: A retrospective tertiary care centre study was conducted and included all HBsAg-positive adult patients admitted from 1 January 2021 to 31 December 2022. IBM SPSS 29.0 was used for statistics. Patients were split into a control group, HBV + HCC, and a study group, HBV + HDV + HCC. Results: A total of 679 patients were included, with an estimated prevalence of HCC of 20.8% in the HDV population versus 9.1% in the control group (p < 0.001), with an OR of 2.263 (95% CI: 1.536–3.333, p = 0.001). Patients in the HBV monoinfection group developed HCC at a slightly younger age (mean ± SD, 50.65 ± 12.302 years vs. 51.4 ± 13.708, p = 0.457). Study group patients had smaller tumours (maximum diameter: 32.66 ± 23.181 mm vs. 56.75 ± 38.09 mm, p = 0.002), lower AFP values (177.24 ± 364.8 ng/mL vs. 183.07 ± 336.77 ng/mL, p = 0.941) and predominantly loco-regional treatment. BCLC classification (p = 0.001) and the AFP-Duvoux score (p = 0.001) showed more advanced HCC in HBV monoinfection, with access mainly to systemic therapies (p < 0.001). Conclusions: HCC is more frequent in HDV-infected patients and follows a different pattern, with smaller tumours, less advanced neoplasia and less access to curative treatment compared to HBV-monoinfection-associated HCC.
(This article belongs to the Special Issue Clinical Management of Liver Cancers)
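For readers who want to see how an odds ratio and its confidence interval are obtained, here is a worked example on hypothetical 2×2 cell counts chosen only to roughly match the stated prevalences and sample size; the abstract does not report the full table, so these counts will not reproduce the published OR of 2.263 exactly.

```python
# Odds ratio with a Wald 95% CI on a hypothetical 2x2 table (illustrative only).
import numpy as np
from scipy.stats import norm

# rows: HDV+ / HDV-, columns: HCC yes / no -- invented counts, not study data
a, b = 30, 114    # HDV+ with / without HCC  (~20.8% prevalence)
c, d = 49, 486    # HDV- with / without HCC  (~9.1% prevalence)

odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
z = norm.ppf(0.975)
ci = np.exp(np.log(odds_ratio) + np.array([-z, z]) * se_log_or)
print(f"OR = {odds_ratio:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```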
25 pages, 1517 KB  
Article
Towards Structured Gaze Data Classification: The Gaze Data Clustering Taxonomy (GCT)
by Yahdi Siradj, Kiki Maulana Adhinugraha and Eric Pardede
Multimodal Technol. Interact. 2025, 9(5), 42; https://doi.org/10.3390/mti9050042 - 3 May 2025
Cited by 1 | Viewed by 1538
Abstract
Gaze data analysis plays a crucial role in understanding human visual attention and behaviour. However, raw gaze data is often noisy and lacks inherent structure, making interpretation challenging. Therefore, preprocessing techniques such as classification are essential to extract meaningful patterns and improve the reliability of gaze-based analysis. This study introduces the Gaze Data Clustering Taxonomy (GCT), a novel approach that categorises gaze data into structured clusters to improve its reliability and interpretability. GCT classifies gaze data based on cluster count, target presence, and spatial–temporal relationships, allowing for more precise gaze-to-target association. We utilise several machine learning techniques, such as k-NN, k-Means, and DBScan, to apply the taxonomy to a Random Saccade Task dataset, demonstrating its effectiveness in gaze classification. Our findings highlight how clustering provides a structured approach to gaze data preprocessing by distinguishing meaningful patterns from unreliable data.
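A minimal sketch of one of the clustering techniques named above (DBSCAN) applied to a synthetic 2-D gaze cloud, with noise points treated as unreliable samples; the eps/min_samples values and the data layout are assumptions, not parameters from the GCT study.

```python
# DBSCAN on synthetic normalised gaze coordinates; label -1 marks noise.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(8)
target_a = rng.normal([0.30, 0.40], 0.02, size=(150, 2))   # fixations near target A
target_b = rng.normal([0.70, 0.60], 0.02, size=(150, 2))   # fixations near target B
noise = rng.uniform(0, 1, size=(30, 2))                    # stray / unreliable samples
gaze = np.vstack([target_a, target_b, noise])

labels = DBSCAN(eps=0.05, min_samples=10).fit_predict(gaze)
print("clusters found:", len(set(labels)) - (1 if -1 in labels else 0))
print("samples flagged as noise:", int((labels == -1).sum()))
```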
25 pages, 4434 KB  
Article
Transforming Building Energy Management: Sparse, Interpretable, and Transparent Hybrid Machine Learning for Probabilistic Classification and Predictive Energy Modelling
by Yiping Meng, Yiming Sun, Sergio Rodriguez and Binxia Xue
Architecture 2025, 5(2), 24; https://doi.org/10.3390/architecture5020024 - 31 Mar 2025
Cited by 2 | Viewed by 2119
Abstract
The building sector, responsible for 40% of global energy consumption, faces increasing demands for sustainability and energy efficiency. Accurate energy consumption forecasting is essential to optimise performance and reduce environmental impact. This study introduces a hybrid machine learning framework grounded in Sparse, Interpretable, and Transparent (SIT) modelling to enhance building energy management. Leveraging the REFIT Smart Home Dataset, the framework integrates occupancy pattern analysis, appliance-level energy prediction, and probabilistic uncertainty quantification. The framework clusters occupancy-driven energy usage patterns using K-means and Gaussian Mixture Models, identifying three distinct household profiles: high-energy frequent occupancy, moderate-energy variable occupancy, and low-energy irregular occupancy. A Random Forest classifier is employed to pinpoint key appliances influencing occupancy, with a drop-in accuracy analysis verifying their predictive power. Uncertainty analysis quantifies classification confidence, revealing ambiguous periods linked to irregular appliance usage patterns. Additionally, time-series decomposition and appliance-level predictions are contextualised with seasonal and occupancy dynamics, enhancing interpretability. Comparative evaluations demonstrate the framework’s superior predictive accuracy and transparency over traditional single machine learning models, including Support Vector Machines (SVM) and XGBoost in Matlab 2024b and Python 3.10. By capturing occupancy-driven energy behaviours and accounting for inherent uncertainties, this research provides actionable insights for adaptive energy management. The proposed SIT hybrid model can contribute to sustainable and resilient smart energy systems, paving the way for efficient building energy management strategies.
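A rough sketch of two steps named above, clustering readings into occupancy-driven profiles and checking which appliances carry predictive power, using scikit-learn; the appliance names and synthetic readings are placeholders, and permutation importance stands in for the paper's drop-in accuracy analysis.

```python
# K-means usage profiles plus a Random Forest appliance-importance check.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(9)
appliances = ["fridge", "washer", "kettle", "tv", "heater"]
X = rng.gamma(2.0, 50.0, size=(3000, len(appliances)))   # toy watt-level readings

# Cluster time windows into occupancy-driven usage profiles.
profiles = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Which appliances best predict the profile a window belongs to?
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, profiles)
imp = permutation_importance(rf, X, profiles, n_repeats=5, random_state=0)
for name, score in sorted(zip(appliances, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```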
18 pages, 576 KB  
Review
Autism Data Classification Using AI Algorithms with Rules: Focused Review
by Abdulhamid Alsbakhi, Fadi Thabtah and Joan Lu
Bioengineering 2025, 12(2), 160; https://doi.org/10.3390/bioengineering12020160 - 7 Feb 2025
Cited by 4 | Viewed by 3647
Abstract
Autism Spectrum Disorder (ASD) presents challenges in early screening due to its varied nature and sophisticated early signs. From a machine-learning (ML) perspective, the primary challenges include the need for large, diverse datasets, managing the variability in ASD symptoms, providing easy-to-understand models, and ensuring ASD predictive models can be employed across different populations. Interpretable or explainable classification algorithms, such as rule-based or decision tree methods, play a crucial role in addressing some of these issues by offering classification models that clinicians can exploit. These models offer transparency in decision-making, allowing clinicians to understand the reasons behind diagnostic decisions, which is critical for trust and adoption in medical settings. In addition, interpretable classification algorithms facilitate the identification of important behavioural features and patterns associated with ASD, enabling more accurate and explainable diagnoses. However, there is a scarcity of review papers focusing on interpretable classifiers for ASD detection from a behavioural perspective. This research therefore conducts a review of recent rule-based classification studies, consolidating current work, identifying gaps, and guiding future studies. The review enhances understanding of these techniques by examining the data used to generate the models and the performance obtained, with the aim of highlighting routes to early detection and intervention for ASD. Integrating advanced AI methods like deep learning with rule-based classifiers can improve model interpretability, exploration, and accuracy in ASD-detection applications: deep learning can efficiently detect relevant features, while rule-based classifiers provide clinicians with transparent explanations for model decisions. Such a hybrid approach is critical in clinical applications like ASD, where model transparency is as crucial as high classification accuracy.
(This article belongs to the Section Biosignal Processing)
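To show what an interpretable, rule-style classifier of the kind this review focuses on looks like in practice, the sketch below trains a shallow decision tree on invented questionnaire-style items and prints its branches as readable if-then rules; the item names, data, and labels are placeholders, not a validated screening instrument.

```python
# Shallow decision tree whose branches read as transparent if-then rules.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(10)
n = 600
items = ["social_cue_score", "routine_rigidity", "attention_to_detail"]
X = rng.integers(0, 4, size=(n, len(items)))     # toy questionnaire item scores
y = (X[:, 0] + X[:, 1] >= 5).astype(int)         # toy screening label

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
# The extracted rules are the model: a clinician can inspect them directly.
print(export_text(tree, feature_names=items))
```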
22 pages, 872 KB  
Article
The Walk of Guilt: Multimodal Deception Detection from Nonverbal Motion Behaviour
by Sharifa Alghowinem, Sabrina Caldwell, Ibrahim Radwan, Michael Wagner and Tom Gedeon
Information 2025, 16(1), 6; https://doi.org/10.3390/info16010006 - 26 Dec 2024
Viewed by 2173
Abstract
Detecting deceptive behaviour for surveillance and border protection is critical for a country’s security. With advances in sensor technology and artificial intelligence, deceptive behaviour could be recognised automatically. Following the success of affective computing in emotion recognition from verbal and nonverbal cues, we aim to apply a similar concept to deception detection. Recognising deceptive behaviour has been attempted before; however, only a few studies have analysed this behaviour from gait and body movement. This research takes a multimodal approach to deception detection from gait, fusing features extracted from body movement behaviours in a video signal, acoustic features of walking steps from an audio signal, and the dynamics of walking movement from an accelerometer sensor. Using the video recordings of walking from the Whodunnit deception dataset, which contains 49 subjects performing scenarios that elicit deceptive behaviour, we conduct multimodal two-category (guilty/not guilty) subject-independent classification. The classification results reached an accuracy of up to 88% through feature fusion, with an average of 60% across both single and multimodal signals. Analysing body movement with single modalities showed that the visual signal had the highest performance, followed by the accelerometer and acoustic signals. Several fusion techniques were explored, including early, late, and hybrid fusion; hybrid fusion not only achieved the highest classification results but also increased the confidence of the results. Moreover, using a systematic framework for selecting the most distinguishing features of guilty gait behaviour, we were able to interpret the performance of our models. From these baseline results, we conclude that pattern recognition techniques can help characterise deceptive behaviour; future work will focus on tuning and enhancing the results and techniques.
(This article belongs to the Special Issue Multimodal Human-Computer Interaction)
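A minimal sketch contrasting early and late fusion as discussed above, with random stand-ins for the video, audio, and accelerometer feature vectors; the classifier choice (SVM), feature dimensions, and labels are assumptions rather than the study's actual pipeline, and hybrid fusion is omitted for brevity.

```python
# Early fusion (concatenate features) vs. late fusion (average per-modality scores).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(11)
n = 300
video = rng.normal(size=(n, 40))
audio = rng.normal(size=(n, 20))
accel = rng.normal(size=(n, 12))
y = rng.integers(0, 2, n)                 # guilty / not guilty (toy labels)

tr, te = train_test_split(np.arange(n), random_state=0)

# Early fusion: concatenate all features, train one classifier.
X_early = np.hstack([video, audio, accel])
early = SVC(probability=True).fit(X_early[tr], y[tr])
print("early fusion acc:", accuracy_score(y[te], early.predict(X_early[te])))

# Late fusion: one classifier per modality, average their class probabilities.
probs = []
for X in (video, audio, accel):
    clf = SVC(probability=True).fit(X[tr], y[tr])
    probs.append(clf.predict_proba(X[te])[:, 1])
late_pred = (np.mean(probs, axis=0) > 0.5).astype(int)
print("late fusion acc:", accuracy_score(y[te], late_pred))
```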