
Search Results (21)

Search Parameters:
Keywords = individual sample learning entropy

42 pages, 3358 KB  
Article
Adaptive Event-Driven Labeling: Multi-Scale Causal Framework with Meta-Learning for Financial Time Series
by Amine Kili, Brahim Raouyane, Mohamed Rachdi and Mostafa Bellafkih
Appl. Sci. 2025, 15(24), 13204; https://doi.org/10.3390/app152413204 - 17 Dec 2025
Viewed by 468
Abstract
Financial time-series labeling remains fundamentally limited by three critical deficiencies: temporal rigidity (fixed horizons regardless of market conditions), scale blindness (single-resolution analysis), and correlation-causation conflation. These limitations cause systematic failure during regime shifts. We introduce Adaptive Event-Driven Labeling (AEDL), integrating three core innovations: (1) multi-scale temporal analysis capturing hierarchical market patterns across five time resolutions, (2) causal inference using Granger causality and transfer entropy to filter spurious correlations, and (3) model-agnostic meta-learning (MAML) for adaptive parameter optimization. The framework outputs calibrated probability distributions enabling uncertainty-aware trading strategies. Evaluation on 16 assets spanning 25 years (2000–2025) with rigorous out-of-sample validation demonstrates substantial improvements: AEDL achieves an average Sharpe ratio of 0.48 (across all models and assets), while the baseline methods average near-zero or negative Sharpe ratios (Fixed Horizon: −0.29, Triple Barrier: −0.03, Trend Scanning: 0.00). Systematic ablation experiments on a 12-asset subset reveal that selective innovation deployment outperforms both minimal baselines and maximal integration: removing causal inference improves performance to a 0.65 Sharpe ratio while maintaining full asset coverage (12/12), whereas adding attention mechanisms reduces applicability to 2/12 assets due to compound filtering effects. These findings demonstrate that judicious component selection outperforms kitchen-sink approaches, with peak individual-asset performance exceeding a 3.0 Sharpe ratio. Wilcoxon tests confirm statistically significant improvements over the Fixed Horizon baseline (p = 0.0024). Full article
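The causal-filtering stage above pairs Granger causality with transfer entropy. As a rough stdlib illustration of the latter (a hypothetical history-1 discrete estimator over symbolized series, not the paper's implementation), transfer entropy can be computed directly from empirical counts:

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """TE(X -> Y) for discrete series with history length 1 (bits):
    TE = sum p(y1, y0, x0) * log2[ p(y1 | y0, x0) / p(y1 | y0) ]."""
    triples = list(zip(y[1:], y[:-1], x[:-1]))     # (y_{t+1}, y_t, x_t)
    n = len(triples)
    c_xyz = Counter(triples)
    c_yz = Counter((y0, x0) for _, y0, x0 in triples)
    c_yy = Counter((y1, y0) for y1, y0, _ in triples)
    c_y = Counter(y0 for _, y0, _ in triples)
    te = 0.0
    for (y1, y0, x0), c in c_xyz.items():
        p_joint = c / n
        p_cond_full = c / c_yz[(y0, x0)]           # p(y1 | y0, x0)
        p_cond_self = c_yy[(y1, y0)] / c_y[y0]     # p(y1 | y0)
        te += p_joint * log2(p_cond_full / p_cond_self)
    return te

# Toy binarized "returns": x drives y with a one-step lag.
x = [0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0]
y = [0] + x[:-1]                  # y copies x with one-step delay
te_xy = transfer_entropy(x, y)    # large: x fully predicts y
te_yx = transfer_entropy(y, x)    # much smaller: y adds little about x
```

Because y is a delayed copy of x, TE(X→Y) is far larger than TE(Y→X), which is exactly the asymmetry a causal filter would exploit to discard spurious correlations.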
(This article belongs to the Section Computing and Artificial Intelligence)

29 pages, 2068 KB  
Article
Voice-Based Early Diagnosis of Parkinson’s Disease Using Spectrogram Features and AI Models
by Danish Quamar, V. D. Ambeth Kumar, Muhammad Rizwan, Ovidiu Bagdasar and Manuella Kadar
Bioengineering 2025, 12(10), 1052; https://doi.org/10.3390/bioengineering12101052 - 29 Sep 2025
Viewed by 2469
Abstract
Parkinson’s disease (PD) is a progressive neurodegenerative disorder that significantly affects motor functions, including speech production. Voice analysis offers a less invasive, faster, and more cost-effective approach for diagnosing and monitoring PD over time. This research introduces an automated system to distinguish between PD and non-PD individuals based on speech signals using state-of-the-art signal processing and machine learning (ML) methods. A publicly available voice dataset (Dataset 1, 81 samples) containing speech recordings from PD patients and non-PD individuals was used for model training and evaluation. Additionally, a small supplementary dataset (Dataset 2, 15 samples) was created, although excluded from the experiments, to illustrate potential future extensions of this work. Features such as Mel-frequency cepstral coefficients (MFCCs), spectrograms, Mel spectrograms, and waveform representations were extracted to capture key vocal impairments related to PD, including diminished vocal range, weak harmonics, elevated spectral entropy, and impaired formant structures. These extracted features were used to train and evaluate several ML models, including support vector machine (SVM), XGBoost, and logistic regression, as well as deep learning (DL) architectures such as deep neural networks (DNN), convolutional neural networks (CNN) combined with long short-term memory (LSTM), CNN + gated recurrent unit (GRU), and bidirectional LSTM (BiLSTM). Experimental results show that DL models, particularly BiLSTM, outperform traditional ML models, achieving 97% accuracy and an AUC of 0.95. The comprehensive feature extraction from both datasets enabled robust classification of PD and non-PD speech signals. These findings highlight the potential of integrating acoustic features with DL methods for early diagnosis and monitoring of Parkinson’s disease. Full article
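One of the vocal markers cited above, elevated spectral entropy, is easy to illustrate. A minimal stdlib sketch (naive O(n²) DFT over a short synthetic frame; not the authors' feature pipeline) computes the Shannon entropy of the normalized power spectrum:

```python
import cmath
import math

def spectral_entropy(signal):
    """Shannon entropy (bits) of the normalized power spectrum.
    A tonal signal concentrates power in few bins (low entropy); a
    noise-like voice spreads power across bins (high entropy)."""
    n = len(signal)
    power = []
    for k in range(1, n // 2):                 # skip DC, positive freqs only
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        power.append(abs(s) ** 2)
    total = sum(power)
    probs = [p / total for p in power if p > 0]
    return -sum(p * math.log2(p) for p in probs)

n = 128
# pure tone at DFT bin 8 vs. a mixture of three tones (broader spectrum)
tone = [math.sin(2 * math.pi * 8 * t / n) for t in range(n)]
mixed = [math.sin(2 * math.pi * 8 * t / n)
         + 0.8 * math.sin(2 * math.pi * 23 * t / n)
         + 0.6 * math.sin(2 * math.pi * 41 * t / n) for t in range(n)]
h_tone = spectral_entropy(tone)
h_mixed = spectral_entropy(mixed)
```

The mixture's entropy is markedly higher than the pure tone's, mirroring how impaired voices with weaker harmonic structure show elevated spectral entropy.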

29 pages, 12228 KB  
Article
Conditional Domain Adaptation with α-Rényi Entropy Regularization and Noise-Aware Label Weighting
by Diego Armando Pérez-Rosero, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Mathematics 2025, 13(16), 2602; https://doi.org/10.3390/math13162602 - 14 Aug 2025
Viewed by 1532
Abstract
Domain adaptation is a key approach to ensure that artificial intelligence models maintain reliable performance when facing distributional shifts between training (source) and testing (target) domains. However, existing methods often struggle to simultaneously preserve domain-invariant representations and discriminative class structures, particularly in the presence of complex covariate shifts and noisy pseudo-labels in the target domain. In this work, we introduce Conditional Rényi α-Entropy Domain Adaptation, named CREDA, a novel deep learning framework for domain adaptation that integrates kernel-based conditional alignment with a differentiable, matrix-based formulation of Rényi’s quadratic entropy. The proposed method comprises three main components: (i) a deep feature extractor that learns domain-invariant representations from labeled source and unlabeled target data; (ii) an entropy-weighted approach that down-weights low-confidence pseudo-labels, enhancing stability in uncertain regions; and (iii) a class-conditional alignment loss, formulated as a Rényi-based entropy kernel estimator, that enforces semantic consistency in the latent space. We validate CREDA on standard benchmark datasets for image classification, including Digits, ImageCLEF-DA, and Office-31, showing competitive performance against both classical and deep learning-based approaches. Furthermore, we employ nonlinear dimensionality reduction and class activation map visualizations to provide interpretability, revealing meaningful alignment in feature space and offering insights into the relevance of individual samples and attributes. Experimental results confirm that CREDA improves cross-domain generalization while promoting accuracy, robustness, and interpretability. Full article
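The entropy term in CREDA is a matrix-based formulation of Rényi's quadratic entropy. A plain-Python sketch of the underlying quantity, assuming the common matrix-based estimator (RBF Gram matrix normalized by its trace, then S₂(A) = −log₂ tr(A²)) rather than the paper's exact differentiable loss:

```python
import math

def rbf(u, v, sigma=1.0):
    """Gaussian (RBF) kernel between two feature vectors."""
    d2 = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-d2 / (2 * sigma ** 2))

def renyi2_entropy(samples, sigma=1.0):
    """Matrix-based Rényi quadratic entropy (bits):
    A = K / trace(K), S_2(A) = -log2 trace(A^2), with trace(A^2)
    equal to the sum of A_ij^2 since A is symmetric."""
    n = len(samples)
    K = [[rbf(samples[i], samples[j], sigma) for j in range(n)]
         for i in range(n)]
    tr = sum(K[i][i] for i in range(n))        # equals n for RBF kernels
    A = [[K[i][j] / tr for j in range(n)] for i in range(n)]
    return -math.log2(sum(A[i][j] ** 2
                          for i in range(n) for j in range(n)))

tight = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]   # one cluster
spread = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0), (5.0, 5.0)]  # well separated
h_tight = renyi2_entropy(tight)
h_spread = renyi2_entropy(spread)
```

Four well-separated points give the maximal value log₂ 4 = 2 bits, while a tight cluster gives near-zero entropy, which is the behavior the class-conditional alignment loss builds on.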

27 pages, 4682 KB  
Article
DERIENet: A Deep Ensemble Learning Approach for High-Performance Detection of Jute Leaf Diseases
by Mst. Tanbin Yasmin Tanny, Tangina Sultana, Md. Emran Biswas, Chanchol Kumar Modok, Arjina Akter, Mohammad Shorif Uddin and Md. Delowar Hossain
Information 2025, 16(8), 638; https://doi.org/10.3390/info16080638 - 27 Jul 2025
Cited by 1 | Viewed by 1010
Abstract
Jute, a vital lignocellulosic fiber crop with substantial industrial and ecological relevance, continues to suffer considerable yield and quality degradation due to pervasive foliar pathologies. Traditional diagnostic modalities reliant on manual field inspections are inherently constrained by subjectivity, diagnostic latency, and inadequate scalability across geographically distributed agrarian systems. To transcend these limitations, we propose DERIENet, a robust and scalable classification approach within a deep ensemble learning framework. It is meticulously engineered by integrating three high-performing convolutional neural networks—ResNet50, InceptionV3, and EfficientNetB0—along with regularization, batch normalization, and dropout strategies, to accurately classify jute leaf images into categories such as Cercospora Leaf Spot, Golden Mosaic Virus, and healthy leaves. A key methodological contribution is the design of a novel augmentation pipeline, termed Geometric Localized Occlusion and Adaptive Rescaling (GLOAR), which dynamically modulates photometric and geometric distortions based on image entropy and luminance to synthetically upscale a limited dataset (920 images) into a significantly enriched and diverse dataset of 7800 samples, thereby mitigating overfitting and enhancing domain generalizability. Empirical evaluation, utilizing a comprehensive set of performance metrics—accuracy, precision, recall, F1-score, confusion matrices, and ROC curves—demonstrates that DERIENet achieves a state-of-the-art classification accuracy of 99.89%, with macro-averaged and weighted-average precision, recall, and F1-score uniformly at 99.89%, and an AUC of 1.0 across all disease categories. The reliability of the model is validated by the confusion matrix, which shows that 899 out of 900 test images were correctly identified, with only one misclassification. Comparative evaluations against various ensemble baselines, such as DenseNet201, MobileNetV2, and VGG16, and against the individual base learners demonstrate that DERIENet performs noticeably better than all baseline models. It provides a highly interpretable, deployment-ready, and computationally efficient architecture that is ideal for integration into edge or mobile platforms to facilitate in situ, real-time disease diagnostics in precision agriculture. Full article

19 pages, 1039 KB  
Article
Prediction of Parkinson Disease Using Long-Term, Short-Term Acoustic Features Based on Machine Learning
by Mehdi Rashidi, Serena Arima, Andrea Claudio Stetco, Chiara Coppola, Debora Musarò, Marco Greco, Marina Damato, Filomena My, Angela Lupo, Marta Lorenzo, Antonio Danieli, Giuseppe Maruccio, Alberto Argentiero, Andrea Buccoliero, Marcello Dorian Donzella and Michele Maffia
Brain Sci. 2025, 15(7), 739; https://doi.org/10.3390/brainsci15070739 - 10 Jul 2025
Cited by 2 | Viewed by 1916
Abstract
Background: Parkinson’s disease (PD) is the second most common neurodegenerative disorder after Alzheimer’s disease, affecting countless individuals worldwide. PD is characterized by the onset of marked motor symptomatology in association with several non-motor manifestations. The clinical phase of the disease is usually preceded by a long prodromal phase, devoid of overt motor symptomatology but often showing conditions such as sleep disturbance, constipation, anosmia, and phonatory changes. To date, speech analysis appears to be a promising digital biomarker that can anticipate the onset of clinical PD by as much as 10 years, as well as serving as a useful prognostic tool for patient follow-up. Voice analysis is therefore a candidate non-invasive method for distinguishing PD patients from healthy subjects (HS). Methods: We conducted a cross-sectional study to analyze voice impairment. A dataset comprising 81 voice samples (41 from healthy individuals and 40 from PD patients) was utilized to train and evaluate common machine learning (ML) models using various types of features, including long-term features (jitter, shimmer, and cepstral peak prominence (CPP)), short-term features (Mel-frequency cepstral coefficients (MFCCs)), and non-standard measurements (pitch period entropy (PPE) and recurrence period density entropy (RPDE)). The study adopted multiple ML algorithms, including random forest (RF), K-nearest neighbors (KNN), decision tree (DT), naïve Bayes (NB), support vector machines (SVM), and logistic regression (LR). A cross-validation technique was applied to ensure the reliability of performance metrics on the train and test subsets. These metrics (accuracy, recall, and precision) help determine the most effective models for distinguishing PD from healthy subjects. Results: Among all the algorithms used in this research, random forest (RF) was the best-performing model, achieving an accuracy of 82.72% with a ROC-AUC score of 89.65%. Although other models, such as support vector machine (SVM), could be considered, with an accuracy of 75.29% and a ROC-AUC score of 82.63%, RF was by far the best when evaluated across all metrics. The K-nearest neighbors (KNN) and decision tree (DT) models performed the worst. Notably, by combining a comprehensive set of long-term, short-term, and non-standard acoustic features, unlike previous studies that typically focused on only a subset, our study achieved higher predictive performance, offering a more robust model for early PD detection. Conclusions: This study highlights the potential of combining advanced acoustic analysis with ML algorithms to develop non-invasive and reliable tools for early PD detection, offering substantial benefits for the healthcare sector. Full article
(This article belongs to the Section Neurodegenerative Diseases)

19 pages, 3196 KB  
Article
Detection and Severity Classification of Sleep Apnea Using Continuous Wearable SpO2 Signals: A Multi-Scale Feature Approach
by Nhung H. Hoang and Zilu Liang
Sensors 2025, 25(6), 1698; https://doi.org/10.3390/s25061698 - 9 Mar 2025
Cited by 3 | Viewed by 5387
Abstract
The use of wearable devices for sleep apnea detection is growing, but their limited signal resolution poses challenges for accurate diagnosis. This study explores the feasibility of using SpO2 signals from wearable sensors for detecting sleep apnea and classifying its severity. We propose a novel multi-scale feature engineering approach, which extracts features from coarsely grained SpO2 signals across timescales ranging from 1 s to 600 s. Our results show that traditional SpO2 markers, such as the oxygen desaturation index (ODI) and Lempel–Ziv complexity, lose their relevance with the Apnea–Hypopnea Index (AHI) at longer timescales. In contrast, non-linear features like complex entropy, sample entropy, and fuzzy entropy maintain strong correlations with AHI, even at the coarsest timescales (up to 600 s), making them well suited for low-resolution data. Multi-scale feature extraction improves model performance across various machine learning algorithms by alleviating model bias, particularly with the Bayes and CatBoost models. These findings highlight the potential of multi-scale feature engineering for wearable device applications where only low-resolution data are commonly available. This could improve accessibility to low-cost, at-home sleep apnea screening, reducing reliance on expensive and labor-intensive polysomnography. Moreover, it would allow even healthy individuals to proactively monitor their sleep health at home, facilitating the early identification of potential sleep problems. Full article
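Multi-scale entropy analysis of the kind described above coarse-grains the series before computing an entropy feature. A stdlib sketch (illustrative parameters; the mock 1 Hz SpO2 trace is synthetic, not study data) of coarse-graining followed by sample entropy:

```python
import math
import random

def coarse_grain(signal, scale):
    """Average non-overlapping windows of length `scale` (multi-scale step)."""
    return [sum(signal[i:i + scale]) / scale
            for i in range(0, len(signal) - scale + 1, scale)]

def sample_entropy(signal, m=2, r=0.1):
    """SampEn(m, r) = -ln(A / B): B counts template matches of length m,
    A counts matches of length m+1, under the Chebyshev distance r."""
    def count_matches(mm):
        templates = [signal[i:i + mm] for i in range(len(signal) - mm + 1)]
        c = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b)
                       for a, b in zip(templates[i], templates[j])) <= r:
                    c += 1
        return c
    b, a = count_matches(m), count_matches(m + 1)
    return math.inf if a == 0 or b == 0 else -math.log(a / b)

random.seed(0)
spo2 = [96 + random.gauss(0, 0.5) for _ in range(600)]   # mock 1 Hz trace
r = 0.2 * 0.5            # tolerance fixed from the raw signal's std
se_raw = sample_entropy(spo2, r=r)
se_coarse = sample_entropy(coarse_grain(spo2, 10), r=r)  # 10 s timescale
```

Keeping the tolerance r fixed while coarse-graining is the usual multi-scale convention, so entropy values across timescales remain comparable.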
(This article belongs to the Special Issue Wearable Sensors for Continuous Health Monitoring and Analysis)

21 pages, 14179 KB  
Article
Surface Electromyography Monitoring of Muscle Changes in Male Basketball Players During Isotonic Training
by Ziyang Li, Bowen Zhang, Hong Wang and Mohamed Amin Gouda
Sensors 2025, 25(5), 1355; https://doi.org/10.3390/s25051355 - 22 Feb 2025
Viewed by 1465
Abstract
Physiological indicators are increasingly employed in sports training. However, studies on surface electromyography (sEMG) primarily focus on the analysis of isometric contraction. Research on sEMG related to isotonic contraction, which is more relevant to athletic performance, remains relatively limited. This paper examines the changes in the isotonic contraction performance of the male upper arm muscles resulting from long-term basketball training using sEMG metrics. We recruited basketball physical education (B-PE) and non-PE majors to conduct a controlled isotonic contraction experiment to collect and analyze sEMG signals. A sample-entropy-based event detection method was utilized to extract epochs of active data segments. Subsequently, statistical analysis methods were applied to extract the key sEMG time-domain (TD) and frequency-domain (FD) features of isotonic contraction that can differentiate between professional and amateur athletes. Machine learning methods were employed to perform ten-fold cross-validation and repeated experiments to verify the effectiveness of the features across the different groups. This paper investigates the key features and channels of interest for categorizing male participants from non-PE and B-PE backgrounds. The experimental results show that the F12B feature group consistently achieved an accuracy of between 80% and 90% with the SVM2 model, balancing both accuracy and efficiency, and these features can serve as evaluation indices for the isotonic contraction performance of upper limb muscles during basketball training. This has practical significance for monitoring isotonic sEMG features in sports and training, as well as for providing individualized training regimens. Full article
(This article belongs to the Special Issue Biomedical Imaging, Sensing and Signal Processing)

24 pages, 3261 KB  
Article
A Video-Based Cognitive Emotion Recognition Method Using an Active Learning Algorithm Based on Complexity and Uncertainty
by Hongduo Wu, Dong Zhou, Ziyue Guo, Zicheng Song, Yu Li, Xingzheng Wei and Qidi Zhou
Appl. Sci. 2025, 15(1), 462; https://doi.org/10.3390/app15010462 - 6 Jan 2025
Cited by 1 | Viewed by 4256
Abstract
The cognitive emotions of individuals during tasks largely determine the success or failure of tasks in various fields such as the military, medical, and industrial fields. Facial video data can carry more emotional information than static images because emotional expression is a temporal process. Video-based Facial Expression Recognition (FER) has received increasing attention from relevant scholars in recent years. However, due to the high cost of marking and training video samples, feature extraction is inefficient and ineffective, which leads to low accuracy and poor real-time performance. In this paper, a cognitive emotion recognition method based on video data is proposed, in which 49 emotion description points were initially defined, and the spatial–temporal features of cognitive emotions were extracted from the video data through a feature extraction method that combines geodesic distances and sample entropy. Then, an active learning algorithm based on complexity and uncertainty was proposed to automatically select the most valuable samples, thereby reducing the cost of sample labeling and model training. Finally, the effectiveness, superiority, and real-time performance of the proposed method were verified using the MMI Facial Expression Database and data collected in real time. Through comparisons and testing, the proposed method showed satisfactory real-time performance and higher accuracy, which can effectively support the development of a real-time monitoring system for cognitive emotions. Full article
(This article belongs to the Special Issue Advanced Technologies and Applications of Emotion Recognition)

16 pages, 673 KB  
Article
Data-Driven Identification of Stroke through Machine Learning Applied to Complexity Metrics in Multimodal Electromyography and Kinematics
by Francesco Romano, Damiano Formenti, Daniela Cardone, Emanuele Francesco Russo, Paolo Castiglioni, Giampiero Merati, Arcangelo Merla and David Perpetuini
Entropy 2024, 26(7), 578; https://doi.org/10.3390/e26070578 - 7 Jul 2024
Cited by 4 | Viewed by 2485
Abstract
A stroke represents a significant medical condition characterized by the sudden interruption of blood flow to the brain, leading to cellular damage or death. The impact of stroke on individuals can vary from mild impairments to severe disability. Treatment for stroke often focuses on gait rehabilitation. Notably, assessing muscle activation and kinematics patterns using electromyography (EMG) and stereophotogrammetry, respectively, during walking can provide information regarding pathological gait conditions. The concurrent measurement of EMG and kinematics can help in understanding dysfunction in the contribution of specific muscles to different phases of gait. To this aim, complexity metrics (e.g., sample entropy, approximate entropy, spectral entropy) applied to EMG and kinematics have been demonstrated to be effective in identifying abnormal conditions. Moreover, the conditional entropy between EMG and kinematics can identify the relationship between gait data and muscle activation patterns. This study aims to utilize several machine learning classifiers to distinguish individuals with stroke from healthy controls based on kinematics and EMG complexity measures. The cubic support vector machine applied to EMG metrics delivered the best classification results, reaching 99.85% accuracy. This method could assist clinicians in monitoring the recovery of motor impairments in stroke patients. Full article
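For discretized signals, the conditional entropy between EMG and kinematics mentioned above reduces to H(Y|X) = H(X,Y) − H(X). A toy stdlib sketch (hypothetical gait-phase and activation-level symbols, not study data):

```python
from collections import Counter
from math import log2

def entropy(symbols):
    """Shannon entropy (bits) of a discrete symbol sequence."""
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in Counter(symbols).values())

def conditional_entropy(y, x):
    """H(Y | X) = H(X, Y) - H(X), in bits, for paired discrete signals."""
    return entropy(list(zip(x, y))) - entropy(x)

# toy discretized gait phase (x) and muscle activation level (y)
x = [0, 0, 1, 1, 2, 2, 0, 0, 1, 1, 2, 2]
y_coupled = [1, 1, 0, 0, 2, 2, 1, 1, 0, 0, 2, 2]    # fully determined by x
y_uncoupled = [0, 1, 2, 0, 1, 2, 1, 0, 2, 2, 0, 1]  # unrelated to x
h_coupled = conditional_entropy(y_coupled, x)        # ~0 bits
h_uncoupled = conditional_entropy(y_uncoupled, x)    # close to H(Y)
```

When activation is fully determined by gait phase the conditional entropy vanishes; when the coupling is lost it rises towards H(Y), which is the contrast such features exploit to flag pathological gait.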
(This article belongs to the Special Issue Entropy and Information in Biological Systems)

22 pages, 1728 KB  
Article
Ensemble Transductive Propagation Network for Semi-Supervised Few-Shot Learning
by Xueling Pan, Guohe Li and Yifeng Zheng
Entropy 2024, 26(2), 135; https://doi.org/10.3390/e26020135 - 31 Jan 2024
Cited by 6 | Viewed by 2304
Abstract
Few-shot learning aims to solve the difficulty in obtaining training samples, which leads to high variance, high bias, and over-fitting. Recently, graph-based transductive few-shot learning approaches, which supplement the deficiency of label information via unlabeled data to make a joint prediction, have become a new research hotspot. Therefore, in this paper, we propose a novel ensemble semi-supervised few-shot learning strategy via transductive network and Dempster–Shafer (D-S) evidence fusion, named ensemble transductive propagation networks (ETPN). First, we present homogeneity and heterogeneity ensemble transductive propagation networks to better use the unlabeled data, which introduce a preset weight coefficient and provide the process of iterative inferences during transductive propagation learning. Second, we combine information entropy to improve the D-S evidence fusion method, which improves the stability of multi-model result fusion through pre-processing of the evidence sources. Third, we combine the L2 norm to improve an ensemble pruning approach to select individual learners with higher accuracy to participate in the integration of the few-shot model results. Moreover, interference sets are introduced into semi-supervised training to improve the anti-disturbance ability of the model. Finally, experiments indicate that the proposed approaches outperform state-of-the-art few-shot models. The best accuracy of ETPN increases by 0.3% and 0.28% in the 5-way 5-shot setting, and by 3.43% and 7.6% in the 5-way 1-shot setting, on miniImageNet and tieredImageNet, respectively. Full article
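ETPN extends Dempster–Shafer evidence fusion with entropy-based pre-processing of the evidence sources. The classical combination rule it builds on (shown here without the paper's entropy weighting, over a hypothetical two-class frame) can be sketched as:

```python
def ds_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose
    focal elements are frozensets; conflicting mass is renormalized."""
    combined, conflict = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc          # empty intersection: conflict
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

A, B = frozenset({"A"}), frozenset({"B"})
AB = A | B                                   # the whole frame (ignorance)
# two base learners, both leaning towards class A with residual ignorance
m1 = {A: 0.6, B: 0.1, AB: 0.3}
m2 = {A: 0.5, B: 0.2, AB: 0.3}
fused = ds_combine(m1, m2)
```

Fusing two sources that agree sharpens the belief: the combined mass on class A exceeds either source's individual mass, while the ignorance mass shrinks.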

16 pages, 2756 KB  
Protocol
Methodology and Experimental Protocol for Studying Learning and Motor Control in Neuromuscular Structures in Pilates
by Mário José Pereira, Alexandra André, Mário Monteiro, Maria António Castro, Rui Mendes, Fernando Martins, Ricardo Gomes, Vasco Vaz and Gonçalo Dias
Healthcare 2024, 12(2), 229; https://doi.org/10.3390/healthcare12020229 - 17 Jan 2024
Cited by 3 | Viewed by 3517
Abstract
The benefits of Pilates have been extensively researched for their impact on muscular, psychological, and cardiac health, as well as body composition, among other aspects. This study aims to investigate the influence of the Pilates method on the learning process, motor control, and neuromuscular trunk stabilization, specifically in both experienced and inexperienced practitioners. This semi-randomized controlled trial compares the level of experience among 36 Pilates practitioners in terms of motor control and learning of two Pilates-based skills: standing plank and side crisscross. Data will be collected using various assessment methods, including abdominal wall muscle ultrasound (AWMUS), shear wave elastography (SWE), gaze behavior (GA) assessment, electroencephalography (EEG), and video motion. Significant intra- and inter-individual variations are expected due to the diverse morphological and psychomotor profiles in the sample. The adoption of both linear and non-linear analyses will provide a comprehensive evaluation of how neuromuscular structures evolve over time and space, offering both quantitative and qualitative insights. Non-linear analysis is expected to reveal higher entropy in the expert group compared to non-experts, signifying greater complexity in their motor control. In terms of stability, experts are likely to exhibit higher Lyapunov exponent values, indicating enhanced stability and coordination, along with lower Hurst exponent values. In elastography, experienced practitioners are expected to display higher transversus abdominis (TrA) muscle elasticity due to their proficiency. Concerning GA, non-experts are expected to demonstrate more saccades, focus on more Areas of Interest (AOIs), and show shorter fixation times, as experts are presumed to have more efficient gaze control. In EEG, we anticipate higher theta wave values in the non-expert group compared to the expert group. These expectations draw from similar studies in elastography and correlated research in eye tracking and EEG. They are consistent with the principles of the Pilates method and other scientific knowledge in related techniques. Full article

17 pages, 7121 KB  
Article
Dynamic Condition Adversarial Adaptation for Fault Diagnosis of Wind Turbine Gearbox
by Hongpeng Zhang, Xinran Wang, Cunyou Zhang, Wei Li, Jizhe Wang, Guobin Li and Chenzhao Bai
Sensors 2023, 23(23), 9368; https://doi.org/10.3390/s23239368 - 23 Nov 2023
Cited by 9 | Viewed by 2051
Abstract
While deep learning has found widespread utility in gearbox fault diagnosis, its direct application to wind turbine gearboxes encounters significant hurdles. Disparities in data distribution across a spectrum of operating conditions for wind turbines result in a marked decrease in diagnostic accuracy. In response, this study introduces a tailored dynamic conditional adversarial domain adaptation model for fault diagnosis in wind turbine gearboxes amidst cross-condition scenarios. The model adeptly adjusts the importance of aligning marginal and conditional distributions using distance metric factors. Information entropy parameters are also incorporated to assess individual sample transferability, prioritizing highly transferable samples during domain alignment. The amalgamation of these dynamic factors empowers the approach to maintain stability across varied data distributions. Comprehensive experiments on both gear and bearing data validate the method’s efficacy in cross-condition fault diagnosis. Comparative outcomes demonstrate that, when contrasted with four advanced transfer learning techniques, the dynamic conditional adversarial domain adaptation model attains superior accuracy and stability in multi-transfer tasks, making it notably suitable for diagnosing wind turbine gearbox faults. Full article
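Entropy-based transferability weighting of individual samples, as described above, is commonly implemented as w = 1 + exp(−H(p)) over a model's softmax output; the exact weighting in this paper may differ, so this is an assumed, standard form:

```python
from math import exp, log

def shannon_entropy(probs):
    """Shannon entropy (nats) of a probability vector."""
    return -sum(p * log(p) for p in probs if p > 0)

def transferability_weight(probs):
    """Entropy-based sample weight, one common choice in
    entropy-conditioned adversarial adaptation: w = 1 + exp(-H(p)).
    Confident predictions (low entropy) approach weight 2; uncertain
    ones decay towards 1, so they matter less during domain alignment."""
    return 1.0 + exp(-shannon_entropy(probs))

confident = [0.97, 0.01, 0.02]   # softmax output, near one-hot
uncertain = [0.34, 0.33, 0.33]   # close to uniform
w_hi = transferability_weight(confident)
w_lo = transferability_weight(uncertain)
```

Prioritizing highly transferable (confident) samples during alignment then amounts to multiplying each sample's adversarial loss by its weight.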
(This article belongs to the Special Issue AI-Assisted Condition Monitoring and Fault Diagnosis)

15 pages, 1683 KB  
Article
Romantic Transfer from Thermodynamic Theories to Personal Theories of Social Control: A Randomised Controlled Experiment
by Chen Chen, Si Chen, Helen Haste, Robert L. Selman and Matthew H. Schneps
Educ. Sci. 2023, 13(6), 599; https://doi.org/10.3390/educsci13060599 - 13 Jun 2023
Viewed by 1882
Abstract
The transfer of learning is arguably the most enduring goal of education. The history of science reveals that although numerous theories have been transferred from the natural sciences to the socio-political realm, educational practitioners have often deemed such transfers romantic and rhetorical. We conducted an experiment that randomly assigned a sample of 292 college freshmen in China to two groups to learn one of two thermodynamic theories: entropy or self-organization. We examined whether, without explicit instruction, the two groups would arrive at different implications about social (and government) control. We found that participants who learned the theory of entropy were more likely to believe the social system would become chaotic over time without external control; thus, they preferred tightened social control. In contrast, participants who learned self-organization theory were more likely to believe that order may form from within a social system; therefore, they downplayed external control and preferred stronger individual agency. Follow-up interviews showed that the participants' narratives about social control were largely consistent with the thermodynamic concepts they had learned. Our findings have critical implications for the recent trend in STEM education that promotes the teaching of cross-cutting concepts (seeking patterns from interdisciplinary ideas), which may implicitly prime students to borrow physical science theories to formulate personal social hypotheses and engage in moral–civic–political discourse. Full article
(This article belongs to the Section STEM Education)

19 pages, 13663 KB  
Article
Chained Deep Learning Using Generalized Cross-Entropy for Multiple Annotators Classification
by Jenniffer Carolina Triana-Martinez, Julian Gil-González, Jose A. Fernandez-Gallego, Andrés Marino Álvarez-Meza and Cesar German Castellanos-Dominguez
Sensors 2023, 23(7), 3518; https://doi.org/10.3390/s23073518 - 28 Mar 2023
Cited by 7 | Viewed by 2849
Abstract
Supervised learning requires the accurate labeling of instances, usually provided by an expert. Crowdsourcing platforms offer a practical and cost-effective alternative for large datasets when individual annotation is impractical. In addition, these platforms gather labels from multiple labelers. Still, traditional multiple-annotator methods must [...] Read more.
Supervised learning requires the accurate labeling of instances, usually provided by an expert. Crowdsourcing platforms offer a practical and cost-effective alternative for large datasets when individual annotation is impractical, gathering labels from multiple labelers. However, traditional multiple-annotator methods struggle to account for the labelers' varying levels of expertise and the noise introduced by unreliable outputs, resulting in decreased performance. Moreover, they assume that labelers behave homogeneously across the input feature space and impose independence constraints on their outputs. We propose a Generalized Cross-Entropy-based framework using Chained Deep Learning (GCECDL) to code each annotator's non-stationary patterns regarding the input space while preserving the inter-dependencies among experts through a chained deep learning approach. Experimental results devoted to multiple-annotator classification tasks on several well-known datasets demonstrate that our GCECDL can achieve robust predictive properties, outperforming state-of-the-art algorithms by combining the power of deep learning with a noise-robust loss function to deal with noisy labels. Moreover, network self-regularization is achieved by estimating each labeler's reliability within the chained approach. Lastly, visual inspection and relevance analysis experiments are conducted to reveal the non-stationary coding of our method. In a nutshell, GCECDL weights reliable labelers as a function of each input sample and achieves suitable discrimination performance with preserved interpretability regarding each annotator's trustworthiness estimation. Full article
(This article belongs to the Special Issue Deep Learning for Information Fusion and Pattern Recognition)
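The noise-robust loss at the core of the framework above is the standard generalized cross-entropy of Zhang and Sabuncu (2018); the chained multi-annotator architecture is not reproduced here. A minimal sketch of the loss alone, with a hypothetical `gce_loss` helper:

```python
import numpy as np

def gce_loss(probs, labels, q=0.7):
    """Generalized cross-entropy: L_q = (1 - p_y^q) / q.

    Interpolates between standard cross-entropy (q -> 0) and the
    noise-robust mean absolute error (q = 1), limiting the influence
    of confidently mislabeled samples.
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    p_y = probs[np.arange(len(labels)), labels]  # probability of the given label
    return np.mean((1.0 - p_y ** q) / q)

# A well-fit label (p = 0.9) incurs less loss than a likely-noisy one (p = 0.1),
# but the noisy one's loss is bounded, unlike -log(0.1) under plain cross-entropy
loss_clean = gce_loss([[0.9, 0.1]], [0], q=0.7)
loss_noisy = gce_loss([[0.1, 0.9]], [0], q=0.7)
```

Because L_q saturates as p_y approaches 0, gradients from badly mislabeled samples stay bounded, which is what makes the loss attractive when crowdsourced labels are unreliable.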

10 pages, 2639 KB  
Article
Individuals’ Behaviors of Cone Production in Longleaf Pine Trees
by Xiongwen Chen and John L. Willis
Forests 2023, 14(3), 494; https://doi.org/10.3390/f14030494 - 2 Mar 2023
Cited by 6 | Viewed by 2206
Abstract
The sporadic cone production of longleaf pine (Pinus palustris Mill.) challenges the restoration of the longleaf pine ecosystem. While much has been learned about longleaf pine cone production at the stand level, little information exists at the tree level regarding cone production [...] Read more.
The sporadic cone production of longleaf pine (Pinus palustris Mill.) challenges the restoration of the longleaf pine ecosystem. While much has been learned about longleaf pine cone production at the stand level, little information exists at the tree level regarding cone production and energy allocation strategy. This study analyzes cone production and diameter growth of approximately ten sampled longleaf pine trees at seven sites across the southeastern USA over the past twenty years. The results indicate that three-year cycles dominated the cone production dynamics, but longer cycles (four years and more) also occurred. The dynamics of entropy in cone production varied among trees. Taylor's law, which describes a power-law relationship between the variance and the mean, held for cone production in the majority of trees. Lagged cone production at one and two years was not autocorrelated among trees across sites. No significant relationships existed between tree diameter (or basal area) growth and cone production among trees across sites. This study provides new information on cone production at the individual tree level and narrows down the possible mechanisms. The results will be helpful in developing strategies for the management and modeling of longleaf pine cone production. Full article
(This article belongs to the Special Issue Longleaf Pine Ecology, Restoration, and Management)
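The Taylor's law test mentioned in the abstract amounts to a log-log regression of variance on mean across trees. A minimal sketch under that standard formulation (the paper's exact fitting procedure is not given here; `taylors_law_exponent` is a hypothetical helper):

```python
import numpy as np

def taylors_law_exponent(counts_per_tree):
    """Estimate the exponent b of Taylor's power law, variance = a * mean^b.

    Each row holds one tree's annual cone counts; b is the slope of
    log(variance) regressed on log(mean) across trees.
    """
    counts = np.asarray(counts_per_tree, dtype=float)
    means = counts.mean(axis=1)
    variances = counts.var(axis=1, ddof=1)  # sample variance per tree
    b, _log_a = np.polyfit(np.log(means), np.log(variances), 1)
    return b

# Scaled copies of one count series: variance grows as mean squared, so b = 2
trees = [m * np.array([1.0, 2.0, 3.0, 4.0, 6.0]) for m in (1, 2, 4, 8)]
b = taylors_law_exponent(trees)
```

An exponent near 1 indicates Poisson-like (random) cone production, while b near 2 indicates variability that scales with the square of the mean, as in strongly synchronized masting.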
