
Search Results (203)

Search Parameters:
Keywords = generalized set pair analysis

13 pages, 481 KB  
Article
A Conceptual Framework for a Morphological Scenario Library and Playbook Mapping in Cognitive Warfare Defense
by Dojin Ryu
J. Cybersecur. Priv. 2026, 6(2), 46; https://doi.org/10.3390/jcp6020046 - 3 Mar 2026
Abstract
Cognitive warfare is a hybrid threat that combines information manipulation with psychological influence, often amplified by digital platforms and synthetic media. Conventional cybersecurity tooling is optimized for technical intrusion and offers limited support for anticipating and responding to influence operations. This paper presents a conceptual framework that structures cognitive warfare threats with General Morphological Analysis (GMA) and links plausible configurations to indicator profiles and response playbooks. We first conduct a PRISMA-informed literature review (2018–2025) to derive a five-dimensional taxonomy (actor, tactic, medium, target, objective). We then apply cross-consistency assessment to remove implausible state-pair combinations and obtain a reduced library of internally consistent scenarios. To support analyst-guided triage, we outline an AI-enabled workflow that maps observable signals to taxonomy states, matches events to scenarios, and prioritizes responses via an auditable, policy-set risk score. Finally, we illustrate the framework on three publicly documented cases and show how each case maps to scenario vectors, indicators, and playbooks. No end-to-end system implementation or performance metrics are reported; the contribution is the structured scenario library and the traceable mapping from observations to response guidance. Full article
(This article belongs to the Special Issue Building Community of Good Practice in Cybersecurity)
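The morphological-box and cross-consistency step described in the abstract above can be sketched in a few lines: enumerate all taxonomy configurations, then discard any containing an analyst-excluded state pair. This is a minimal illustration with an invented two-state taxonomy and a hypothetical exclusion set; the paper's actual dimensions, states, and consistency judgments are not reproduced here.

```python
from itertools import product

# Toy five-dimensional taxonomy (hypothetical states, for illustration only)
taxonomy = {
    "actor":     ["state", "proxy"],
    "tactic":    ["disinformation", "impersonation"],
    "medium":    ["social_media", "synthetic_video"],
    "target":    ["public_opinion", "military"],
    "objective": ["destabilize", "deceive"],
}

# Hypothetical cross-consistency exclusions (state pairs judged implausible)
inconsistent_pairs = {
    ("proxy", "military"),                 # purely illustrative exclusion
    ("impersonation", "public_opinion"),   # purely illustrative exclusion
}

def is_consistent(config):
    """A configuration survives if none of its state pairs is excluded."""
    states = list(config)
    return all(
        (a, b) not in inconsistent_pairs and (b, a) not in inconsistent_pairs
        for i, a in enumerate(states) for b in states[i + 1:]
    )

all_configs = list(product(*taxonomy.values()))
scenario_library = [c for c in all_configs if is_consistent(c)]
```

With two states per dimension, the full morphological box has 32 configurations; each exclusion removes the 8 configurations containing that pair, leaving a reduced library of internally consistent scenarios.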

26 pages, 447 KB  
Article
Solvability of Multidimensional Integral Inclusion Systems via a Common Fixed Point Approach for 𝕄α-Admissible Multivalued Operators
by Pari Amiri
Axioms 2026, 15(3), 163; https://doi.org/10.3390/axioms15030163 - 26 Feb 2026
Viewed by 103
Abstract
Integral inclusion systems play a significant role in applied analysis and modeling, providing an effective framework for studying various physical, engineering, and dynamical processes. In this work, the solvability of a multidimensional integral inclusion system is investigated by applying the common fixed point technique to a pair of 𝕄α-admissible multivalued operators. The analysis is carried out within a novel double-controlled vector-valued metric structure, in which the distance is governed by two independent matrix-valued control operators; this setting strictly extends classical Perov-type and b-metric frameworks and offers a more flexible tool for treating multidimensional and interdependent systems. Existence results are derived under a suitable contractive condition within a generalized metric structure. Several auxiliary theorems are established to support the main conclusions. To illustrate the applicability of the theoretical findings, the obtained results are utilized to ensure the existence of solutions for a multidimensional Urysohn-type integral inclusion system. A simple example demonstrates the validity of the theoretical framework and highlights the effectiveness of the adopted approach. Full article
(This article belongs to the Section Mathematical Analysis)

23 pages, 6070 KB  
Article
Test–Retest Reliability and Validity of a Sums-of-Gaussians-Based Markerless Motion Capture System for Human Lower-Limb Gait Kinematics
by Yifei Shou, Chuang Gao, Chenbin Xi, Junqi Jia, Jiaojiao Lü, Yufei Fang, Chengte Lin and Zhiqiang Liang
Bioengineering 2026, 13(3), 271; https://doi.org/10.3390/bioengineering13030271 - 26 Feb 2026
Viewed by 115
Abstract
Background and aim: Traditional marker-based optical motion capture systems are costly, time-consuming to operate, and constrained by laboratory environments, limiting their broader adoption in clinical practice and naturalistic settings. Markerless motion capture based on a sums-of-Gaussians (SoG) body model is a potential alternative; however, its metrological properties for kinematic assessment during walking and slow running remain insufficiently validated. Using a conventional marker-based Vicon system as the reference, this study evaluated the reliability and concurrent validity of an SoG-based markerless system (MocapGS) for bilateral lower-limb joint range of motion (ROM) during gait. Methods: Thirty-six healthy adults completed self-selected-pace speed walking and slow running tasks while both systems synchronously acquired bilateral lower-limb kinematics. The intraclass correlation coefficient (ICC), standard error of measurement (SEM), SEM percentage (SEM%), minimal detectable change (MDC), MDC percentage (MDC%), and root mean square error (RMSE) were used to assess reliability. Concurrent validity was evaluated using the Pearson correlation coefficient, paired-sample t-tests, and the concordance correlation coefficient (CCC) to compare the ROM. Results: Vicon showed moderate-to-high reliability for ROM in most joints across both tasks. By contrast, the MocapGS achieved acceptable ICC values mainly for the sagittal-plane ROM at the hip and knee. The CCC analysis showed no significant agreement between the two systems. Bland–Altman plots showed systematic biases with spatially heterogeneous random errors. During walking, MocapGS systematically overestimated ROM relative to Vicon at several joint axes; the widest limits of agreement (LOA) occurred at the left knee X-axis and right hip Z-axis. 
During running, overestimation was consistent across all bilateral joints at the X-axis and the right hip at the Y-axis, while the widest LOA were found at the bilateral hip X-axes. These specific discrepancies highlighted the joint–axis combinations with the greatest measurement variance. In walking, the test–retest reliability of the knee flexion–extension ROM measured by the MocapGS approached that of Vicon; however, the SEM% and MDC% were generally larger for MocapGS than for Vicon. The RMSE exceeded 5 degrees for ROM in most joint planes, especially in the frontal and transverse planes and at distal joints; errors increased further during slow running. Conclusions: MocapGS may be used for coarse monitoring of large-magnitude changes in sagittal-plane kinematics during gait; however, it is currently unlikely to replace Vicon for clinical decision-making or detecting subtle gait changes, and its outputs should be interpreted with caution, particularly for ankle kinematics and non-sagittal-plane motion. Full article
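The reliability statistics reported above (SEM, SEM%, MDC, MDC%) follow standard definitions computable from the test-retest ICC and the between-subject standard deviation. A minimal sketch with illustrative numbers rather than the study's data:

```python
import math

def sem_from_icc(sd, icc):
    """Standard error of measurement from between-subject SD and test-retest ICC."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem):
    """Minimal detectable change at the 95% confidence level."""
    return 1.96 * math.sqrt(2.0) * sem

# Illustrative numbers only (not values from the study): knee sagittal ROM
mean_rom, sd_rom, icc = 60.0, 5.0, 0.90   # degrees; ICC is unitless
sem = sem_from_icc(sd_rom, icc)
sem_pct = 100.0 * sem / mean_rom          # SEM% relative to the mean ROM
mdc = mdc95(sem)
mdc_pct = 100.0 * mdc / mean_rom          # MDC%
```

The MDC is the smallest change that exceeds measurement noise at 95% confidence, which is why the abstract frames MocapGS as suitable only for "coarse monitoring of large-magnitude changes".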

36 pages, 997 KB  
Article
Genetic Algorithms for Pareto Optimization in Bayesian Cournot Games Under Incomplete Cost Information
by David Carfí, Alessia Donato and Emanuele Perrone
Mathematics 2026, 14(5), 762; https://doi.org/10.3390/math14050762 - 25 Feb 2026
Viewed by 239
Abstract
This paper develops a practical computational framework for the Bayesian Cournot model with bilateral incomplete cost information, where each player is uncertain about the opponent’s marginal cost, drawn from a continuous compact interval [c_*, c^*] with 0 < c_* < c^* < ∞. The infinite dimensionality of the functional strategy spaces (mappings from types to production quantities) renders analytical closed-form solutions infeasible in this continuous-type setting. To overcome this challenge, we restrict the strategy spaces to finite-dimensional differentiable sub-manifolds—specifically, one-parameter families of oscillatory functions (cosine, sine, and mixed forms). After suitable affine Q-rescaling to map the oscillatory range into the production interval [0, Q], and with parameter ranges satisfying α, β > (π/2)/c_*, these curves ensure near-exhaustivity: the joint production map (α, β) ↦ (x_α(s), y_β(t)) covers [0, Q]² densely for every fixed cost pair (s, t), thereby recovering (up to density and closure) the full ex-post payoff space. We introduce the ex-post payoff mapping Φ(s, t, x, y) = (e_s(x, y)(t), f_t(x, y)(s)), which collects every realizable payoff pair once nature draws the types and players select their strategies. The image of Φ defines the general payoff space of the game, and its non-dominated points constitute the general ex-post Pareto frontier—all efficient realized outcomes across type-strategy realizations, without dependence on private probability measures over types. Using multi-objective genetic algorithms, we numerically approximate this frontier (and selected collusive compromises) within the restricted but representative sub-manifolds. The resulting frontiers are computationally accessible, robust to parameter variations, and validated through hypervolume convergence, sensitivity analysis, and comparisons with NSGA-II, PSO, and scalarization methods. 
The findings are significant because they provide decision-makers in oligopolistic markets (e.g., electric vehicles) with viable, implementable production policies that explore efficient trade-offs under genuine cost uncertainty, without requiring explicit forecasts of the opponent’s type distribution—a limitation of traditional expected-utility approaches. By focusing on ex-post efficiency, the method reveals belief-independent compromise solutions that may guide tacit coordination or collusive outcomes in real-world strategic settings. Full article
(This article belongs to the Special Issue AI in Game Theory: Theory and Applications)
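The non-dominated filtering at the core of any Pareto frontier construction can be sketched in a few lines. Here random payoff pairs stand in for sampled ex-post payoffs; the genetic-algorithm search and the oscillatory strategy families themselves are omitted:

```python
import random

random.seed(0)

def pareto_front(points):
    """Keep the points not dominated by any other point (maximize both payoffs)."""
    front = []
    for p in points:
        dominated = any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

# Random payoff pairs stand in for sampled ex-post payoffs (illustrative only)
points = [(random.random(), random.random()) for _ in range(200)]
front = pareto_front(points)
```

NSGA-II and similar multi-objective genetic algorithms repeat this non-dominated sort generation after generation, pushing the population toward the frontier; hypervolume convergence, mentioned in the abstract, measures how much of the payoff space the current front dominates.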

29 pages, 4856 KB  
Article
Evaluating LLMs for Source Code Generation and Summarization Using Machine Learning Classification and Ranking
by Hussain Mahfoodh, Mustafa Hammad, Bassam A. Y. Alqaralleh and Aymen I. Zreikat
Computers 2026, 15(2), 119; https://doi.org/10.3390/computers15020119 - 10 Feb 2026
Viewed by 514
Abstract
Large language models (LLMs) have recently been widely adopted by the software engineering community for code generation and code summarization tasks. New LLMs are emerging regularly with improved functionalities, efficiency, and expanding data that allow models to learn more effectively. The lack of guidelines for selecting the right LLMs for coding tasks makes the selection a subjective choice by developers rather than a choice built on code complexity, code correctness, and linguistic similarity analysis. This research investigates the use of machine learning classification and ranking methods to select the best-suited open-source LLMs for code generation and code summarization tasks. This work conducts a comparison experiment on four open-source LLMs (Mistral, CodeLlama, Gemma 2, and Phi-3) and uses the MBPP coding question dataset to analyze code-generated outputs in terms of code complexity, maintainability, cyclomatic complexity, code structure, and LLM perplexity by collecting these as a set of features. An SVM classification experiment is conducted on the highest correlated feature pairs, where the models are evaluated through performance metrics, including accuracy, area under the ROC curve (AUC), precision, recall, and F1 scores. The RankNet ranking methodology is used to evaluate code summarization model capabilities by measuring ROUGE and BERTScore accuracies between LLM code-generated summaries and the coding questions used from the dataset. The study results show a maximum accuracy of 49% for the code generation experiment, with the highest AUC score reaching 86% among the top four correlated feature pairs. The highest precision score reached is 90%, and the recall score reached up to 92%. In the code summarization experiment, Gemma 2 achieved the highest RankNet win probability score (1.93), and Phi-3 ranked second with a score of 1.66. 
The research highlights the potential of machine learning to select LLMs based on coding metrics and paves the way for future work on accuracy, dataset diversity, and the exploration of other machine learning algorithms. Full article
(This article belongs to the Special Issue AI in Action: Innovations and Breakthroughs)
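The RankNet win probability used in the summarization ranking is the logistic function of a score difference. A toy sketch with hypothetical per-model quality scores (not the paper's measured values); summing pairwise win probabilities against all rivals yields an aggregate ranking score of the kind reported above:

```python
import math

def ranknet_win_prob(s_i, s_j):
    """RankNet pairwise probability that item i outranks item j."""
    return 1.0 / (1.0 + math.exp(-(s_i - s_j)))

# Hypothetical summary-quality scores per model (not the paper's values)
scores = {"Gemma2": 0.80, "Phi-3": 0.72, "Mistral": 0.60, "CodeLlama": 0.55}

def total_wins(model):
    """Sum of pairwise win probabilities against every other model."""
    return sum(
        ranknet_win_prob(scores[model], s)
        for m, s in scores.items() if m != model
    )

ranking = sorted(scores, key=total_wins, reverse=True)
```

Because the win probability is monotone in the score difference, the aggregate ranking reproduces the ordering of the underlying scores; equal scores give a 0.5 win probability.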

18 pages, 1887 KB  
Article
Depressive Symptoms, Functional Status, and Cardiovascular Parameters in Patients with Major Depressive Disorder Undergoing Cardiac Rehabilitation While Treated with Vortioxetine: A Prospective Observational Study
by Iván Hoditx Martín-Costa, Rafael Colman, Álvaro García-Amador, Cristian García-Caballero, Jerónimo Acosta-Rueda, Marta Fontán-Esteban and Yerika Martín-Quero
J. Clin. Med. 2026, 15(4), 1374; https://doi.org/10.3390/jcm15041374 - 10 Feb 2026
Viewed by 268
Abstract
Background/Objectives: Real-world evidence on the use of vortioxetine within cardiac rehabilitation (CR) programs among patients with major depressive disorder (MDD) remains limited. We aimed to describe the evolution of depressive symptoms, functional status, and selected cardiovascular and biometric parameters in patients with MDD undergoing CR while receiving vortioxetine in routine clinical practice. Methods: This was a 12-week, prospective, observational study conducted at Clínica Colman (Cádiz, Spain) between July 2022 and July 2024. Adults diagnosed with MDD who were undergoing CR and receiving vortioxetine as part of routine care were included. Depressive symptoms (Hamilton Depression Rating Scale, HAM-D) and functional impairment (Sheehan Disability Scale, SDS) were assessed at baseline and at weeks 3, 7, and 12. Cardiovascular and biometric parameters were measured at baseline and at week 12. Repeated-measures ANOVA and paired t-tests were used for statistical analysis. Results: Forty-nine patients were included (mean age 65.6 years; 41% women). Over the 12-week follow-up period, mean HAM-D and SDS scores decreased over time (both p < 0.001). Changes were also observed in VO2 max, body weight, body mass index, and waist circumference (all p < 0.05). Left ventricular ejection fraction, blood pressure, and QTc interval showed no relevant variation during follow-up. Mild adverse effects were reported in 6.1% of patients. Conclusions: In patients with MDD and undergoing CR while receiving vortioxetine, longitudinal changes were observed in psychological, functional, and selected cardiovascular measures. These real-world data describe clinical trajectories within integrated rehabilitation settings and provide hypothesis-generating evidence for future controlled studies. Full article
(This article belongs to the Section Cardiovascular Medicine)

24 pages, 27057 KB  
Article
Super-Resolution Reconstruction of Gravity Data Using Semi-Supervised Dual Regression Learning
by Bode Jia, Jian Sun, Xiangfeng Geng, Xiaolei Wan and Huaishan Liu
Remote Sens. 2026, 18(3), 453; https://doi.org/10.3390/rs18030453 - 1 Feb 2026
Viewed by 324
Abstract
High-resolution (HR) marine gravity data are critical for geophysical modeling, seafloor mapping, and tectonic analysis. However, acquiring such data remains challenging due to the inherent trade-offs between distinct measurement sources. While shipborne gravity surveys offer high accuracy and resolution, they are spatially sparse and geographically restricted; conversely, satellite altimetry provides global coverage but comes at the expense of reduced resolution and increased noise. To address this challenge, we propose a semi-supervised dual regression learning (SDRL) framework for gravity field super-resolution (SR) that synergizes the strengths of both data types. By jointly training on a limited number of paired shipborne-satellite samples and a large set of unpaired satellite observations, SDRL leverages cycle-consistent learning to preserve cross-domain structural integrity and enhance generalization. Extensive experiments under varying data conditions—including noisy, ideal, and label-scarce scenarios—demonstrate that SDRL consistently outperforms purely supervised models in terms of structural similarity and error reduction. Moreover, SDRL exhibits strong robustness against data imperfections and generalizes effectively to geophysically distinct test regions. These results highlight the practical advantages of semi-supervised learning for global marine gravity field reconstruction, particularly in real-world settings where high-quality labeled data are scarce. Full article
(This article belongs to the Special Issue Advances in Multi-Source Remote Sensing Data Fusion and Analysis)

18 pages, 10969 KB  
Article
Simulation Data-Based Dual Domain Network (Sim-DDNet) for Motion Artifact Reduction in MR Images
by Seong-Hyeon Kang, Jun-Young Chung, Youngjin Lee and for The Alzheimer’s Disease Neuroimaging Initiative
Magnetochemistry 2026, 12(1), 14; https://doi.org/10.3390/magnetochemistry12010014 - 20 Jan 2026
Viewed by 420
Abstract
Brain magnetic resonance imaging (MRI) is highly susceptible to motion artifacts that degrade fine structural details and undermine quantitative analysis. Conventional U-Net-based deep learning approaches for motion artifact reduction typically operate only in the image domain and are often trained on data with simplified motion patterns, thereby limiting physical plausibility and generalization. We propose Sim-DDNet, a simulation-data-based dual-domain network that combines k-space-based motion simulation with a joint image-k-space reconstruction architecture. Motion-corrupted data were generated from T2-weighted Alzheimer’s Disease Neuroimaging Initiative brain MR scans using a k-space replacement scheme with three to five random rotational and translational events per volume, yielding 69,283 paired samples (49,852/6969/12,462 for training/validation/testing). Sim-DDNet integrates a real-valued U-Net-like image branch and a complex-valued k-space branch using cross attention, FiLM-based feature modulation, soft data consistency, and composite loss comprising L1, structural similarity index measure (SSIM), perceptual, and k-space-weighted terms. On the independent test set, Sim-DDNet achieved a peak signal-to-noise ratio of 31.05 dB, SSIM of 0.85, and gradient magnitude similarity deviation of 0.077, consistently outperforming U-Net and U-Net++ across all three metrics while producing less blurring, fewer residual ghost/streak artifacts, and reduced hallucination of non-existent structures. These results indicate that dual-domain, data-consistency-aware learning, which explicitly exploits k-space information, is a promising approach for physically plausible motion artifact correction in brain MRI. Full article
(This article belongs to the Special Issue Magnetic Resonances: Current Applications and Future Perspectives)
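The k-space replacement scheme used above to simulate motion corruption can be sketched with NumPy. This toy version applies only translational events via np.roll on a random image, omitting the rotational events and the T2-weighted ADNI data used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random image stands in for a T2-weighted brain slice (illustrative only)
img = rng.random((64, 64))

def kspace(x):
    """Centered 2-D discrete Fourier transform."""
    return np.fft.fftshift(np.fft.fft2(x))

def simulate_motion(image, n_events=3, max_shift=4):
    """k-space replacement: swap phase-encode rows with those of a shifted copy.

    Only translational events (np.roll) are modeled here; the paper also
    simulates random rotations."""
    k = kspace(image).copy()
    for _ in range(n_events):
        shift = int(rng.integers(1, max_shift + 1))   # random translation event
        k_moved = kspace(np.roll(image, shift, axis=0))
        row = int(rng.integers(0, image.shape[0]))    # corrupted k-space row
        k[row, :] = k_moved[row, :]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))

corrupted = simulate_motion(img)
error = float(np.mean(np.abs(corrupted - img)))
```

Replacing phase-encode lines with those of a displaced copy is what produces the characteristic ghost and streak artifacts that the dual-domain network is trained to remove; with zero events the round trip through k-space recovers the original image.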

36 pages, 3446 KB  
Article
Neurodegenerative Disease-Specific Relations Between Temporal and Kinetic Gait Features Identified Using InterCriteria Analysis
by Irena Jekova, Vessela Krasteva and Todor Stoyanov
Mathematics 2026, 14(2), 340; https://doi.org/10.3390/math14020340 - 19 Jan 2026
Viewed by 317
Abstract
Gait analysis is a non-invasive, cost-effective method for detecting subtle motor changes in neurodegenerative disorders. This study uses an exploratory approach to identify temporal–kinetic gait feature relationships specific to amyotrophic lateral sclerosis (ALS) and Huntington (HUNT) and Parkinson (PARK) disease versus healthy controls (CONTROL) using recent advances in InterCriteria Analysis (ICrA). The novelty lies in the (i) comprehensive temporal–kinetic feature set, (ii) use of ICrA to characterize inter-feature coordination patterns at population and disease-group levels and (iii) interpretation in a neuromechanical context. Forty-one temporal/kinetic features were extracted from left/right leg ground reaction force and rate-of-force-development signals, considering laterality, gait phase (stance, swing, double support), magnitudes, waveform correlations, and inter-/intra-limb asymmetries. The analysis included 14,580 steps from 64 recordings in the Gait in Neurodegenerative Disease Database: 16 CONTROL (4054 steps), 13 ALS (2465), 20 HUNT (4730), 15 PARK (3331). Sensitivity analysis identified strict consonance thresholds (μ ≥ 0.75, ν ≤ 0.25), selecting <5% strongest inter-feature relations from 820 feature pairs: population level (16 positive, 14 negative), group-level (15–25 positive, 9–14 negative). 
ICrA identified group-specific consonances—present in one group but absent in others—highlighting disease-related alterations in gait coordination: ALS (15/11 positive/negative, disrupted bilateral stride coordination, prolonged stance/double-support, decoupled stride/cadence, desynchronized force-generation patterns—reflecting compensatory adaptations to muscle weakness and instability), HUNT (11/7, severe temporal–kinetic breakdown consistent with gait instability—loss of bilateral coordination, reduced swing time, slowed force development), PARK (1/2, subtle localized disruptions—prolonged stance and double-support intervals, reduced force during weight transfer, overall coordination remained largely preserved). Benchmarking vs. Pearson correlation showed strong linear agreement (R2 = 0.847, p < 0.001), confirming that ICrA captures dominant dependencies while moderating the correlation via uncertainty. These results demonstrate that ICrA provides a quantitative, interpretable framework for characterizing gait coordination patterns and can guide principled feature selection in future predictive modeling. Full article
(This article belongs to the Special Issue Advanced Intelligent Algorithms for Decision Making Under Uncertainty)
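The ICrA consonance degrees μ (agreement) and ν (disagreement) compare how two criteria order all object pairs; ties contribute to neither degree. A minimal sketch on invented gait feature values; in this example the two features are perfectly anti-ordered, which would register as a strong negative consonance under the thresholds quoted above:

```python
from itertools import combinations

def icra_degrees(c1, c2):
    """InterCriteria degrees of agreement (mu) and disagreement (nu)."""
    n_pairs = agree = disagree = 0
    for i, j in combinations(range(len(c1)), 2):
        d1, d2 = c1[i] - c1[j], c2[i] - c2[j]
        n_pairs += 1
        if d1 * d2 > 0:
            agree += 1
        elif d1 * d2 < 0:
            disagree += 1
        # ties in either criterion count toward neither degree
    return agree / n_pairs, disagree / n_pairs

# Hypothetical feature values over five recordings (not database values)
stance_time = [1.0, 1.2, 1.1, 1.4, 1.3]
force_peak  = [0.9, 0.7, 0.8, 0.5, 0.6]
mu, nu = icra_degrees(stance_time, force_peak)
```

A pair with μ ≥ 0.75 and ν ≤ 0.25 would be a strict positive consonance in the sense used above; running this over all 820 feature pairs per disease group yields the group-specific relation counts reported in the abstract.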

12 pages, 216 KB  
Brief Report
Enhancing Interactive Teaching for the Next Generation of Nurses: Generative-AI-Assisted Design of a Full-Day Professional Development Workshop
by Su-I Hou
Informatics 2026, 13(1), 11; https://doi.org/10.3390/informatics13010011 - 15 Jan 2026
Viewed by 476
Abstract
Introduction: Nursing educators and clinical leaders face persistent challenges in engaging the next generation of nurses, often characterized by short attention spans, frequent phone use, and underdeveloped communication skills. This article describes the design and delivery of a full-day interactive teaching workshop for nursing faculty, senior clinical nurses, and nurse leaders, developed using a design-thinking approach supported by generative AI. Methods: The workshop comprised four thematic sessions: (1) Learning styles across generations, (2) Interactive teaching methods, (3) Application of interactive teaching strategies, and (4) Lesson planning and transfer. Generative AI was used during planning to create icebreakers, discussion prompts, clinical teaching scenarios, and application templates. Design decisions emphasized low-tech, low-prep strategies suitable for spontaneous clinical teaching, thereby reducing barriers to adoption. Activities included emoji-card introductions, quick generational polls, colored-paper reflections, portable whiteboard brainstorming, role plays, fishbowl discussions, gallery walks, and movement-based group exercises. Participants (N = 37) were predominantly female (95%) and represented Generations X, Y, and Z. Mid- and end-of-workshop reflection prompts were embedded within Sessions 2 and 4, with participants recording their responses on colored papers, which were then compiled into a single Word document for thematic analysis. Results: Thematic analysis of 59 mid- and end-workshop reflections revealed six interconnected themes, grouped into three categories: (1) engagement and experiential learning, (2) practical applicability and generational awareness, and (3) facilitation, environment, and motivation. Participants emphasized the workshop’s lively pace and hands-on design. 
Experiencing strategies firsthand built confidence for application, while generational awareness encouraged reflection on adapting methods for younger learners. The facilitator’s passion, personable approach, and structured use of peer learning created a psychologically safe and motivating climate, leaving participants recharged and inspired to integrate interactive methods. Discussion: The workshop illustrates how AI-assisted, design-thinking-driven professional development can model effective strategies for next-generation learners. When paired with skilled facilitation, AI-supported planning enhances engagement, fosters reflective practice, and promotes immediate transfer of interactive strategies into diverse teaching settings. Full article
18 pages, 1289 KB  
Article
Machine Learning-Based Automatic Diagnosis of Osteoporosis Using Bone Mineral Density Measurements
by Nilüfer Aygün Bilecik, Levent Uğur, Erol Öten and Mustafa Çapraz
J. Clin. Med. 2026, 15(2), 549; https://doi.org/10.3390/jcm15020549 - 9 Jan 2026
Cited by 1 | Viewed by 444
Abstract
Background: Osteoporosis and osteopenia are prevalent bone diseases characterized by reduced bone mineral density (BMD) and an increased risk of fractures, particularly in postmenopausal women. While dual-energy X-ray absorptiometry (DXA) remains the gold standard for diagnosis, it has limitations regarding accessibility, cost, and predictive capacity for fracture risk. Machine learning (ML) approaches offer an opportunity to develop automated and more accurate diagnostic models by incorporating both BMD values and clinical variables. Method: This study retrospectively analyzed BMD data from 142 postmenopausal women, classified into 3 diagnostic groups: normal, osteopenia, and osteoporosis. Various supervised ML algorithms—including Support Vector Machines (SVM), k-Nearest Neighbors (k-NN), Decision Trees (DT), Naive Bayes (NB), Linear Discriminant Analysis (LDA), and Artificial Neural Networks (ANN)—were applied. Feature selection techniques such as ANOVA, CHI2, MRMR, and Kruskal–Wallis were used to enhance model performance, reduce dimensionality, and improve interpretability. Model performance was evaluated using 10-fold cross-validation based on accuracy, true positive rate (TPR), false negative rate (FNR), and AUC values. Results: Among all models and feature selection combinations, SVM with ANOVA-selected features achieved the highest classification accuracy (94.30%) and 100% TPR for the normal class. Feature sets based on traditional diagnostic regions (L1–L4, femoral neck, total femur) also showed high accuracy (up to 90.70%) but were generally outperformed by statistically selected features. CHI2 and MRMR methods also yielded robust results, particularly when paired with SVM and k-NN classifiers. The results highlight the effectiveness of combining statistical feature selection with ML to enhance diagnostic precision for osteoporosis and osteopenia. 
Conclusions: Machine learning algorithms, when integrated with data-driven feature selection strategies, provide a promising framework for automated classification of osteoporosis and osteopenia based on BMD data. ANOVA emerged as the most effective feature selection method, yielding superior accuracy across all classifiers. These findings support the integration of ML-based decision support tools into clinical workflows to facilitate early diagnosis and personalized treatment planning. Future studies should explore more diverse and larger datasets, incorporating genetic, lifestyle, and hormonal factors for further model enhancement. Full article
(This article belongs to the Section Orthopedics)
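ANOVA-based feature selection, which performed best above, scores each feature by its one-way F statistic across the diagnostic groups and keeps the highest-scoring features. A self-contained sketch on synthetic data (no clinical BMD values are used; the class separation and noise levels are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

def anova_f(x, y):
    """One-way ANOVA F statistic of one feature x across class labels y."""
    groups = [x[y == g] for g in np.unique(y)]
    n, k = len(x), len(groups)
    grand = x.mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Synthetic stand-in for BMD features over 3 classes (normal/osteopenia/osteoporosis):
# feature 0 separates the classes, feature 1 is pure noise
y = np.repeat([0, 1, 2], 30)
f0 = y + rng.normal(0.0, 0.3, 90)   # informative feature
f1 = rng.normal(0.0, 1.0, 90)       # uninformative feature
scores = [anova_f(f, y) for f in (f0, f1)]
best = int(np.argmax(scores))
```

Ranking features by this F statistic and feeding the top-ranked subset to a classifier such as an SVM is the pattern the study evaluates; the informative feature receives a far larger score than the noise feature.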

18 pages, 5554 KB  
Article
The Assimilation of CFOSAT Wave Heights Using Statistical Background Errors
by Leqiang Sun, Natacha Bernier, Benoit Pouliot, Patrick Timko and Lotfi Aouf
Remote Sens. 2026, 18(2), 217; https://doi.org/10.3390/rs18020217 - 9 Jan 2026
Abstract
This paper discusses the assimilation of significant wave height (Hs) observations from the China France Oceanography SATellite (CFOSAT) into the Global Deterministic Wave Prediction System developed by Environment and Climate Change Canada. We focus on the quantification of background errors in an effort to address the conventional, simplified, homogeneous assumptions made in previous studies using Optimal Interpolation (OI) to generate the Hs analysis. A map of Best Correlation Length, L, is generated to account for the inhomogeneity of the wave field. The map is calculated from pairs of Hs forecasts at grid points shifted in space and time, from which a look-up table is derived and used to infer the spatial extent of correlations within the wave field. The wave spectra are then updated from the Hs analysis using a frequency-shift scheme. Results reveal significant spatial variability in the distribution of L, with notably high values in the eastern tropical Pacific Ocean, a pattern expected from the persistent swells that dominate this region. Experiments are conducted with spatially varying correlation lengths and with a fixed correlation length of eight grid points in the analysis step. Forecasts from these analyses are validated independently against Global Telecommunications System buoys and Copernicus Marine Environment Monitoring Service (CMEMS) altimetry wave height observations. The proposed statistical method generally outperforms the conventional method, with lower standard deviation and bias for both Hs and peak-period forecasts. The conventional method applies more drastic corrections to Hs forecasts, but these corrections are not robust, particularly in regions with relatively short spatial correlation length scales. Based on the CMEMS comparison, the globally varying correlation length produces a positive increment in the Hs forecast that is associated with forecast error reduction lasting up to 24 h into the forecast. Full article
(This article belongs to the Section Ocean Remote Sensing)
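The core idea of estimating a correlation length from shifted pairs of a field can be sketched in one dimension (illustrative only; the paper works with space- and time-shifted forecast pairs on a global 2-D grid, and the 1/e cutoff below is an assumption, not the paper's exact criterion):

```python
import numpy as np

def correlation_length(field, max_lag=30, threshold=np.exp(-1)):
    """Estimate a correlation length L as the first spatial lag at which
    the correlation between the field and its shifted copy drops below 1/e."""
    corrs = []
    for lag in range(1, max_lag + 1):
        corrs.append(np.corrcoef(field[:-lag], field[lag:])[0, 1])
    for lag, r in enumerate(corrs, start=1):
        if r < threshold:
            return lag, corrs
    return max_lag, corrs

rng = np.random.default_rng(1)
noise = rng.normal(size=5000)                              # uncorrelated field
smooth = np.convolve(noise, np.ones(15) / 15, mode="valid")  # long-range structure, akin to swell
L_smooth, corrs = correlation_length(smooth)
L_noise, _ = correlation_length(noise)
# A swell-like (smooth) field yields a much longer L than an uncorrelated one,
# mirroring the large L values found in the swell-dominated eastern tropical Pacific.
```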

38 pages, 503 KB  
Article
Zappa–Szép Skew Braces: A Unified Framework for Mutual Interactions in Noncommutative Algebra
by Suha Wazzan and David A. Oluyori
Mathematics 2026, 14(2), 215; https://doi.org/10.3390/math14020215 - 6 Jan 2026
Abstract
This paper introduces and systematically develops the theory of Zappa–Szép skew braces, a novel algebraic structure that provides a unified framework for bidirectional group interactions, thereby generalizing the classical constructions of semidirect skew braces and matched-pair factorizations (ZS1–ZS4, BC1–BC2). We establish the complete axiomatic foundation for these objects, characterizing them through necessary and sufficient compatibility conditions that encode mutual actions between two digroups. Central results include a semidirect embedding theorem, explicit constructions of nontrivial examples—notably a fully mutual brace of order 12 built from V4 and C3—and a detailed analysis of key structural invariants such as the socle, center, and automorphism groups. The framework is further elucidated via universal properties and categorical adjunctions, positioning Zappa–Szép skew braces as fundamental objects within noncommutative algebra. Applications to representation theory, cohomology, and the construction of set-theoretic solutions to the Yang–Baxter equation are derived, demonstrating both the generality and utility of the theory. Full article
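The order-12 example mentioned in the abstract has a simple computational shadow: inside S4, the Klein four-group V4 and a cyclic subgroup C3 intersect trivially and their product set has 12 elements, so A4 = V4 · C3 is an exact (Zappa–Szép-type) factorization; in this particular case it is a semidirect product, the special case the construction generalizes. A quick sanity check, as a sketch:

```python
from itertools import product

def compose(p, q):
    """Composition of permutations on {0,1,2,3}: (p o q)(i) = p[q[i]].
    Permutations are stored as tuples, p mapping i -> p[i]."""
    return tuple(p[q[i]] for i in range(4))

# Klein four-group V4 (identity plus the three double transpositions)
V4 = {(0, 1, 2, 3), (1, 0, 3, 2), (2, 3, 0, 1), (3, 2, 1, 0)}
# Cyclic group C3 generated by the 3-cycle (0 1 2)
C3 = {(0, 1, 2, 3), (1, 2, 0, 3), (2, 0, 1, 3)}

products = {compose(v, c) for v, c in product(V4, C3)}  # the set V4 * C3
intersection = V4 & C3                                   # should be {identity}
# |V4 * C3| = 12 with trivial intersection: every element factors uniquely
# as v * c, and the product set is closed under composition (it is A4).
```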

16 pages, 1639 KB  
Article
Distant Hybridization of Kazakh Wheat Varieties with Wild Aegilops Species: Cytogenetic Compatibility, Fertilization Dynamics, and Breeding Implications
by Kenenbay Kozhakhmetov, Sholpan Bastaubayeva, Nazira Slyamova, Altynai Zhakataeva, Kasymkhan Koylanov and Zhandos Zholdasbayuly
Agronomy 2026, 16(1), 128; https://doi.org/10.3390/agronomy16010128 - 5 Jan 2026
Abstract
Distant hybridization between bread wheat (Triticum aestivum L.) and wild Aegilops species is a valuable approach to broadening genetic diversity, but it is frequently impeded by reproductive barriers. This study evaluated crossability, pollen tube dynamics, meiotic behavior, somatic chromosome numbers, and pollen fertility in twelve Kazakh wheat cultivars crossed with Ae. triaristata Willd., Ae. cylindrica Host, Ae. triuncialis L., and Ae. squarrosa L. under field-based controlled pollination. Hybridization success varied significantly among combinations, with Ae. triaristata showing the highest compatibility (26.0% in Bezostaya 1 × Ae. triaristata), while Ae. squarrosa produced the lowest seed set. In compatible crosses, pollen tubes reached the ovary within 20–30 min, whereas delayed elongation (>60 min) was associated with fertilization failure. Meiotic analysis revealed incomplete homologous pairing (3–7 bivalents per PMC) and high abnormality rates (>90%). Somatic chromosome counts (2n) of selected F1 hybrids confirmed extensive aneuploidy and partial chromosome elimination. Pollen fertility was generally below 20%. These results identify Ae. triaristata as a promising donor species for pre-breeding in Kazakhstan and underscore the importance of integrating classical cytology with molecular approaches to overcome hybridization barriers. Full article
(This article belongs to the Section Crop Breeding and Genetics)

16 pages, 2679 KB  
Systematic Review
High-Flow Nasal Cannula Outside the ICU: A Systematic Review and Meta-Analysis
by Andrea Boccatonda, Alice Brighenti, Damiano D’Ardes and Luigi Vetrugno
J. Clin. Med. 2026, 15(1), 97; https://doi.org/10.3390/jcm15010097 - 23 Dec 2025
Abstract
Background: Use of high-flow nasal cannula (HFNC) expanded from ICUs to internal medicine/respiratory wards during and after the COVID-19 pandemic, but its safety and effectiveness in non-ICU settings remain uncertain. Methods: We performed a systematic review and meta-analysis of adults (≥18 years) initiated on HFNC in non-ICU wards. Primary outcomes were in-hospital (or 28-day) mortality and ICU transfer; where available, we compared mortality for HFNC vs. conventional oxygen therapy (COT) in do-not-intubate (DNI) cohorts. Observational studies and trials were eligible. Random-effects models synthesized proportions and risk ratios; risk of bias (ROBINS-I/RoB 2) and certainty (GRADE) were assessed. Results: Ten studies met the inclusion criteria for any-ward HFNC; subsets contributed data to pooled analyses. Across all non-ICU wards (general wards plus step-up IMCU/HDU), pooled mortality was 14.0% (95% CI 4.6–35.5; I2 ≈ 92%). Pooled ICU transfer after ward/step-up HFNC start was 20.0% (95% CI 6.3–48.1; I2 ≈ 97%). Restricted to internal medicine/respiratory wards, pooled mortality was 19.8% (95% CI 7.1–44.2; I2 ≈ 95%) and ICU transfer 31.2% (95% CI 9.9–65.0; I2 ≈ 97%). In step-up units (IMCU/HDU), ICU transfer appeared lower and less variable (22.0% [95% CI 16.5–28.8]; I2 ≈ 10%), suggesting environment-dependent outcomes. In a multicenter DNI COVID-19 cohort, HFNC vs. COT showed no clear mortality difference (RR ≈ 0.90, 95% CI 0.75–1.08; adjusted OR ≈ 0.72, 95% CI 0.34–1.54). Certainty of evidence for all critical outcomes was very low due to observational design, high inconsistency, and imprecision. Conclusions: HFNC outside the ICU is feasible but is associated with nontrivial mortality and frequent escalation, particularly on general wards, while step-up units demonstrate more reproducible trajectories. Outcomes appear strongly conditioned by care environment, staffing, monitoring, and escalation pathways. Given very low certainty and substantial heterogeneity, institutions should pair ward HFNC with protocolized reassessment and rapid response/ICU outreach, and future research should prospectively compare ward HFNC pathways against optimized COT/NIV using standardized outcomes. Full article
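The random-effects pooling of proportions used in such meta-analyses can be sketched with a minimal DerSimonian–Laird estimator on the logit scale. The event counts below are hypothetical, not the review's data, and the variance approximation is the standard one for a logit-transformed proportion:

```python
import math

def pool_proportions_dl(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions on the
    logit scale; returns the back-transformed pooled proportion."""
    ys, vs = [], []
    for e, n in zip(events, totals):
        p = e / n
        ys.append(math.log(p / (1 - p)))   # logit transform
        vs.append(1 / e + 1 / (n - e))     # approx. variance of logit(p)
    w = [1 / v for v in vs]                # fixed-effect (inverse-variance) weights
    ybar = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, ys))   # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(ys) - 1)) / c)                  # between-study variance
    wstar = [1 / (v + tau2) for v in vs]   # random-effects weights
    pooled_logit = sum(wi * yi for wi, yi in zip(wstar, ys)) / sum(wstar)
    return 1 / (1 + math.exp(-pooled_logit))                  # inverse logit

# Hypothetical ward-mortality counts (events, patients) from three studies.
pooled = pool_proportions_dl([10, 30, 8], [100, 150, 40])
# The pooled proportion lies between the smallest and largest study estimates.
```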