Search Results (2,154)

Search Parameters:
Keywords = bias field

20 pages, 2743 KB  
Review
Secondary Education Teachers and Climate Change Education: A Complementary Bibliometric and Methodological Review
by Antonio García-Vinuesa, Jorge Conde Miguélez, Mayara Palmieri and Andrea Correa-Chica
Metrics 2026, 3(1), 1; https://doi.org/10.3390/metrics3010001 - 13 Jan 2026
Abstract
Climate change is one of the most significant socio-environmental challenges of our time, and education has been recognized as a fundamental strategy to confront it. Yet research efforts have focused more on students than on teachers, despite the latter’s key role in mediating between scientific and curricular knowledge and classroom practice. This study set out to characterize the field of educational research on climate change from the perspective of secondary school teachers. To this end, we conducted a systematic review and bibliometric analysis of 50 peer-reviewed studies from 15 countries (2010–2023). The results show a growing interest over time, with increases associated with international milestones such as the IPCC reports and the Paris Agreement, while declines are observed in connection with political shifts and the COVID-19 pandemic. Consolidated academic reference points were identified, including Eric Plutzer and Maria Ojala, alongside influential international organizations such as the IPCC and UNESCO, suggesting the presence of schools of thought and institutional frameworks that structure the field. Methodologically, descriptive and exploratory studies predominate, with a notable reliance on qualitative and mixed-methods designs using small samples, reinforcing the difficulty of accessing teachers as a research population. Overall, this review highlights significant gaps, particularly the geographical bias toward the Global North, and underscores the urgency of broader, more inclusive, and critically engaged research that positions teachers as essential agents of transformative educational responses to the climate crisis. Full article

29 pages, 6013 KB  
Article
Data-Driven Multidecadal Reconstruction and Nowcasting of Coastal and Offshore 3-D Sea Temperature Fields from Satellite Observations: A Case Study in the East/Japan Sea
by Eun-Joo Lee, Yerin Hwang, Young-Taeg Kim, SungHyun Nam and Jae-Hun Park
Remote Sens. 2026, 18(2), 246; https://doi.org/10.3390/rs18020246 - 13 Jan 2026
Abstract
Understanding ocean temperature structure and its spatiotemporal variability is essential for studying ocean circulation, climate, and marine ecosystems. While previous approaches using observations and numerical models have advanced our understanding, they face limitations such as sparse data coverage and computational bias. To address these issues, we developed an ensemble of data-driven neural network models trained with in situ vertical profiles and daily remote sensing inputs. Unlike previous studies that were limited to open-ocean regions, our model explicitly included coastal areas with complex bathymetry. The model was applied to the East/Japan Sea and reconstructed 31 years (1993–2023) of daily three-dimensional ocean temperature fields at 13 standard depths. The predictions were validated against observations, showing RMSE < 1.33 °C and bias < 0.10 °C. Comparisons with previous studies confirmed the model’s ability to capture short- to mid-term temperature variations. This data-driven approach demonstrates a robust alternative to traditional methods and offers an applicable and reliable tool for understanding long-term ocean variability in marginal seas. Full article
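The RMSE and bias figures quoted above are standard point-wise validation metrics; a minimal sketch in plain Python, with hypothetical temperature values rather than data from the study:

```python
import math

def rmse(pred, obs):
    """Root-mean-square error between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def bias(pred, obs):
    """Mean signed error (positive = model runs warm)."""
    return sum(p - o for p, o in zip(pred, obs)) / len(obs)

# Hypothetical reconstructed vs. observed temperatures (°C) at one depth
reconstructed = [12.1, 14.0, 15.2, 13.8]
observed      = [12.0, 14.3, 15.0, 13.9]
print(rmse(reconstructed, observed))  # overall error magnitude
print(bias(reconstructed, observed))  # systematic offset
```

RMSE summarizes the typical error magnitude, while bias exposes any systematic warm or cold offset that RMSE alone would hide.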

28 pages, 1807 KB  
Review
Integrating UAVs and Deep Learning for Plant Disease Detection: A Review of Techniques, Datasets, and Field Challenges with Examples from Cassava
by Wasiu Akande Ahmed, Olayinka Ademola Abiola, Dongkai Yang, Seyi Festus Olatoyinbo and Guifei Jing
Horticulturae 2026, 12(1), 87; https://doi.org/10.3390/horticulturae12010087 - 12 Jan 2026
Viewed by 27
Abstract
Cassava remains a critical food-security crop across Africa and Southeast Asia but is highly vulnerable to diseases such as cassava mosaic disease (CMD) and cassava brown streak disease (CBSD). Traditional diagnostic approaches are slow, labor-intensive, and inconsistent under field conditions. This review synthesizes current advances in combining unmanned aerial vehicles (UAVs) with deep learning (DL) to enable scalable, data-driven cassava disease detection. It examines UAV platforms, sensor technologies, flight protocols, image preprocessing pipelines, DL architectures, and existing datasets, and it evaluates how these components interact within UAV–DL disease-monitoring frameworks. The review also compares model performance across convolutional neural network-based and Transformer-based architectures, highlighting metrics such as accuracy, recall, F1-score, inference speed, and deployment feasibility. Persistent challenges—such as limited UAV-acquired datasets, annotation inconsistencies, geographic model bias, and inadequate real-time deployment—are identified and discussed. Finally, the paper proposes a structured research agenda including lightweight edge-deployable models, UAV-ready benchmarking protocols, and multimodal data fusion. This review provides a consolidated reference for researchers and practitioners seeking to develop practical and scalable cassava-disease detection systems. Full article
12 pages, 2717 KB  
Article
Photoconductive Gain Behavior of Ni/β-Ga₂O₃ Schottky Barrier Diode-Based UV Detectors
by Viktor V. Kopyev, Nikita N. Yakovlev, Alexander V. Tsymbalov, Dmitry A. Almaev and Pavel V. Kosmachev
Micromachines 2026, 17(1), 100; https://doi.org/10.3390/mi17010100 - 12 Jan 2026
Viewed by 73
Abstract
A vertical Ni/β-Ga₂O₃ Schottky barrier diode was fabricated on an unintentionally doped bulk (−201)-oriented β-Ga₂O₃ single crystal and investigated with a focus on the underlying photoresponse mechanisms. The device exhibits well-defined rectifying behavior, characterized by a Schottky barrier height of 1.63 eV, an ideality factor of 1.39, and a high rectification ratio of ~9.7 × 10^6 at an applied bias of ±2 V. The structures demonstrate pronounced sensitivity to deep-ultraviolet radiation (λ ≤ 280 nm), with maximum responsivity observed at 255 nm, consistent with the wide bandgap of β-Ga₂O₃. Under 254 nm illumination at a power density of 620 μW/cm², the device operates in a self-powered mode, generating an open-circuit voltage of 50 mV and a short-circuit current of 47 pA, confirming efficient separation of photogenerated carriers by the built-in electric field of the Schottky junction. The responsivity and detectivity of the structures increase from 0.18 to 3.87 A/W and from 9.8 × 10^8 to 4.3 × 10^11 Hz^0.5·cm·W^−1, respectively, as the reverse bias rises from 0 to −45 V. The detectors exhibit high-speed performance, with rise and decay times not exceeding 29 ms and 59 ms, respectively, at an applied voltage of 10 V. The studied structures demonstrate internal gain, with the external quantum efficiency reaching 1.8 × 10^3%. Full article
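As a consistency check on the gain figure, responsivity R (in A/W) and external quantum efficiency at wavelength λ are linked by the standard photodetector relation (this back-of-envelope check is ours, not the paper's):

```latex
\mathrm{EQE} \;=\; R\,\frac{h c}{q \lambda} \;\approx\; R \times \frac{1240\ \mathrm{nm}\cdot\mathrm{W/A}}{\lambda}
```

With R = 3.87 A/W near 255 nm, this gives EQE ≈ 3.87 × 1240/255 ≈ 18.8, i.e. on the order of the quoted 1.8 × 10^3%, confirming internal gain (EQE well above 100%).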

20 pages, 4195 KB  
Article
Electro-Physical Model of Amorphous Silicon Junction Field-Effect Transistors for Energy-Efficient Sensor Interfaces in Lab-on-Chip Platforms
by Nicola Lovecchio, Giulia Petrucci, Fabio Cappelli, Martina Baldini, Vincenzo Ferrara, Augusto Nascetti, Giampiero de Cesare and Domenico Caputo
Chips 2026, 5(1), 1; https://doi.org/10.3390/chips5010001 - 12 Jan 2026
Viewed by 43
Abstract
This work presents an advanced electro-physical model for hydrogenated amorphous silicon (a-Si:H) Junction Field Effect Transistors (JFETs) to enable the design of devices with energy-efficient analog interface building blocks for Lab-on-Chip (LoC) systems. The presence of this device can support monolithic integration with thin-film sensors and circuit-level design through a validated compact formulation. The model accurately describes the behavior of a-Si:H JFETs addressing key physical phenomena, such as the channel thickness dependence on the gate-source voltage when the channel approaches full depletion. A comprehensive framework was developed, integrating experimental data and mathematical refinements to ensure robust predictions of JFET performance across operating regimes, including the transition toward full depletion and the associated current-limiting behavior. The model was validated through a broad set of fabricated devices, demonstrating excellent agreement with experimental data in both the linear and saturation regions. Specifically, the validation was carried out at 25 °C on 15 fabricated JFET configurations (12 nominally identical devices per configuration), using the mean characteristics of 9 devices with standard-deviation error bars. In the investigated bias range, the devices operate in a sub-µA regime (up to several hundred nA), which naturally supports µW-level dissipation for low-power interfaces. This work provides a compact, experimentally validated modeling basis for the design and optimization of a-Si:H JFET-based LoC front-end/readout circuits within technology-constrained and energy-efficient operating conditions. Full article
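For orientation, the textbook long-channel square-law that compact JFET models refine — the paper's model goes further, capturing behavior as the channel approaches full depletion — is, in saturation:

```latex
I_D \;\approx\; I_{DSS}\left(1 - \frac{V_{GS}}{V_P}\right)^{2},
\qquad V_{DS} \;\ge\; V_{GS} - V_P ,
```

where I_DSS is the saturation drain current at V_GS = 0 and V_P the pinch-off voltage.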

36 pages, 5342 KB  
Review
Artificial Intelligence in Medical Diagnostics: Foundations, Clinical Applications, and Future Directions
by Dorota Bartusik-Aebisher, Daniel Roshan Justin Raj and David Aebisher
Appl. Sci. 2026, 16(2), 728; https://doi.org/10.3390/app16020728 - 10 Jan 2026
Viewed by 172
Abstract
Artificial intelligence (AI) is rapidly transforming medical diagnostics by allowing for early, accurate, and data-driven clinical decision-making. This review provides an overview of how machine learning (ML), deep learning, and emerging multimodal foundation models have been used in diagnostic procedures across imaging, pathology, molecular analysis, physiological monitoring, and electronic health record (EHR)-integrated decision-support systems. We have discussed the basic computational foundations of supervised, unsupervised, and reinforcement learning and have also shown the importance of data curation, validation metrics, interpretability methods, and feature engineering. The use of AI in many different applications has shown that it can find abnormalities and integrate some features from multi-omics and imaging, which has shown improvements in prognostic modeling. However, concerns about data heterogeneity, model drift, bias, and strict regulatory guidelines still remain and are yet to be addressed in this field. Looking forward, future advancements in federated learning, generative AI, and low-resource diagnostics will pave the way for adaptable and globally accessible AI-assisted diagnostics. Full article

23 pages, 2960 KB  
Article
Multi-Source Data-Driven CNN–Transformer Hybrid Modeling for Wind Energy Database Reconstruction in the Tropical Indian Ocean
by Jintao Xu, Yao Luo, Guanglin Wu, Weiqiang Wang, Zhenqiu Zhang and Arulananthan Kanapathipillai
Remote Sens. 2026, 18(2), 226; https://doi.org/10.3390/rs18020226 - 10 Jan 2026
Viewed by 165
Abstract
This study addresses the issues of sparse observations from buoys in the tropical Indian Ocean and systematic biases in reanalysis products by proposing a daily-mean wind speed reconstruction framework that integrates multi-source meteorological fields. This study also considers the impact of different source domains on model pre-training, with the goal of providing reliable data support for wind energy assessment. The model was pre-trained using data from the Americas and tropical Pacific buoys as the source domain and then fine-tuned on Indian Ocean buoys as the target domain. Using annual leave-one-out cross-validation, we evaluated the model’s performance against uncorrected ERA5 and CCMP data while comparing three deep reconstruction models. The results demonstrate that deep models significantly reduce reanalysis bias: the RMSE decreases from approximately 1.00 m/s to 0.88 m/s, while R² improves by approximately 8.9% and 7.1% compared to ERA5/CCMP, respectively. The Branch CNN–Transformer outperforms standalone LSTM or CNN models in overall accuracy and interpretability, with transfer learning yielding directional gains for specific wind conditions in complex topography and monsoon zones. The 20-year wind energy data reconstructed using this model indicates wind energy densities 60–150 W/m² higher than in the reanalysis data in open high-wind zones such as the southern Arabian Sea and the Somali coast. This study not only provides a pathway for constructing high-precision wind speed databases for tropical Indian Ocean wind resource assessment but also offers precise quantitative support for delineating priority development zones for offshore wind farms and mitigating near-shore engineering risks. Full article
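The annual leave-one-out cross-validation mentioned above holds out one year at a time and evaluates a model trained on the remaining years; a minimal sketch, with made-up wind speeds and a trivial mean predictor standing in for the paper's CNN–Transformer:

```python
import math

# Hypothetical daily-mean wind speeds (m/s) grouped by year
data = {
    2019: [6.1, 5.8, 7.0],
    2020: [5.5, 6.3, 6.0],
    2021: [6.8, 6.4, 7.1],
}

def loo_rmse(data):
    """Annual leave-one-out: train on all other years, test on the held-out year."""
    errors = []
    for held_out, obs in data.items():
        train = [v for y, vals in data.items() if y != held_out for v in vals]
        pred = sum(train) / len(train)          # placeholder "model": training-set mean
        errors.extend((pred - o) ** 2 for o in obs)
    return math.sqrt(sum(errors) / len(errors))

print(loo_rmse(data))
```

Because every year serves once as the test set, the resulting error estimate is not inflated by the model having seen the evaluation year during training.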

18 pages, 5554 KB  
Article
The Assimilation of CFOSAT Wave Heights Using Statistical Background Errors
by Leqiang Sun, Natacha Bernier, Benoit Pouliot, Patrick Timko and Lotfi Aouf
Remote Sens. 2026, 18(2), 217; https://doi.org/10.3390/rs18020217 - 9 Jan 2026
Viewed by 117
Abstract
This paper discusses the assimilation of significant wave height (Hs) observations from the China France Oceanography SATellite (CFOSAT) into the Global Deterministic Wave Prediction System developed by Environment and Climate Change Canada. We focus on the quantification of background errors in an effort to address the conventional, simplified, homogeneous assumptions made in previous studies using Optimal Interpolation (OI) to generate the Hs analysis. A map of Best Correlation Length, L, is generated to account for the inhomogeneity in the wave field. This map was calculated from pairs of Hs forecasts at two grid points shifted in space and time, from which a look-up table is derived and used to infer the spatial extent of correlations within the wave field. The wave spectra are then updated from the Hs analysis using a frequency-shift scheme. Results reveal significant spatial variance in the distribution of L, with notably high values located in the eastern tropical Pacific Ocean, a pattern that is expected given the persistent swells dominating this region. Experiments are conducted with spatially varying correlation lengths and with a fixed correlation length of eight grid points in the analysis step. Forecasts from these analyses are validated independently against the Global Telecommunications System buoys and the Copernicus Marine Environment Monitoring Service (CMEMS) altimetry wave height observations. It is found that the proposed statistical method generally outperforms the conventional method, with lower standard deviation and bias for both Hs and peak-period forecasts. The conventional method applies more drastic corrections to Hs forecasts, but such corrections are not robust, particularly in regions with relatively short spatial correlation length scales. Based on the CMEMS comparison, the globally varying correlation length produces a positive increment of the Hs forecast, which is globally associated with forecast error reduction lasting up to 24 h into the forecast. Full article
(This article belongs to the Section Ocean Remote Sensing)
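Optimal Interpolation, the analysis scheme the paper refines, updates a background forecast with observations weighted by background and observation error covariances; a toy 1-D sketch with invented numbers (the operational B, including the correlation-length map, is far richer):

```python
import numpy as np

# Toy 1-D wave-height analysis: 3 grid points, 1 observation at point 1
x_b = np.array([2.0, 2.5, 3.0])        # background Hs forecast (m)
y   = np.array([2.9])                  # observed Hs (m)
H   = np.array([[0.0, 1.0, 0.0]])      # observation operator: picks grid point 1

L = 1.5                                 # correlation length (grid units)
d = np.abs(np.subtract.outer(np.arange(3), np.arange(3)))
B = 0.2 * np.exp(-(d / L) ** 2)         # background error covariance (Gaussian)
R = np.array([[0.05]])                  # observation error covariance

# OI / BLUE analysis: x_a = x_b + K (y - H x_b), with K = B Hᵀ (H B Hᵀ + R)⁻¹
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_a = x_b + K @ (y - H @ x_b)
print(x_a)
```

The increment is largest at the observed point and decays with distance at a rate set by the correlation length L — which is exactly why a spatially varying L changes how far each observation's influence spreads.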

25 pages, 593 KB  
Article
Lower Bounds for the Integrated and Minimax Risks in Intrinsic Statistical Estimation: A Geometric Approach
by José Manuel Corcuera and José María Oller
Mathematics 2026, 14(2), 240; https://doi.org/10.3390/math14020240 - 8 Jan 2026
Viewed by 149
Abstract
In parametric statistics, it is well established that the canonical measures of estimator performance—such as bias, variance, and mean squared error—are inherently dependent on the parameterization of the model. Consequently, these quantities describe the behavior of an estimator only relative to a particular parameterization, rather than representing intrinsic properties of either the estimator itself or the underlying probability distribution it seeks to estimate. Some years ago, the authors introduced a framework, termed the intrinsic analysis of point estimation, in which tools from information geometry were employed to construct analogues of classical statistical notions that are intrinsic to both the estimator and the associated probability measure. Within this framework, a contravariant vector field was introduced to define the intrinsic bias, while the squared Riemannian distance naturally emerged as the intrinsic analogue of the classical squared distance. Intrinsic counterparts of the Cramér–Rao inequalities, as well as the Rao–Blackwell and Lehmann–Scheffé theorems, were also established. The present work extends the intrinsic analysis—originally founded on the concept of intrinsic risk, a fundamentally local measure of estimator performance—to an approach that characterizes the estimator over an entire region of the parameter space, thereby yielding an intrinsically global perspective. Building upon intrinsic risk, two indices are proposed to evaluate estimator performance within a bounded region: (i) the integral of the intrinsic risk with respect to the Riemannian volume over the specified region, and (ii) the maximum intrinsic risk attained within that region. The Riemannian volume induced by the Fisher information metric on the manifold associated with the parametric model provides a natural means of averaging the intrinsic risk. Using variational methods, integral inequalities of the Cramér–Rao type are derived for the mean squared integrated Rao distance of the estimators, thereby extending previous contributions by several authors. Furthermore, lower bounds for the maximum intrinsic risk are obtained through corresponding integral formulations. Full article
(This article belongs to the Section D1: Probability and Statistics)
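For reference, the classical parameterization-dependent Cramér–Rao bound that the intrinsic framework generalizes, for an unbiased estimator of a scalar parameter θ with Fisher information I(θ):

```latex
\operatorname{Var}_{\theta}\!\big(\hat{\theta}\big) \;\ge\; I(\theta)^{-1},
\qquad
I(\theta) \;=\; \mathbb{E}_{\theta}\!\left[\left(\frac{\partial}{\partial\theta}\,\log f(X;\theta)\right)^{\!2}\right].
```

The variance on the left is the quantity that changes under reparameterization; the intrinsic analysis replaces it with the (parameterization-free) expected squared Rao distance.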

43 pages, 701 KB  
Review
New Trends in the Use of Artificial Intelligence and Natural Language Processing for Occupational Risks Prevention
by Natalia Orviz-Martínez, Efrén Pérez-Santín and José Ignacio López-Sánchez
Safety 2026, 12(1), 7; https://doi.org/10.3390/safety12010007 - 8 Jan 2026
Viewed by 133
Abstract
In an increasingly technologized and automated world, workplace safety and health remain a major global challenge. After decades of regulatory frameworks and substantial technical and organizational advances, the expanding interaction between humans and machines and the growing complexity of work systems are gaining importance. In parallel, the digitalization of Industry 4.0/5.0 is generating unprecedented volumes of safety-relevant data and new opportunities to move from reactive analysis to proactive, data-driven prevention. This review maps how artificial intelligence (AI), with a specific focus on natural language processing (NLP) and large language models (LLMs), is being applied to occupational risk prevention across sectors. A structured search of the Web of Science Core Collection (2013–October 2025) combined OSH-related terms with AI, NLP and LLM terms. After screening and full-text assessment, 123 studies were retained and discussed. Early work relied on text mining and traditional machine learning to classify accident types and causes, extract risk factors and support incident analysis from free-text narratives. More recent contributions use deep learning to predict injury severity, potential serious injuries and fatalities (PSIF) and field risk control program (FRCP) levels and to fuse textual data with process, environmental and sensor information in multi-source risk models. The latest wave of studies deploys LLMs, retrieval-augmented generation and vision–language architectures to generate task-specific safety guidance, support accident investigation, map occupations and job tasks and monitor personal protective equipment (PPE) compliance. Together, these developments show that AI-, NLP- and LLM-based systems can exploit unstructured OSH information to provide more granular, timely and predictive safety insights. However, the field is still constrained by data quality and bias, limited external validation, opacity, hallucinations and emerging regulatory and ethical requirements. In conclusion, this review positions AI and LLMs as tools to support human decision-making in OSH and outlines a research agenda centered on high-quality datasets and rigorous evaluation of fairness, robustness, explainability and governance. Full article
(This article belongs to the Special Issue Advances in Ergonomics and Safety)

22 pages, 3994 KB  
Article
Sustainable Safety Planning on Two-Lane Highways: A Random Forest Approach for Crash Prediction and Resource Allocation
by Fahmida Rahman, Cidambi Srinivasan, Xu Zhang and Mei Chen
Sustainability 2026, 18(2), 635; https://doi.org/10.3390/su18020635 - 8 Jan 2026
Viewed by 86
Abstract
During the safety planning stage, accurate crash prediction tools are critical for prioritizing countermeasures and allocating resources effectively. Traditional statistical approaches, while long applied in this field, often depend on distributional assumptions that may introduce bias and limit model accuracy. To address these issues, studies have started exploring Machine Learning (ML)-based techniques for crash prediction, particularly for higher functional class roads. However, the application of ML models on two-lane highways remains relatively limited. This study aims to develop an approach to integrate traffic, geometric, and critically, speed-based factors in crash prediction using Random Forest (RF) and SHapley Additive exPlanations (SHAP) techniques. Comparative analysis shows that the RF model improves crash prediction accuracy by up to 25% over the traditional Zero-Inflated Negative Binomial model. SHAP analysis identified AADT, segment length, and average speed as the three most influential predictors of crash frequency, with speed emerging as a key operational factor alongside traditional exposure measures. The strong influence of speed in the RF–SHAP results underscores its critical role in the safety performance of two-lane highways and highlights the value of incorporating detailed operating characteristics into crash prediction models. Overall, the proposed RF–SHAP framework advances roadway safety assessment by offering both predictive accuracy and interpretability, allowing agencies to identify high-impact factors, prioritize countermeasures, and direct resources more efficiently. In doing so, the approach supports sustainable safety management by enabling evidence-based investments, promoting optimal use of limited transportation funds, and contributing to safer, more resilient mobility systems. Full article
(This article belongs to the Special Issue Sustainable Urban Mobility: Road Safety and Traffic Engineering)
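SHAP attributes a prediction to features via Shapley values — each feature's marginal contribution averaged over all orderings. For a toy two-feature crash model (feature names and values invented for illustration, not the paper's Random Forest), exact enumeration is straightforward:

```python
from itertools import permutations

# Toy "model" prediction as a function of which features are present;
# absent features are imagined replaced by a baseline. All values are made up.
def value(coalition):
    preds = {
        frozenset(): 1.0,                         # baseline prediction (crashes/yr)
        frozenset({"AADT"}): 2.2,
        frozenset({"speed"}): 1.6,
        frozenset({"AADT", "speed"}): 3.0,
    }
    return preds[frozenset(coalition)]

def shapley(features):
    """Average marginal contribution of each feature over all feature orderings."""
    phi = {f: 0.0 for f in features}
    orders = list(permutations(features))
    for order in orders:
        seen = set()
        for f in order:
            phi[f] += value(seen | {f}) - value(seen)
            seen.add(f)
    return {f: v / len(orders) for f, v in phi.items()}

print(shapley(["AADT", "speed"]))
```

The attributions always sum to the difference between the full prediction and the baseline (the "efficiency" property), which is what makes SHAP summaries interpretable; for real models with many features, libraries approximate this enumeration.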

30 pages, 12301 KB  
Article
Deep Learning 1D-CNN-Based Ground Contact Detection in Sprint Acceleration Using Inertial Measurement Units
by Felix Friedl, Thorben Menrad and Jürgen Edelmann-Nusser
Sensors 2026, 26(1), 342; https://doi.org/10.3390/s26010342 - 5 Jan 2026
Viewed by 247
Abstract
Background: Ground contact (GC) detection is essential for sprint performance analysis. Inertial measurement units (IMUs) enable field-based assessment, but their reliability during sprint acceleration remains limited when using heuristic and existing machine learning algorithms. This study introduces a deep learning one-dimensional convolutional neural network (1D-CNN) to improve GC event and GC time detection in sprint acceleration. Methods: Twelve sprint-trained athletes performed 60 m sprints while bilateral shank-mounted IMUs (1125 Hz) and synchronized high-speed video (250 Hz) captured the first 15 m. Video-derived GC events served as reference labels for model training, validation, and testing, using resultant acceleration and angular velocity as model inputs. Results: The optimized model (18 inception blocks, window = 100, stride = 15) achieved mean Hausdorff distances ≤ 6 ms and 100% precision and recall for both validation and test datasets (Rand Index ≥ 0.977). Agreement with video references was excellent (bias < 1 ms, limits of agreement ± 15 ms, r > 0.90, p < 0.001). Conclusions: The 1D-CNN surpassed heuristic and prior machine learning approaches in the sprint acceleration phase, offering robust, near-perfect GC detection. These findings highlight the promise of deep learning-based time-series models for reliable, real-world biomechanical monitoring in sprint acceleration tasks. Full article
(This article belongs to the Special Issue Inertial Sensing System for Motion Monitoring)
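A 1D-CNN of this kind consumes fixed-length windows cut from the continuous sensor stream; a minimal sketch of that windowing step using the window and stride reported above (the samples here are placeholders, not IMU data):

```python
def sliding_windows(signal, window=100, stride=15):
    """Split a 1-D signal into fixed-length overlapping windows (model inputs)."""
    return [signal[i:i + window]
            for i in range(0, len(signal) - window + 1, stride)]

# Stand-in for resultant-acceleration samples from a 1125 Hz shank IMU
samples = list(range(250))
wins = sliding_windows(samples, window=100, stride=15)
print(len(wins), len(wins[0]))
```

Overlapping windows (stride smaller than the window) let the model see every ground-contact event in several slightly shifted contexts, which helps localize event boundaries in time.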

28 pages, 6225 KB  
Article
Optimizing CO₂ Concentrations and Emissions Based on the WRF-Chem Model Integrated with the 3DVAR and EAKF Methods
by Wenhao Liu, Xiaolu Ling, Chenggang Li and Botao He
Remote Sens. 2026, 18(1), 174; https://doi.org/10.3390/rs18010174 - 5 Jan 2026
Viewed by 170
Abstract
This study developed a multi-source data assimilation system based on the WRF-Chem model integrated with 3DVAR and EAKF methods. By assimilating a multi-source satellite fused XCO₂ concentration dataset, the system achieved simultaneous optimization of CO₂ concentration fields and emission fluxes over China. During the December 2019 experiment, the system successfully reconstructed high-precision CO₂ concentration fields and dynamically corrected the MEIC inventory through emission error inversion derived from concentration differences before and after assimilation. Comparative analysis with the EDGAR inventory demonstrated the superior performance of the EAKF method, which reduced RMSE by 56% and increased the correlation coefficient to 0.360, while the 3DVAR method achieved a 9% RMSE reduction and improved the correlation coefficient to 0.294. In terms of total emissions, 3DVAR and EAKF increased national emissions by 13.6% and 5.1%, respectively, but reduced emissions in Xinjiang by 3.24 MT and 7.99 MT. A comparison of three simulation scenarios (prior emissions, 3DVAR-optimized, and EAKF-optimized) showed significant improvement over the EGG4 dataset, with systematic bias decreasing by approximately 75% and RMSE reduced by about 49%. The assimilation algorithm developed in this study provides a reliable methodological support for regional carbon monitoring and can be extended to multi-pollutant emissions and high-resolution satellite data integration. Full article
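The 3DVAR component seeks the state x minimizing the standard variational cost function, with background state x_b, background and observation error covariances B and R, observations y, and observation operator H (a textbook statement, not the specific WRF-Chem configuration):

```latex
J(\mathbf{x}) \;=\; \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
\;+\; \tfrac{1}{2}\,\big(\mathbf{y}-H(\mathbf{x})\big)^{\mathsf T}\mathbf{R}^{-1}\big(\mathbf{y}-H(\mathbf{x})\big).
```

The first term penalizes departure from the prior forecast, the second misfit to the satellite XCO₂ retrievals; EAKF instead updates an ensemble of states, which is how the system derives flow-dependent error statistics for the emission correction.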
35 pages, 635 KB  
Review
Researching Race: A Review of Principal Preparation Literature Through the Lens of Critical Race Methodology
by Rachel Roegman, Osly J. Flores and Joonkil Ahn
Educ. Sci. 2026, 16(1), 67; https://doi.org/10.3390/educsci16010067 - 3 Jan 2026
Abstract
The purpose of this systematic review was to synthesize the literature to better understand how the field has researched principal preparation in relation to race and racism. Using a critical race methodology (CRM) lens, we analyzed 36 studies of current candidates or recent graduates, with an emphasis on research design and methods. Studies chosen for inclusion were (1) empirical, (2) focused on principal preparation programs in the U.S., (3) focused on preparing candidates around issues related to race and racism, and (4) published between 2012 and 2024. Literature was drawn, in the summer of 2025, from three major databases that index journals in the field of educational leadership: ERIC, ProQuest, and Education Full Text. Restricting the search to peer-reviewed articles limits the comprehensiveness of the sampled literature and may exclude important information sources. To analyze the studies, we created a scoring rubric to assess the degree to which each article addressed each CRM tenet. To assess risk of bias, each article was scored by two authors, and a third author also scored the article if the first two disagreed. Our findings show that a focus on race and racism was present in most studies reviewed, and almost half centered on the experiences of candidates of color. However, most of the studies conformed to traditional research paradigms and methods, as illustrated by choices related to frameworks, methods, and data sources. We offer recommendations for researchers of principal preparation who are interested in more critical work related to race and racism, and we argue for increased opportunities for scholars to meet, discuss, and collaborate across institutions on how they study leadership preparation for racial equity. The review is registered through the Open Science Framework. Full article
(This article belongs to the Special Issue School Leadership and School Improvement, 2nd Edition)
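The two-rater-plus-tiebreaker scoring procedure in the abstract above can be expressed as a small rule. The abstract does not say how the third score resolves a disagreement, so taking the median of the three scores is purely an illustrative assumption, as is the function name.

```python
def resolve_score(score_a, score_b, third=None):
    """Resolve a rubric score rated independently by two authors.

    If the two raters agree, that score stands. On disagreement a third
    rater's score is required; the median of the three is taken here
    (an assumed resolution rule, not necessarily the study's).
    """
    if score_a == score_b:
        return score_a
    if third is None:
        raise ValueError("raters disagree; a third score is required")
    return sorted([score_a, score_b, third])[1]  # median of the three
```

For example, ratings of 2 and 4 with a tiebreaker of 3 resolve to 3 under this rule.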
22 pages, 6781 KB  
Article
Magnetic Circuit Design and Optimization of Tension–Compression Giant Magnetostrictive Force Sensor
by Long Li, Hailong Sun, Yingling Wei, Boda Li, Hongwei Cui and Ruifeng Liu
Sensors 2026, 26(1), 295; https://doi.org/10.3390/s26010295 - 2 Jan 2026
Abstract
The variable-pitch connecting rod of a helicopter bears axial tensile and compressive loads during operation, and the traditional strain-gauge load monitoring method is easily affected by external conditions. Therefore, a giant magnetostrictive (GM) tension and compression force sensor with a permanent-magnet bias is proposed and optimized. Because the bias magnetic field plays a decisive role in sensor performance, it is studied in depth here. Firstly, a mathematical model of the magnetic circuit is established, and the candidate magnetic circuits of the sensor are simulated and analyzed. Secondly, with the magnetic flux uniformity of the GMM rod as the evaluation index, the influence of the relative permeability of the magnetically conductive material and of the structural parameters on the magnetic flux of the circuit is systematically studied, and the optimal parameter combination is determined by an orthogonal test. The results show that a magnetic circuit without a magnetically conductive side wall better guides the magnetic flux through the GMM rod; after optimization, the magnetic flux uniformity of the GMM force sensor is increased by 7.44%, the magnetic flux density is increased by 13.9 mT, and the Hall output voltage increases proportionally by 1.125%. This provides an important reference for improving the utilization of GMM rods, and it also improves flight safety and reduces maintenance costs. Full article
(This article belongs to the Section Physical Sensors)
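The abstract above uses the magnetic flux uniformity of the GMM rod as its evaluation index but does not state the formula. The sketch below assumes one common definition, one minus the coefficient of variation (std/mean) of flux-density samples along the rod, where 1.0 means a perfectly uniform field; both the definition and the sample values are illustrative assumptions.

```python
import numpy as np

def flux_uniformity(b_samples):
    """Uniformity of axial flux density sampled along a GMM rod.

    Assumed metric (not necessarily the paper's): 1 - std/mean,
    so a perfectly uniform field scores exactly 1.0 and larger
    spatial variation lowers the score.
    """
    b = np.asarray(b_samples, dtype=float)
    return 1.0 - b.std() / b.mean()
```

Under this metric, flux-density profiles from two candidate magnetic circuits (e.g. with and without a side wall) can be compared directly: the design with the higher score biases the rod more evenly.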