Search Results (28,216)

Search Parameters:
Keywords = model based techniques

19 pages, 7148 KB  
Article
A Sensorless Rotor Position Detection Method for Permanent Magnet Synchronous Motors Based on High-Frequency Square Wave Voltage Signal Injection
by Anran Song, Zilong Feng, Bo Huang and Bowen Ning
Sensors 2026, 26(1), 28; https://doi.org/10.3390/s26010028 (registering DOI) - 19 Dec 2025
Abstract
To address the torque ripple and speed fluctuation issues in high-frequency square-wave injection-based sensorless control of interior permanent magnet synchronous motors (IPMSM) caused by low-order stator current harmonics (primarily the fifth and seventh), this paper proposes a harmonic voltage compensation strategy based on a sixth-order quasi-proportional resonant (QPR) controller, which effectively suppresses these specific harmonic disturbances. The proposed method builds upon conventional high-frequency square-wave injection: it first introduces a harmonic current extraction technique based on multiple synchronous reference frame transformations to accurately separate the fifth and seventh harmonic components; it then generates targeted compensation voltage commands according to the established harmonic voltage compensation equation; and it finally suppresses the corresponding harmonic currents through a sixth-order QPR controller connected in parallel with the current proportional-integral (PI) controller. This paper comprehensively establishes the mathematical models for harmonic extraction and voltage compensation, and conducts a detailed analysis of the parameter design of the sixth-order QPR controller. Simulation results demonstrate that the proposed strategy can significantly suppress stator current distortion, effectively reduce torque and speed ripples, and substantially improve rotor position estimation accuracy, thereby verifying the superiority of the novel harmonic-suppression-based sensorless control strategy. Full article
(This article belongs to the Section Industrial Sensors)

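To make the harmonic-suppression idea above concrete, the following is a minimal sketch (not the authors' implementation) of a quasi-proportional-resonant controller tuned to the sixth harmonic of the electrical frequency; the fundamental frequency, gains, and resonance bandwidth are assumed example values.

```python
# Illustrative sketch (assumed values, not the paper's design): frequency
# response of a quasi-proportional-resonant (QPR) controller centred on the
# 6th harmonic of the electrical frequency, as used in the synchronous frame
# to suppress 5th/7th stator current harmonics.
import numpy as np

f_e = 100.0                  # assumed electrical fundamental frequency [Hz]
w0 = 2 * np.pi * 6 * f_e     # resonant frequency: 6th harmonic [rad/s]
wc = 2 * np.pi * 5.0         # resonance bandwidth [rad/s] (assumed)
Kp, Kr = 1.0, 200.0          # proportional and resonant gains (assumed)

def qpr(w):
    """G(s) = Kp + 2*Kr*wc*s / (s^2 + 2*wc*s + w0^2), evaluated at s = j*w."""
    s = 1j * w
    return Kp + (2 * Kr * wc * s) / (s**2 + 2 * wc * s + w0**2)

for k in (1, 5, 6, 7):       # gain is largest at the 6th harmonic
    w = 2 * np.pi * k * f_e
    print(f"|G| at harmonic {k}: {abs(qpr(w)):.1f}")
```

Because the resonant term peaks at six times the electrical frequency in the synchronous frame, the parallel QPR branch adds high gain exactly where the fifth and seventh stator harmonics appear, which is the mechanism the abstract describes.
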
42 pages, 1891 KB  
Article
A Comparative Study of Discretization Methods for Model Predictive Current Control of Permanent Magnet Synchronous Motors
by Nevra Bayhan and Yasin Koçak
Processes 2026, 14(1), 14; https://doi.org/10.3390/pr14010014 (registering DOI) - 19 Dec 2025
Abstract
This study presents a systematic comparative analysis of nine stator current discretization methods within the Model Predictive Current Control (MPCC) framework for Permanent Magnet Synchronous Motors (PMSMs). These methods have generally been examined individually or in limited combinations in previous research, and this holistic and comprehensive comparison constitutes the core contribution of this work by addressing a significant gap in the existing literature. The investigated MPCC methods—Forward Euler (FE), Backward Euler (BE), Midpoint Euler (ME), Fourth-Order Runge–Kutta (RK4), Runge–Kutta Ralston (RKR), Taylor Series (TS), Verlet Integration (VI), Crank–Nicolson (CN), and Adams–Bashforth (AB)—are comprehensively evaluated for their dynamic performance, including speed tracking, torque response, settling time, rise time, overshoot, and Total Harmonic Distortion (THD). Additionally, these analysis results are benchmarked against conventional Proportional–Integral–Derivative (PID) and Field-Oriented Control (FOC) methods. In terms of key performance indicators, the MPCC–RKR method proved optimal for speed tracking under no-load conditions, achieving the lowest overshoot, specifically ranging from 0.097% to 1.450%. Conversely, MPCC–ME and MPCC–CN demonstrated superior transient performance under sudden-load conditions (1.7 Nm), yielding the smallest torque deviations and the fastest settling times. Specifically, MPCC–ME recorded the lowest overshoot (1.512%) at the 7 s load step, while MPCC–CN performed best at 9 s (1.220%) and 11 s (1.577%). Among the predictive schemes, the MPCC–RKR method achieved the highest current quality with a minimum THD of 3.69% at nominal speed. Finally, the applied statistical analysis techniques confirmed that the performance differences among the discretization methods are significant. The comparative analysis examines both the dynamic performance of the methods and the fundamental trade-off between accuracy and computational burden in MPCC design. Simple single-step explicit methods (FE, ME, RKR, VI, AB) offer low computational cost and are well suited for high-sampling-frequency real-time applications, especially with sufficiently small sampling times, whereas more complex multi-step or implicit methods (BE, RK4, TS, CN) may increase the processor load despite their potential gains in accuracy and stability. This study provides practical, evidence-based guidelines for selecting an optimal discretization method by balancing accuracy and dynamic performance requirements for PMSM applications. Full article
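
As a worked illustration of what the compared discretization schemes do inside an MPCC prediction step, here is a hedged sketch of the PMSM d-q current model discretized with Forward Euler and Midpoint Euler; the motor parameters, sampling time, and operating point are assumptions, not values from the study.

```python
# Illustrative sketch (assumed parameters): one MPCC prediction step of the
# PMSM stator-current model using Forward Euler and Midpoint Euler.
import numpy as np

Rs, Ld, Lq, psi_f = 0.5, 8e-3, 9e-3, 0.1   # assumed PMSM parameters (SI units)
Ts = 50e-6                                  # assumed sampling time [s]

def didt(i_dq, v_dq, w_e):
    """Continuous-time d-q current dynamics di/dt = f(i, v, w_e)."""
    i_d, i_q = i_dq
    v_d, v_q = v_dq
    did = (v_d - Rs * i_d + w_e * Lq * i_q) / Ld
    diq = (v_q - Rs * i_q - w_e * (Ld * i_d + psi_f)) / Lq
    return np.array([did, diq])

def forward_euler(i_dq, v_dq, w_e):
    return i_dq + Ts * didt(i_dq, v_dq, w_e)

def midpoint_euler(i_dq, v_dq, w_e):
    i_mid = i_dq + 0.5 * Ts * didt(i_dq, v_dq, w_e)
    return i_dq + Ts * didt(i_mid, v_dq, w_e)

i_k = np.array([0.0, 5.0])        # current sample [i_d, i_q] (assumed)
v_k = np.array([10.0, 80.0])      # candidate voltage vector (assumed)
print(forward_euler(i_k, v_k, 2 * np.pi * 100))
print(midpoint_euler(i_k, v_k, 2 * np.pi * 100))
```
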
21 pages, 4249 KB  
Article
Practical Method for Evaluating the Element Sensitivity Variation of an Ultrasonic Annular Phased Array Transducer
by Zhengxiao Sha, Xiao Liu, Yanze Liu, Xiao Wang and Xiaoming Zhou
Sensors 2026, 26(1), 25; https://doi.org/10.3390/s26010025 (registering DOI) - 19 Dec 2025
Abstract
The unique features of annular phased array transducers, such as ring-shaped elements and the concentric configuration, cause them to behave differently from commonly used linear array transducers, in terms of sound field distribution and pulse–echo response. Consequently, standard techniques for assessing linear array transducers can introduce significant errors when applied to annular array transducers, especially concerning element-to-element sensitivity variance. This study investigates the consistency of element sensitivity in annular phased array transducers. Through theoretical analysis, a Long-Belt source assumption model was developed based on the Rayleigh integral to characterize the responses of ring-shaped elements in an analytical and explicit form. The model suggests that the response amplitude is linearly correlated with the radial width of the element, which was validated by subsequent numerical simulations. Based on these findings, a modified sensitivity evaluation algorithm for annular array transducers is presented. The response voltage per unit width, rather than the total response voltage, is used to eliminate the influence of varying geometries and sizes across elements. The sensitivity variation of a 32-element annular array transducer was evaluated using the new algorithm. Compared to the uncorrected measurement, the maximum sensitivity variation was reduced significantly from 25 dB to 6 dB, revealing the transducer’s intrinsic consistency despite the different geometric features of each element. Due to its distinct geometry compared to the ring-shaped elements, the central element cannot be corrected or evaluated using this method. These results suggest that the proposed algorithm enables the more accurate evaluation of sensitivity consistency for annular phased array transducers, thereby improving measurement reliability in practical applications. Full article
(This article belongs to the Collection Ultrasound Transducers)
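
The width-normalization step described in the abstract reduces, in code, to dividing each element's response voltage by its radial width before computing the dB spread; the sketch below uses invented geometries and amplitudes purely to illustrate the calculation.

```python
# Minimal sketch (made-up example values) of the width-normalized sensitivity
# comparison: response voltage per unit radial width replaces total voltage
# before the element-to-element variation in dB is computed.
import numpy as np

r_inner = np.array([1.0, 3.2, 4.6, 5.7])    # inner radii of ring elements [mm] (assumed)
r_outer = np.array([3.0, 4.4, 5.5, 6.4])    # outer radii [mm] (assumed)
v_peak  = np.array([2.1, 1.3, 1.0, 0.8])    # measured pulse-echo amplitudes [V] (assumed)

width = r_outer - r_inner                   # radial width of each element
raw_var = 20 * np.log10(v_peak.max() / v_peak.min())

v_per_width = v_peak / width                # response voltage per unit width
corr_var = 20 * np.log10(v_per_width.max() / v_per_width.min())

print(f"uncorrected variation: {raw_var:.1f} dB")
print(f"width-corrected variation: {corr_var:.1f} dB")
```
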
40 pages, 5917 KB  
Article
A Framework for Budget-Constrained Zero-Day Cyber Threat Mitigation: A Knowledge-Guided Reinforcement Learning Approach
by Mainak Basak and Geon-Yun Shin
Sensors 2026, 26(1), 21; https://doi.org/10.3390/s26010021 - 19 Dec 2025
Abstract
Conventional machine-learning-based defenses are unable to generalize well to novel chains of ATT&CK actions. They are also inefficient under low telemetry budgets and unable to provide causal explainability and auditing. We propose a knowledge-based cyber-defense framework that integrates ATT&CK-constrained model generation, budget-constrained reinforcement learning, and graph-based causal explanation into a single auditable pipeline. The framework formalizes the synthesis of zero-day attack chains using a grammar-formalized ATT&CK database and compiles them into Zeek-aligned witness telemetry. This allows for efficient training of detection using the generated data within limited sensor budgets. The Cyber-Threat Knowledge Graph (CTKG) stores dynamically updated inter-relational semantics between tactics, techniques, hosts, and vulnerabilities. This enhances the decision state using causal relations. The sensor budget policy selects the sensing and containment decisions within explicit bounds of cost and latency. The inherent defense-provenance features enable a traceable explanation of each generated alarm. Extensive evaluations of the framework using the TTP holdouts of the zero-day instances show remarkable improvements over conventional techniques in terms of low-FPR accuracy, TTD, and calibration. Full article
(This article belongs to the Special Issue Cyber Security and AI—2nd Edition)
13 pages, 1328 KB  
Article
Microneedle-Array-Electrode-Based ECG with PPG Sensor for Cuffless Blood Pressure Estimation
by Zeeshan Haider, Daesoo Kim, Soyoung Yang, Sungmin Lee, Hyunmoon Park and Sungbo Cho
Appl. Sci. 2026, 16(1), 35; https://doi.org/10.3390/app16010035 - 19 Dec 2025
Abstract
Continuous blood pressure (BP) measurement is essential for real-time hypertension management and the prevention of related complications. To address this need, cuffless BP estimation techniques utilizing biosignals from wearable devices have gained significant attention. This study proposes a feasibility approach that integrates microneedle array electrodes (MNE) for ECG acquisition with photoplethysmogram (PPG) sensors for cuffless BP estimation. The algorithm employed is a baseline multivariate regression model using PTT and RR intervals, while the novelty lies in the hardware design aimed at improving signal quality and long-term wearability. The algorithm’s performance was validated using the Medical Information Mart for Intensive Care (MIMIC) database, achieving a mean error range of ±5.28 mmHg for the SBP and ±2.81 mmHg for the DBP. Additionally, the comparison of 253 measurements from three volunteers against an automated sphygmomanometer indicated an accuracy within ±25%. Therefore, these findings demonstrate the feasibility of integrating MNE-based ECG with PPG for cuffless monitoring of SBP and DBP in daily life. The MIMIC-based evaluation was performed to verify the feasibility of the regression model under ideal public-database conditions. The volunteer experiment, performed with the developed MNE-ECG hardware, served as a separate preliminary feasibility test to observe hardware behavior in real-world measurements. Full article
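
For readers unfamiliar with the baseline model mentioned above, this is a minimal least-squares sketch of a multivariate regression from pulse transit time (PTT) and RR interval to SBP; the feature values and reference pressures are synthetic placeholders, not MIMIC or volunteer data.

```python
# Hedged sketch: SBP modelled as a linear function of PTT and RR interval,
# fitted by ordinary least squares. All numbers below are synthetic.
import numpy as np

ptt = np.array([0.22, 0.24, 0.26, 0.28, 0.30])   # pulse transit time [s] (synthetic)
rr  = np.array([0.80, 0.84, 0.91, 0.94, 1.02])   # RR interval [s] (synthetic)
sbp = np.array([128, 124, 119, 116, 112])        # reference SBP [mmHg] (synthetic)

X = np.column_stack([ptt, rr, np.ones_like(ptt)])    # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, sbp, rcond=None)        # [a, b, c] in SBP = a*PTT + b*RR + c

sbp_hat = X @ coef
print("coefficients:", coef)
print("mean absolute error [mmHg]:", np.mean(np.abs(sbp_hat - sbp)))
```
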
42 pages, 849 KB  
Article
Evaluating Pancreatic Cancer Treatment Strategies Using a Novel Polytopic Fuzzy Tensor Approach
by Muhammad Bilal, Chaoqian Li, A. K. Alzahrani and A. K. Aljahdali
Bioengineering 2026, 13(1), 2; https://doi.org/10.3390/bioengineering13010002 (registering DOI) - 19 Dec 2025
Abstract
In response to the growing complexity and uncertainty in real-world decision-making, this study introduces a novel framework based on the polytopic fuzzy tensor (PFT) model, which unifies the geometric structure of polytopes with the representational power of fuzzy tensors. The PFT framework is specifically designed to handle high-dimensional, imprecise, and ambiguous information commonly encountered in multi-criteria group decision-making scenarios. To support this framework, we define a suite of algebraic operations, aggregation mechanisms, and theoretical properties tailored to the PFT environment, with comprehensive mathematical formulations and illustrative validations. The effectiveness of the proposed method is demonstrated through a real-world application involving the evaluation of six pancreatic cancer treatment strategies. These alternatives are assessed against five key criteria: quality of life, side effects, treatment accessibility, cost, and duration. Our results reveal that the PFT-based approach outperforms traditional fuzzy decision-making techniques by delivering more consistent, interpretable, and reliable outcomes under uncertainty. Moreover, comparative analysis confirms the model’s superior ability to handle multidimensional expert evaluations and integrate conflicting information. This research contributes a significant advancement in the field of fuzzy decision science by offering a flexible, theoretically sound, and practically applicable tool for complex decision problems. Future work will focus on improving computational performance, adapting the model for real-time data, and exploring broader interdisciplinary applications. Full article
(This article belongs to the Section Biosignal Processing)

30 pages, 2583 KB  
Article
Prediction of Water Quality Parameters in the Paraopeba River Basin Using Remote Sensing Products and Machine Learning
by Rafael Luís Silva Dias, Ricardo Santos Silva Amorim, Demetrius David da Silva, Elpídio Inácio Fernandes-Filho, Gustavo Vieira Veloso and Ronam Henrique Fonseca Macedo
Sensors 2026, 26(1), 18; https://doi.org/10.3390/s26010018 - 19 Dec 2025
Abstract
Monitoring surface water quality is essential for assessing water resources and identifying their quality patterns. Traditional monitoring methods, based on conventional point-sampling stations, are reliable but costly and limited in frequency and spatial coverage. These constraints hinder the ability to evaluate water quality parameters at the temporal and spatial scales required to detect the effects of extreme events on aquatic systems. Satellite imagery offers a viable complementary alternative to enhance the temporal and spatial monitoring scales of traditional assessment methods. However, limitations related to spectral, spatial, temporal, and/or radiometric resolution still pose significant challenges to prediction accuracy. This study aimed to propose a methodology for predicting optically active and inactive water quality parameters in lotic and lentic environments using remote-sensing data and machine-learning techniques. Three remote-sensing datasets were organized and evaluated: (i) data extracted from Sentinel-2 imagery; (ii) data obtained from raw PlanetScope (PS) imagery; and (iii) data from PS imagery normalized using the methodology developed by Dias. Data on water quality parameters were collected from 24 monitoring stations located along the Paraopeba River channel and the Três Marias Reservoir, covering the period from 2016 to 2023. Four machine-learning algorithms were applied to predict water quality parameters: Random Forest, k-Nearest Neighbors, Support Vector Machines with Radial Basis Function Kernel, and Cubist. Model performance was evaluated using four statistical metrics: root-mean-square error, mean absolute error, Lin′s concordance correlation coefficient, and the coefficient of determination. Models based on normalized PS data achieved the best performance in parameter estimation. Additionally, decision-tree-based algorithms showed superior generalization capability, outperforming the other models tested. The proposed methodology proved suitable for this type of analysis, confirming not only the applicability of PS data but also providing relevant insights for its use in diverse environmental-monitoring applications. Full article
(This article belongs to the Section Sensing and Imaging)

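The four evaluation metrics named in the abstract can be computed as follows; the observed and predicted values are placeholders used only to show the formulas (note in particular Lin's concordance correlation coefficient, which penalizes both scatter and bias).

```python
# Sketch of the evaluation metrics listed above (RMSE, MAE, Lin's concordance
# correlation coefficient, R^2) for predicted vs. observed water-quality values.
# The arrays are placeholder numbers for illustration only.
import numpy as np

obs  = np.array([12.0, 15.5, 9.8, 20.1, 17.3])   # observed parameter values (placeholder)
pred = np.array([11.4, 16.2, 10.5, 18.9, 17.8])  # model predictions (placeholder)

rmse = np.sqrt(np.mean((pred - obs) ** 2))
mae  = np.mean(np.abs(pred - obs))
r2   = 1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Lin's concordance correlation coefficient
cov = np.mean((obs - obs.mean()) * (pred - pred.mean()))
ccc = 2 * cov / (obs.var() + pred.var() + (obs.mean() - pred.mean()) ** 2)

print(f"RMSE={rmse:.2f}  MAE={mae:.2f}  R2={r2:.3f}  CCC={ccc:.3f}")
```
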
30 pages, 16381 KB  
Article
Research on Ship Hull Hybrid Surface Mesh Generation Algorithm Based on Ship Surface Curvature Features
by Wenyang Duan, Peixin Zhang, Kuo Yang, Limin Huang, Yuanqing Sun and Jikang Chen
J. Mar. Sci. Eng. 2026, 14(1), 8; https://doi.org/10.3390/jmse14010008 - 19 Dec 2025
Abstract
Mesh generation is a critical preprocessing step in Computational Fluid Dynamics. In ship hydrodynamics, existing mesh generation methods lack adaptability to complex hull surface geometries, necessitating repeated optimization. To address these issues, a new hybrid mesh generation strategy was proposed, integrating Non-Uniform Rational B-Spline surface interpolation, advancing front technique, hull surface curvature features, and mesh quality evaluation parameters. Firstly, the ship hull surface was partitioned into multiple regions, and each region was assigned a specific mesh type. Subsequently, the adaptively sized mesh was generated based on local curvature variations. Finally, the angle skewness was employed as an objective function to improve the mesh quality. In addition, considering the actual ship model as an example, the mesh generated by our method and conventional Laplacian smoothing method were used to perform first-order potential flow simulations, and the results were compared against the convergence values. The results indicated that our method has lower root mean square errors in computing the total non-viscous force, steady drift force and ship hull free floating Response Amplitude Operator. This method is applicable to numerical simulations of the ship potential flow, providing high-quality hull meshes for hydrodynamic analysis. Full article
(This article belongs to the Section Ocean Engineering)

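The angle-skewness objective mentioned above is, in its standard equiangular form, a simple function of an element's interior angles; the sketch below applies that textbook definition (not necessarily the exact variant used in the paper) to example triangle and quadrilateral elements.

```python
# Hedged sketch of the equiangular (angle) skewness quality metric:
# 0 = ideal element, 1 = fully degenerate. Example angles are arbitrary.
def angle_skewness(angles_deg, theta_e):
    """Equiangular skewness for an element with ideal interior angle theta_e."""
    a_max, a_min = max(angles_deg), min(angles_deg)
    return max((a_max - theta_e) / (180.0 - theta_e),
               (theta_e - a_min) / theta_e)

tri_angles = [75.0, 60.0, 45.0]                    # interior angles of an example triangle
print(angle_skewness(tri_angles, theta_e=60.0))    # ideal angle 60 deg for triangles
quad_angles = [100.0, 95.0, 85.0, 80.0]
print(angle_skewness(quad_angles, theta_e=90.0))   # ideal angle 90 deg for quads
```
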
23 pages, 9084 KB  
Article
Quantifying Torrential Watershed Behavior over Time: A Synergistic Approach Using Classical and Modern Techniques
by Ana M. Petrović, Laure Guerit, Valentina Nikolova, Ivan Novković, Dobromir Filipov and Jiří Jakubínský
Earth 2026, 7(1), 1; https://doi.org/10.3390/earth7010001 - 19 Dec 2025
Abstract
This study investigates temporal and spatial variation in torrential flood hazards and sediment dynamics in two ungauged watersheds in southeastern Serbia from 1991 to 2023. By integrating classical hydrological models with modern geospatial and photogrammetric techniques, watershed responses to environmental and anthropogenic changes are quantified. Torrential flood potential was estimated and peak discharges were calculated using both the rational and SCS-Unit hydrograph methods, while sediment transport was assessed through Gavrilović’s erosion potential model and a modified Poljakov model. A key innovation is the use of UAV-based and close-range photogrammetry for 3D grain-size analysis, marking the first such application in Serbia. The mean torrential flood potential decreased by 4.4% in the Petrova Watershed and 4.2% in the Rasnička Watershed. Specific peak discharges for a 100-year return period declined from 1.62 to 1.07 m3·s−1·km−2 in Petrova and from 1.60 to 1.34 m3·s−1·km−2 in Rasnička. Sediment transport during a 1% probability flood was reduced from 4.97 to 2.53 m3·s−1 in Petrova and from 13.87 to 9.48 m3·s−1 in Rasnička. Grain-size analyses revealed immobile coarse bedload in the Petrova and active sediment transport in the Rasnička River, where D50 and D90 decreased between 2023 and 2024. The findings highlight the effectiveness of a synergistic methodological approach for analyzing complex watershed processes in data-scarce regions. The study provides a replicable model for flood hazard assessment and erosion control planning in similar mountainous environments undergoing socio-environmental transitions. Full article

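As a reminder of one of the classical tools cited above, the rational method estimates peak discharge as Q = 0.278 · C · i · A (Q in m3/s, i in mm/h, A in km2); the runoff coefficient, rainfall intensity, and catchment area below are assumed example values, not data from the Petrova or Rasnička watersheds.

```python
# Illustrative sketch of the rational method for peak discharge (assumed inputs).
def rational_peak_discharge(c_runoff, intensity_mm_h, area_km2):
    """Peak discharge in m^3/s from Q = 0.278 * C * i * A."""
    return 0.278 * c_runoff * intensity_mm_h * area_km2

area_km2 = 25.0
q_peak = rational_peak_discharge(c_runoff=0.35, intensity_mm_h=40.0, area_km2=area_km2)
print(f"Q = {q_peak:.1f} m^3/s, specific discharge = {q_peak / area_km2:.2f} m^3/s/km^2")
```
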
39 pages, 9543 KB  
Article
A Hybrid PCA-TOPSIS and Machine Learning Approach to Basin Prioritization for Sustainable Land and Water Management
by Mustafa Aytekin, Semih Ediş and İbrahim Kaya
Water 2026, 18(1), 5; https://doi.org/10.3390/w18010005 - 19 Dec 2025
Abstract
Population expansion, urban development, climate change, and changing precipitation patterns are complicating sustainable natural resource management. Subbasin prioritization enhances the efficiency and cost-effectiveness of resource management. Artificial intelligence and data analytics overcome the constraints of traditional methodologies, facilitating more precise evaluations of soil erosion, water management, and environmental risks. This research has created a comprehensive decision support system for the multidimensional assessment of sub-basins. The Erosion and Flood Risk-Based Soil Protection (EFR), Socio-Economic Integrated Basin Management (SEW), and Prioritization Based on Basin Water Yield (PBW) functions were utilized to prioritize sustainability objectives. EFR addresses erosion and flood risks, PBW evaluates water yield potential, and SEW integrates socio-economic drivers that directly influence water use and management feasibility. Our approach integrates principal component analysis–technique for order preference by similarity to ideal solution (PCA–TOPSIS) with machine learning (ML) and provides a scalable, data-driven alternative to conventional methods. The combination of machine learning algorithms with PCA and TOPSIS not only improves analytical capabilities but also offers a scalable alternative for prioritization under changing data scenarios. Among the models, support vector machine (SVM) achieved the highest performance for PBW (R2 = 0.87) and artificial neural networks (ANNs) performed best for EFR (R2 = 0.71), while random forest (RF) and gradient boosting machine (GBM) models exhibited stable accuracy for SEW (R2 ~ 0.65–0.69). These quantitative results confirm the robustness and consistency of the proposed hybrid framework. The findings show that several sub-basins should be prioritized for sustainable land and water resources management; these areas are generally of high priority according to different risk and management criteria. For these basins, it is suggested that comprehensive local-scale studies be carried out, ensuring that preventive and remedial measures are prioritized for implementation. This framework not only identifies the highest-priority sub-basins but also provides actionable information for sustainable watershed management under changing climatic and economic conditions. Full article
(This article belongs to the Special Issue Application of Machine Learning in Hydrologic Sciences)

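To clarify the TOPSIS step of the hybrid framework, here is a compact sketch of the standard procedure (vector normalization, weighting, ideal and anti-ideal solutions, relative closeness); the decision matrix, weights, and criterion directions are invented for illustration, whereas in the paper the weights are derived from PCA.

```python
# Minimal TOPSIS sketch for a sub-basin prioritization step (invented data).
import numpy as np

X = np.array([[0.62, 12.0, 3.1],      # rows: sub-basins, cols: criteria (invented)
              [0.48, 18.0, 2.4],
              [0.70,  9.5, 4.0]])
w = np.array([0.5, 0.3, 0.2])              # criterion weights (e.g. from PCA loadings)
benefit = np.array([True, False, True])    # True: larger is better; False: cost criterion

V = w * X / np.linalg.norm(X, axis=0)      # weighted, vector-normalized matrix
ideal      = np.where(benefit, V.max(0), V.min(0))
anti_ideal = np.where(benefit, V.min(0), V.max(0))

d_plus  = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti_ideal, axis=1)
closeness = d_minus / (d_plus + d_minus)   # higher = higher priority

print("priority order (best first):", np.argsort(-closeness))
```

Ranking by relative closeness is what turns the multi-criteria scores into the priority classes discussed in the abstract.
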
30 pages, 16196 KB  
Article
In Silico Optimization of Inhibitors of the 3-Chymotrypsin-like Protease of SARS-CoV-2
by Issouf Fofana, Brice Dali, Mawa Koné, Katarina Sujova, Eugene Megnassan, Stanislav Miertus and Vladimir Frecer
Life 2026, 16(1), 6; https://doi.org/10.3390/life16010006 - 19 Dec 2025
Abstract
In this study, new improved inhibitors of the viral enzyme 3-chymotrypsin-like protease (3CLpro) were designed using structure-based drug design techniques in an effort to discover more effective treatment of coronavirus disease 2019 (COVID-19). Three-dimensional models of 3CLpro–inhibitor complexes were prepared by in situ modification of the crystal structure of the submicromolar covalent inhibitor IPCL6 for a set of 25 known inhibitors with published inhibitory potencies (IC50exp). The QSAR model was prepared with a reasonable correlation between the calculated free energies of formation of the 3CLpro-IPCL complex (∆∆Gcom) and the experimentally determined activities IC50exp, which explained approximately 92% of the variation in the 3CLpro inhibition data. A similar agreement was achieved for the QSAR pharmacophore model (PH4) built on the basis of the active conformations of the IPCL inhibitors bound at the active site of the 3CLpro. The virtual combinatorial library of more than 567,000 IPCL analogues was screened in silico using the PH4 model and resulted in the identification of 39 promising analogues. The best inhibitors designed in this study show high predicted affinity for the 3CLpro protease, as well as favourable predicted ADME properties. For the best new virtual inhibitor candidate IPCL 80-27-74-4, the inhibitory concentration IC50pre was predicted equal to 0.8 nM, which represents a significant improvement in the inhibitory potency of known IPCLs. Ultimately, molecular dynamics simulations of the 12 newly designed top-scoring IPCL inhibitors demonstrated that the 3CLpro–inhibitor complexes exhibited good structural stability, confirming the potential for further development of the designed IPCL analogues. Full article
(This article belongs to the Section Biochemistry, Biophysics and Computational Biology)

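The QSAR model described above is, at its core, a linear relation between the computed relative binding free energy and experimental activity; the sketch below fits such a relation on synthetic placeholder values (not the paper's 25-inhibitor training set) and uses it to predict an IC50 for a hypothetical new analogue.

```python
# Hedged sketch: regress pIC50 = -log10(IC50_exp) against the computed relative
# complexation free energy ddG_com. All numbers are synthetic placeholders.
import numpy as np

ddG_com  = np.array([-8.2, -9.1, -10.4, -11.0, -12.3])         # kcal/mol (synthetic)
ic50_exp = np.array([2.0e-6, 9.0e-7, 3.0e-7, 1.5e-7, 4.0e-8])  # mol/L (synthetic)

p_ic50 = -np.log10(ic50_exp)
a, b = np.polyfit(ddG_com, p_ic50, deg=1)        # pIC50 = a*ddG_com + b
fit = a * ddG_com + b
r2 = 1 - np.sum((p_ic50 - fit) ** 2) / np.sum((p_ic50 - p_ic50.mean()) ** 2)
print(f"slope={a:.3f}, intercept={b:.2f}, R^2={r2:.3f}")

# Predicted IC50 for a new analogue from its computed ddG_com (hypothetical value)
print("predicted IC50 [M]:", 10 ** -(a * (-13.0) + b))
```
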
23 pages, 5674 KB  
Article
OH* 3D Concentration Measurement of Non-Axisymmetric Flame via Near-Ultraviolet Volumetric Emission Tomography
by Junhui Ma, Lingxue Wang, Dongqi Chen, Dezhi Zheng, Guoguo Kang and Yi Cai
Sensors 2026, 26(1), 9; https://doi.org/10.3390/s26010009 - 19 Dec 2025
Abstract
Measuring the three-dimensional (3D) concentration of the ubiquitous intermediate OH* across combustion systems, spanning carbon-based fuels to zero-carbon alternatives such as H2 and NH3, provides vital insights into flame topology, reaction pathways, and emission formation mechanisms. Optical imaging methods have attracted significant interest owing to their non-intrusiveness in the combustion process. However, achieving accurate 3D concentration of OH* via imaging in non-axisymmetric flames remains challenging. This work presents a near-ultraviolet (NUV) volumetric emission tomography-based OH* measuring method that integrates a three-layer OH* imaging model, a calibration procedure utilizing narrow-band NUV radiometry, and a threshold-constrained Local Filtered Back-Projection Simultaneous Algebraic Reconstruction Technique (LFBP-SART) algorithm. When applied to a non-axisymmetric Bunsen flame, the method reveals multiple small flame structures matching the fairing pattern in the reconstructed 3D OH* field, with a maximum OH* molar concentration of approximately 0.04 mol/m3 and an overall relative uncertainty of about 8.7%. Given its straightforward requirements, this technique is considered adaptable to other free radicals. Full article
(This article belongs to the Special Issue Digital Image Processing and Sensing Technologies—Second Edition)

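For orientation, the SART component of the LFBP-SART reconstruction iteratively redistributes the projection residuals back onto the emission field; the toy example below shows that update on a 3-voxel, 3-ray system (without the filtered-back-projection initialization or the paper's local threshold constraint).

```python
# Simplified SART update sketch on a toy system (not the paper's pipeline).
# A maps voxel emissions to detector readings; b holds the measured projections.
import numpy as np

A = np.array([[1.0, 1.0, 0.0],       # toy system matrix (rays x voxels)
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
b = np.array([2.0, 3.0, 3.0])        # toy measured projections
x = np.zeros(3)                      # emission field estimate
lam = 0.5                            # relaxation factor

row_sum = A.sum(axis=1)              # sum over voxels for each ray
col_sum = A.sum(axis=0)              # sum over rays for each voxel

for _ in range(50):                  # SART iterations
    residual = (b - A @ x) / row_sum
    x += lam * (A.T @ residual) / col_sum
    x = np.clip(x, 0.0, None)        # non-negativity constraint

print(x)                             # converges towards [1, 1, 2]
```
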
29 pages, 3634 KB  
Article
Human–AI Complementarity in Peer Review: Empirical Analysis of PeerJ Data and Design of an Efficient Collaborative Review Framework
by Zhihe Yang, Xiaoyu Zhou, Yuxin Jiang, Xinjie Zhang, Qihui Gao, Yanzhu Lu and Anqi Yang
Publications 2026, 14(1), 1; https://doi.org/10.3390/publications14010001 - 19 Dec 2025
Abstract
In response to the persistent imbalance between the rapid expansion of scholarly publishing and the constrained availability of qualified reviewers, an empirical investigation was conducted to examine the feasibility and boundary conditions of employing Large Language Models (LLMs) in journal peer review. A parallel corpus of 493 pairs of human expert reviews and GPT-4o-generated reviews was constructed from the open peer-review platform PeerJ Computer Science. Analytical techniques, including keyword co-occurrence analysis, sentiment and subjectivity assessment, syntactic complexity measurement, and n-gram distributional entropy analysis, were applied to compare cognitive patterns, evaluative tendencies, and thematic coverage between human and AI reviewers. The results indicate that human and AI reviews exhibit complementary functional orientations. Human reviewers were observed to provide integrative and socially contextualized evaluations, while AI reviews emphasized structural verification and internal consistency, especially regarding the correspondence between abstracts and main texts. Contrary to the assumption of excessive leniency, GPT-4o-generated reviews demonstrated higher critical density and functional rigor, maintaining substantial topical alignment with human feedback. Based on these findings, a collaborative human–AI review framework is proposed, in which AI systems are positioned as analytical assistants that conduct structured verification prior to expert evaluation. Such integration is expected to enhance the efficiency, consistency, and transparency of the peer-review process and to promote the sustainable development of scholarly communication. Full article
(This article belongs to the Special Issue AI in Academic Metrics and Impact Analysis)

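One of the analytical techniques listed above, n-gram distributional entropy, can be illustrated in a few lines: the Shannon entropy of a review's bigram frequency distribution, computed here with a naive whitespace tokenizer and two made-up review snippets.

```python
# Hedged sketch of bigram distributional entropy for comparing review texts.
# Tokenization is a simple whitespace split; the snippets are invented examples.
from collections import Counter
import math

def bigram_entropy(text):
    tokens = text.lower().split()
    bigrams = list(zip(tokens, tokens[1:]))
    counts = Counter(bigrams)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

human_review = "the methodology is sound but the evaluation section needs more baselines"
ai_review = "the paper is well structured and the paper is clearly written overall"
print(bigram_entropy(human_review), bigram_entropy(ai_review))
```
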
19 pages, 2690 KB  
Article
Pattern Learning and Knowledge Distillation for Single-Cell Data Annotation
by Ming Zhang, Boran Ren and Xuedong Li
Biology 2026, 15(1), 2; https://doi.org/10.3390/biology15010002 - 19 Dec 2025
Abstract
Transferring cell type annotations from a reference dataset to a query dataset is a fundamental problem in AI-based single-cell data analysis. However, single-cell measurement techniques lead to domain gaps between multiple batches or datasets. Existing deep learning methods do not consider batch integration when learning reference annotations, which is a challenge for cell type annotation across multiple query batches. For cell representation, batch integration can not only eliminate the gaps between batches or datasets but also improve the heterogeneity of cell clusters. In this study, we propose PLKD, a cell type annotation method based on pattern learning and knowledge distillation. PLKD consists of a Teacher (Transformer) and a Student (MLP). The Teacher groups all input genes (features) into different gene sets (patterns), and each pattern represents a specific biological function. This design enables the model to focus on biologically relevant functional interactions rather than gene-level expression, which is susceptible to batch gaps. In addition, knowledge distillation makes the lightweight Student resistant to noise, allowing it to infer quickly and robustly. Furthermore, PLKD supports multi-modal cell type annotation, multi-modal integration, and other tasks. Benchmark experiments demonstrate that PLKD achieves accurate and robust cell type annotation. Full article

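The Teacher–Student distillation described above typically combines a softened-probability matching term with the ordinary classification loss; the sketch below shows that standard objective with placeholder logits and an assumed temperature and mixing weight, since the paper's exact loss formulation is not given in the abstract.

```python
# Minimal knowledge-distillation objective sketch (placeholder values): the
# lightweight student matches the teacher's temperature-softened class
# probabilities in addition to the hard cell-type label.
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

teacher_logits = np.array([4.0, 1.5, 0.2])   # Transformer teacher output (placeholder)
student_logits = np.array([3.2, 1.9, 0.5])   # MLP student output (placeholder)
label = 0                                    # hard cell-type label
T = 2.0                                      # distillation temperature (assumed)

p_t, p_s = softmax(teacher_logits, T), softmax(student_logits, T)
kd_loss = np.sum(p_t * (np.log(p_t) - np.log(p_s))) * T**2   # KL(teacher || student)
ce_loss = -np.log(softmax(student_logits)[label])            # standard cross-entropy
alpha = 0.5                                                  # loss mixing weight (assumed)
print("total loss:", alpha * kd_loss + (1 - alpha) * ce_loss)
```
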
15 pages, 3996 KB  
Article
3D-Printed Ceramic Solutions for Passive Cooling and CO2 Adsorption: Investigating Material and Fabrication Parameters in LDM for New Eco-Sustainable Design Paradigms
by Vaia Tsiokou, Despoina Antypa, Anna Karatza and Elias P. Koumoulos
Sustainability 2026, 18(1), 13; https://doi.org/10.3390/su18010013 - 19 Dec 2025
Abstract
This study investigates the material and fabrication selection criteria for 3D-printed aluminosilicate components aimed at passive cooling and CO2 adsorption in indoor conditions, considering their manufacturing environmental impact. The dual-function components were fabricated using Liquid Deposition Modelling (LDM), an Additive Manufacturing (AM) technique utilising customised slurry-based feedstock materials. To assess the environmental implications of the production process, the study employs the Life Cycle Assessment (LCA) methodology, a standardised framework used to quantify potential environmental impacts across the product’s life cycle. The study outlines a systematic approach to material and fabrication process selection, focusing on the functional properties required, the importance of locally sourced materials, and the constraints imposed by the fabrication techniques. The fabrication methodology was analysed for material/energy efficiency and waste generation. Post-processing stages were evaluated to identify opportunities for energy savings, particularly by exploring Low-Temperature Firing (LTF). The selected criteria proved effective in enhancing shaping control and minimising shrinkage variability, with a recorded weight loss of 3.04% via LTF. The LCA results indicated that the 23% reduction in climate change impact was primarily driven by the lower electricity demand of the LTF Protocol, demonstrating that energy-efficient post-processing is a critical lever for sustainable ceramic fabrication. Full article
(This article belongs to the Special Issue 3D Printing for Multifunctional Applications and Sustainability)
