Search Results (663)

Search Parameters:
Keywords = primary calibration

23 pages, 3075 KiB  
Article
Building an Agent-Based Simulation Framework of Smartphone Reuse and Recycling: Integrating Privacy Concern and Behavioral Norms
by Wenbang Hou, Dingjie Peng, Jianing Chu, Yuelin Jiang, Yu Chen and Feier Chen
Sustainability 2025, 17(15), 6885; https://doi.org/10.3390/su17156885 - 29 Jul 2025
Abstract
The rapid proliferation of electronic waste, driven by the short lifecycle of smartphones and planned obsolescence strategies, presents escalating global environmental challenges. To address these issues from a systems perspective, this study develops an agent-based modeling (ABM) framework that simulates consumer decisions and stakeholder interactions within the smartphone reuse and recycling ecosystem. The model incorporates key behavioral drivers—privacy concerns, moral norms, and financial incentives—to examine how social and economic factors shape consumer behavior. Four primary agent types—consumers, manufacturers, recyclers, and second-hand retailers—are modeled to capture complex feedback and market dynamics. Calibrated using empirical data from Jiangsu Province, China, the simulation reveals a dominant consumer tendency to store obsolete smartphones rather than engage in reuse or formal recycling. However, the introduction of government subsidies significantly shifts behavior, doubling participation in second-hand markets and markedly improving recycling rates. These results highlight the value of integrating behavioral insights into environmental modeling to inform circular economy strategies. By offering a flexible and behaviorally grounded simulation tool, this study supports the design of more effective policies for promoting responsible smartphone disposal and lifecycle extension. Full article
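The consumer decision layer of such a framework can be sketched in a few lines of Python. Everything below (agent count, utility weights, the subsidy term) is an invented toy, not the authors' calibrated model; it only illustrates how a resale subsidy can shift agents away from storage:

```python
import random

def simulate(n_consumers=1000, subsidy=0.0, seed=42):
    """Toy consumer-decision loop: each agent weighs privacy concern,
    moral norms, and resale payout. All weights are illustrative."""
    rng = random.Random(seed)
    counts = {"store": 0, "resell": 0, "recycle": 0}
    for _ in range(n_consumers):
        privacy = rng.random()            # 0 = unconcerned, 1 = highly concerned
        norm = rng.random()               # strength of pro-recycling moral norm
        payout = rng.random() + subsidy   # resale payout incl. any subsidy
        utilities = {
            "store": 0.5 + 0.8 * privacy,     # data-leak fear favors hoarding
            "resell": payout,
            "recycle": 0.3 + 0.6 * norm,
        }
        counts[max(utilities, key=utilities.get)] += 1
    return counts

baseline = simulate(subsidy=0.0)
subsidized = simulate(subsidy=0.5)   # same seed, so a paired comparison
```

With the same seed, the only change between runs is the resale payout, so the shift toward second-hand participation under a subsidy is directly visible in the counts.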

16 pages, 2370 KiB  
Article
SemABC: Semantic-Guided Adaptive Bias Calibration for Generative Zero-Shot Point Cloud Segmentation
by Yuyun Wei and Meng Qi
Appl. Sci. 2025, 15(15), 8359; https://doi.org/10.3390/app15158359 - 27 Jul 2025
Abstract
Due to the limited quantity and high cost of high-quality three-dimensional annotations, generalized zero-shot point cloud segmentation aims to transfer knowledge from seen classes to unseen ones by leveraging semantic correlations. Existing generative point cloud semantic segmentation approaches rely on generators trained on seen classes to synthesize visual features for unseen classes, helping the segmentation model generalize, but this often leads to a bias toward seen classes. To address this issue, we propose a semantic-guided adaptive bias calibration approach with a dual-branch network architecture. This network consists of a novel visual–semantic fusion branch alongside the primary segmentation branch to suppress the bias toward seen classes. Specifically, the visual–semantic branch exploits the visual–semantic relevance of the synthetic features of unseen classes to provide auxiliary predictions. Furthermore, we introduce an adaptive bias calibration module that dynamically integrates the predictions from both the main and auxiliary branches to achieve unbiased segmentation results. Extensive experiments conducted on standard benchmarks demonstrate that our approach significantly outperforms state-of-the-art methods on both seen and unseen classes, thereby validating the effectiveness of our approach. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Industrial Engineering)
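The bias-calibration idea, suppress seen-class logits and fuse with a semantic auxiliary branch, can be illustrated with a toy sketch; the penalty `gamma` and fusion `weight` are hypothetical fixed knobs, whereas the paper's module integrates the branches adaptively:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def calibrate(main_logits, aux_logits, seen, gamma=1.0, weight=0.5):
    """Penalize seen-class logits in the main branch, then blend with the
    auxiliary (semantic) branch. gamma/weight are illustrative constants."""
    debiased = [l - gamma if i in seen else l for i, l in enumerate(main_logits)]
    p_main = softmax(debiased)
    p_aux = softmax(aux_logits)
    return [(1 - weight) * pm + weight * pa for pm, pa in zip(p_main, p_aux)]
```

The fused output is still a probability distribution, and a seen class's probability drops relative to the uncalibrated main branch whenever the auxiliary branch does not also favor it.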

13 pages, 1718 KiB  
Article
Accurate Dual-Channel Broadband RF Attenuation Measurement System with High Attenuation Capability Using an Optical Fiber Assembly for Optimal Channel Isolation
by Anton Widarta
Electronics 2025, 14(15), 2963; https://doi.org/10.3390/electronics14152963 - 24 Jul 2025
Abstract
In this study, an accurate attenuation measurement system with high attenuation capability (≥100 dB) is presented, covering a broad radio frequency range from 1 GHz to 25 GHz. The system employs a dual-channel intermediate frequency (IF) substitution method, utilizing a programmable inductive voltage divider (IVD) that provides precise voltage ratios at a 1 kHz operating IF, serving as the primary attenuation standard. To ensure optimal inter-channel isolation, essential for accurate high-attenuation measurements, an optical fiber assembly, consisting of a laser diode, a wideband external electro-optic modulator, and a photodetector, is integrated between the channels. A comprehensive performance evaluation is presented, with particular emphasis on the programmable IVD calibration technique, which achieves an accuracy better than 0.001 dB across all attenuation levels, and on the role of the optical fiber assembly in enhancing isolation, demonstrating levels exceeding 120 dB across the entire frequency range. The system demonstrates measurement capabilities with expanded uncertainties (k = 2) of 0.004 dB, 0.008 dB, and 0.010 dB at attenuation levels of 20 dB, 60 dB, and 100 dB, respectively. Full article
(This article belongs to the Special Issue RF/MM-Wave Circuits Design and Applications, 2nd Edition)
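The dB arithmetic behind such a system is compact. A minimal sketch, where the uncertainty budget passed in is hypothetical rather than the paper's:

```python
import math

def attenuation_db(v_in, v_out):
    # attenuation from a voltage ratio; in the system described, the IVD
    # supplies exact ratios at the 1 kHz intermediate frequency
    return 20.0 * math.log10(v_in / v_out)

def expanded_uncertainty(components_db, k=2.0):
    # root-sum-square of standard-uncertainty components, expanded with
    # coverage factor k (k = 2 corresponds roughly to a 95 % level)
    return k * math.sqrt(sum(u * u for u in components_db))
```

For example, a 100:1 voltage ratio is 40 dB of attenuation, and two 1 mdB and 2 mdB components combine to about 0.0045 dB at k = 2.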

31 pages, 4435 KiB  
Article
A Low-Cost IoT Sensor and Preliminary Machine-Learning Feasibility Study for Monitoring In-Cabin Air Quality: A Pilot Case from Almaty
by Nurdaulet Tasmurzayev, Bibars Amangeldy, Gaukhar Smagulova, Zhanel Baigarayeva and Aigerim Imash
Sensors 2025, 25(14), 4521; https://doi.org/10.3390/s25144521 - 21 Jul 2025
Abstract
The air quality within urban public transport is a critical determinant of passenger health. In the crowded and poorly ventilated cabins of Almaty’s metro, buses, and trolleybuses, concentrations of CO2 and PM2.5 often accumulate, elevating the risk of respiratory and cardiovascular diseases. This study investigates the air quality along three of the city’s busiest transport corridors, analyzing how the concentrations of CO2, PM2.5, and PM10, as well as the temperature and relative humidity, fluctuate with the passenger density and time of day. Continuous measurements were collected using the Tynys mobile IoT device, which was bench-calibrated against a commercial reference sensor. Several machine learning models (logistic regression, decision tree, XGBoost, and random forest) were trained on synchronized environmental and occupancy data, with the XGBoost model achieving the highest predictive accuracy at 91.25%. Our analysis confirms that passenger occupancy is the primary driver of in-cabin pollution and that these machine learning models effectively capture the nonlinear relationships among environmental variables. Since the surveyed routes serve Almaty’s most densely populated districts, improving the ventilation on these lines is of immediate importance to public health. Furthermore, the high-temporal-resolution data revealed short-term pollution spikes that correspond with peak ridership, advancing the current understanding of exposure risks in transit. These findings highlight the urgent need to combine real-time monitoring with ventilation upgrades. They also demonstrate the practical value of using low-cost IoT technologies and data-driven analytics to safeguard public health in urban mobility systems. Full article
(This article belongs to the Special Issue IoT-Based Sensing Systems for Urban Air Quality Forecasting)
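As an illustration of the supervised framing (environmental readings plus occupancy labels), here is a minimal logistic-regression stand-in on synthetic data; the study's best performer was XGBoost, and every number below is invented:

```python
import math, random

def train_logreg(xs, ys, lr=0.5, epochs=500):
    # plain SGD on the logistic loss for a one-feature classifier
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

# synthetic stand-in: normalized occupancy -> "high CO2" flag
rng = random.Random(0)
xs = [rng.random() for _ in range(200)]
ys = [1 if x > 0.5 else 0 for x in xs]
w, b = train_logreg(xs, ys)
preds = [1 if w * x + b > 0.0 else 0 for x in xs]
acc = sum(p == y for p, y in zip(preds, ys)) / len(xs)
```

The point is only the pipeline shape (features in, occupancy-driven label out, accuracy out); tree ensembles such as XGBoost capture the nonlinear relationships the abstract mentions.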

34 pages, 3579 KiB  
Review
A Comprehensive Review of Mathematical Error Characterization and Mitigation Strategies in Terrestrial Laser Scanning
by Mansoor Sabzali and Lloyd Pilgrim
Remote Sens. 2025, 17(14), 2528; https://doi.org/10.3390/rs17142528 - 20 Jul 2025
Abstract
In recent years, there has been an increasing transition from 1D point-based to 3D point-cloud-based data acquisition for monitoring applications and deformation analysis tasks. Previously, many studies relied on point-to-point measurements using total stations to assess structural deformation. However, the introduction of terrestrial laser scanning (TLS) has commenced a new era in data capture with a high level of efficiency and flexibility for data collection and post processing. Thus, a robust understanding of both data acquisition and processing techniques is required to guarantee high-quality deliverables to geometrically separate the measurement uncertainty and movements. TLS is highly demanding in capturing detailed 3D point coordinates of a scene within either short- or long-range scanning. Although various studies have examined scanner misalignments under controlled conditions within the short range of observation (scanner calibration), there remains a knowledge gap in understanding and characterizing errors related to long-range scanning (scanning calibration). Furthermore, limited information on manufacturer-oriented calibration tests highlights the motivation for designing a user-oriented calibration test. This research focused on investigating four primary sources of error in the generic error model of TLS. These were categorized into four geometries: instrumental imperfections related to the scanner itself, atmospheric effects that impact the laser beam, scanning geometry concerning the setup and varying incidence angles during scanning, and object and surface characteristics affecting the overall data accuracy. This study presents previous findings of TLS calibration relevant to the four error sources and mitigation strategies and identified current challenges that can be implemented as potential research directions. Full article

31 pages, 1161 KiB  
Article
In Pursuit of Samuelson for Commodity Futures: How to Parameterize and Calibrate the Term Structure of Volatilities
by Roza Galeeva
Commodities 2025, 4(3), 13; https://doi.org/10.3390/commodities4030013 - 18 Jul 2025
Abstract
The phenomenon of rising forward price volatility, both historical and implied, as maturity approaches is referred to as the Samuelson effect or maturity effect. Disregarding this effect leads to significant mispricing of early-exercise options, extendible options, or other path-dependent options. The primary objective of the research is to identify a practical way to incorporate the Samuelson effect into the evaluation of commodity derivatives. We choose to model the instantaneous variance employing the exponential decay parameterizations of the Samuelson effect. We develop efficient calibration techniques utilizing historical futures data and conduct an analysis of statistical errors to provide a benchmark for model performance. The study employs 15 years of data for WTI, Brent, and NG, producing excellent results, with the fitting error consistently inside the statistical error, except for the 2020 crisis period. We assess the stability of the fitted parameters via cross-validation techniques and examine the model’s out-of-sample efficacy. The approach is generalized to encompass seasonal commodities, such as natural gas and electricity. We illustrate the application of the calibrated model of instantaneous variance for the evaluation of commodity derivatives, including swaptions, as well as in the evaluation of power purchase agreements (PPAs). We demonstrate a compelling application of the Samuelson effect to a widely utilized auto-callable equity derivative known as the snowball. Full article
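A common way to write the exponential-decay parameterization of the Samuelson effect (symbols and the quadrature below are conventional choices, not necessarily the paper's exact form):

```python
import math

def inst_vol(tau, sigma_inf, sigma_0, beta):
    # sigma(tau) = sigma_inf + (sigma_0 - sigma_inf) * exp(-beta * tau):
    # volatility climbs toward sigma_0 as time-to-maturity tau shrinks
    return sigma_inf + (sigma_0 - sigma_inf) * math.exp(-beta * tau)

def avg_variance(T, sigma_inf, sigma_0, beta, n=10_000):
    # average instantaneous variance over [0, T] by midpoint quadrature;
    # this is what a terminal-variance (option) quote reflects
    dt = T / n
    return sum(inst_vol((i + 0.5) * dt, sigma_inf, sigma_0, beta) ** 2
               for i in range(n)) * dt / T
```

Calibration then amounts to fitting `sigma_inf`, `sigma_0`, and `beta` to historical futures volatilities by maturity; seasonal commodities add a periodic term on top of this decay.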

36 pages, 2877 KiB  
Article
Dual-Oriented Targeted Nanostructured SERS Label-Free Immunosensor for Detection, Quantification, and Analysis of Breast Cancer Biomarker Concentrations in Blood Serum
by Mohammad E. Khosroshahi, Christine Gaoiran, Vithurshan Umashanker, Hayagreev Veeru and Pranav Panday
Biosensors 2025, 15(7), 447; https://doi.org/10.3390/bios15070447 - 11 Jul 2025
Abstract
In clinical applications of surface-enhanced Raman spectroscopy (SERS) immunosensors, accurately determining analyte biomarker concentrations is essential. This study presents a non-invasive approach for quantifying various breast cancer biomarkers—including human epidermal growth factor receptor II (HER-II) (2+, 3+ (I), 3+ (II), 3+ (III), and positive IV) and CA 15-3—using a directional, plasmonically active, label-free SERS sensor. Each stage of sensor functionalization, conjugation, and biomarker interaction was verified by UV–Vis spectroscopy. Atomic force microscopy (AFM) characterized the morphology of gold nanourchin (GNU)-immobilized printed circuit board (PCB) substrates. An enhancement factor of ≈ 0.5 × 10⁵ was achieved using Rhodamine 6G as the probe molecule. Calibration curves were initially established using standard HER-II solutions at concentrations ranging from 1 to 100 ng/mL and CA 15-3 at concentrations from 10 to 100 U/mL. The SERS signal intensities in the 620–720 nm region were plotted against concentration, yielding linear sensitivity with R² values of 0.942 and 0.800 for HER-II and CA 15-3, respectively. The same procedure was applied to breast cancer serum (BCS) samples, allowing unknown biomarker concentrations to be determined based on the corresponding calibration curves. SERS data were processed using the filtfilt filter from scipy.signal for smoothing and then baseline-corrected with the Improved Asymmetric Least Squares (IASLS) algorithm from the pybaselines.Whittaker library. Principal Component Analysis (PCA) effectively distinguished the sample groups and revealed spectral differences before and after biomarker interactions. Key Raman peaks were attributed to functional groups including N–H (primary and secondary amines), C–H antisymmetric stretching, C–N (amines), C=O antisymmetric stretching, NH3+ (amines), carbohydrates, glycine, alanine, amide III, C=N stretches, and NH2 in primary amides. Full article
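The preprocessing chain named in the abstract can be sketched as follows; the paper uses the IASLS routine from pybaselines for the baseline step, for which a simple polynomial fit stands in here so the example needs only NumPy and SciPy, and the filter order and cutoff are illustrative:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(spectrum, order=3, cutoff=0.1, baseline_deg=3):
    b, a = butter(order, cutoff)          # low-pass Butterworth design
    smoothed = filtfilt(b, a, spectrum)   # forward-backward: zero phase shift
    x = np.arange(len(spectrum))
    # stand-in baseline: least-squares polynomial (the paper uses IASLS
    # from pybaselines.whittaker instead)
    baseline = np.polyval(np.polyfit(x, smoothed, baseline_deg), x)
    return smoothed - baseline
```

`filtfilt` is chosen over a one-pass filter precisely because the zero phase shift keeps peak positions intact, which matters when peak locations are assigned to functional groups.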

19 pages, 5180 KiB  
Article
In-Flight Calibration of Geostationary Meteorological Imagers Using Alternative Methods: MTG-I1 FCI Case Study
by Ali Mousivand, Christoph Straif, Alessandro Burini, Mounir Lekouara, Vincent Debaecker, Tim Hewison, Stephan Stock and Bojan Bojkov
Remote Sens. 2025, 17(14), 2369; https://doi.org/10.3390/rs17142369 - 10 Jul 2025
Abstract
The Flexible Combined Imager (FCI), developed as the next-generation imager for the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) Meteosat Third Generation (MTG) satellite series, represents a significant advancement over its predecessor, SEVIRI, on the Meteosat Second Generation (MSG) satellites. FCI offers more spectral bands, higher spatial resolution, and faster imaging capabilities, supporting a wide range of applications in weather forecasting, climate monitoring, and environmental analysis. On 13 January 2024, the FCI onboard MTG-I1 (renamed Meteosat-12 in December 2024) experienced a critical anomaly involving the failure of its onboard Calibration and Obturation Mechanism (COM). As a result, the use of the COM was discontinued to preserve operational safety, leaving the instrument dependent on alternative calibration methods. This loss of onboard calibration presents immediate challenges, particularly for the infrared channels, including image artifacts (e.g., striping), reduced radiometric accuracy, and diminished stability. To address these issues, EUMETSAT implemented an external calibration approach leveraging algorithms from the Global Space-based Inter-Calibration System (GSICS). The inter-calibration algorithm transfers stable and accurate calibration from the Infrared Atmospheric Sounding Interferometer (IASI) hyperspectral instrument aboard the Metop-B and Metop-C satellites to FCI’s infrared channels daily, ensuring continued data quality. Comparisons with Cross-track Infrared Sounder (CrIS) data from the NOAA-20 and NOAA-21 satellites, made with a similar algorithm, are then used to validate the radiometric performance of the calibration. This confirms that the external calibration method effectively compensates for the absence of onboard blackbody calibration for the infrared channels. For the visible and near-infrared channels, slower degradation rates and pre-anomaly calibration ensure continued accuracy, with vicarious calibration expected to become the primary source. This adaptive calibration strategy introduces a novel paradigm for the in-flight calibration of geostationary instruments and offers valuable insights for satellite missions lacking onboard calibration devices. This paper details the COM anomaly, the external calibration process, and the broader implications for future geostationary satellite missions. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
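At its core, a GSICS-style inter-calibration transfer is a regression of collocated radiances against the hyperspectral reference. A bare-bones sketch (real GSICS corrections add spectral convolution and quality screening omitted here):

```python
import numpy as np

def intercalibrate(monitored, reference):
    # least-squares gain/offset mapping radiances from the monitored imager
    # onto the reference (IASI-like) radiances at collocated scenes
    A = np.vstack([monitored, np.ones_like(monitored)]).T
    gain, offset = np.linalg.lstsq(A, reference, rcond=None)[0]
    return gain, offset
```

A corrected radiance is then `gain * raw + offset`, recomputed daily as the abstract describes.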

20 pages, 4752 KiB  
Article
Designing an AI-Supported Framework for Literary Text Adaptation in Primary Classrooms
by Savvas A. Chatzichristofis, Alexandros Tsopozidis, Avgousta Kyriakidou-Zacharoudiou, Salomi Evripidou and Angelos Amanatiadis
AI 2025, 6(7), 150; https://doi.org/10.3390/ai6070150 - 8 Jul 2025
Abstract
Background/Objectives: This paper introduces a pedagogically grounded framework for transforming canonical literary texts in primary education through generative AI. Guided by multiliteracies theory, Vygotskian pedagogy, and epistemic justice, the system aims to enhance interpretive literacy, developmental alignment, and cultural responsiveness among learners aged 7–12. Methods: The proposed system enables educators to perform age-specific text simplification, visual re-narration, lexical reinvention, and multilingual augmentation through a suite of modular tools. Central to the design is the Ethical–Pedagogical Validation Layer (EPVL), a GPT-powered auditing module that evaluates AI-generated content across four normative dimensions: developmental appropriateness, cultural sensitivity, semantic fidelity, and ethical transparency. Results: The framework was fully implemented and piloted with primary educators (N = 8). The pilot demonstrated high usability, curricular alignment, and perceived value for classroom application. Unlike commercial Large Language Models (LLMs), the system requires no prompt engineering and supports editable, policy-aligned controls for normative localization. Conclusions: By embedding ethical evaluation within the generative loop, the framework fosters calibrated trust in human–AI collaboration and mitigates cultural stereotyping and ideological distortion. It advances a scalable, inclusive model for educator-centered AI integration, offering a new pathway for explainable and developmentally appropriate AI use in literary education. Full article
(This article belongs to the Special Issue AI Bias in the Media and Beyond)

23 pages, 6745 KiB  
Article
Crushing Modeling and Crushing Characterization of Silage Caragana korshinskii Kom.
by Wenhang Liu, Zhihong Yu, Aorigele, Qiang Su, Xuejie Ma and Zhixing Liu
Agriculture 2025, 15(13), 1449; https://doi.org/10.3390/agriculture15131449 - 5 Jul 2025
Abstract
Caragana korshinskii Kom. (CKB), widely cultivated in Inner Mongolia, China, has potential for silage feed development due to its favorable nutritional characteristics, including a crude protein content of 14.2% and a neutral detergent fiber content below 55%. However, its vascular bundle fiber structure limits the efficiency of lactic acid conversion and negatively impacts silage quality, which can be improved through mechanical crushing. Currently, conventional crushing equipment generally suffers from uneven particle size distribution, high energy consumption, and low processing efficiency. In this study, a layered aggregate model was constructed using the discrete element method (DEM), and the Hertz–Mindlin with Bonding contact model was employed to characterize the heterogeneous mechanical properties between the epidermis and the core. Model accuracy was enhanced through reverse engineering and a multi-particle-size filling strategy. Key parameters were optimized via a Box–Behnken experimental design, with a core normal stiffness of 7.37 × 10¹¹ N·m⁻¹, a core shear stiffness of 9.46 × 10¹⁰ N·m⁻¹, a core shear stress of 2.52 × 10⁸ Pa, and a skin normal stiffness of 4.01 × 10⁹ N·m⁻¹. The simulated values for bending, tensile, and compressive failure forces had relative errors of less than 10% compared to experimental results. The results showed that rectangular hammers, due to their larger contact area and more uniform stress distribution, reduced the number of residual bonded contacts by 28.9% and 26.5% compared to stepped and blade-type hammers, respectively. Optimized rotational speed improved dynamic crushing efficiency by 41.3%. The material exhibited spatial heterogeneity, with the mass proportion in the tooth plate impact area reaching 43.91%, which was 23.01% higher than that in the primary hammer crushing area. The relative error between the simulation and bench test results for the crushing rate was 6.18%, and the spatial distribution consistency reached 93.6%, verifying the reliability of the DEM parameter calibration method. This study provides a theoretical basis for the structural optimization of crushing equipment, suppression of circulation layer effects, and the realization of low-energy, high-efficiency processing. Full article
(This article belongs to the Section Agricultural Technology)

15 pages, 1518 KiB  
Article
Simulation of Plasma Level Changes in Cerivastatin and Its Metabolites, Particularly Cerivastatin Lactone, Induced by Coadministration with CYP2C8 Inhibitor Gemfibrozil, CYP3A4 Inhibitor Itraconazole, or Both, Using the Metabolite-Linked Model
by Katsumi Iga
Drugs Drug Candidates 2025, 4(3), 34; https://doi.org/10.3390/ddc4030034 - 4 Jul 2025
Abstract
Background/Objective: Cerivastatin (Cer), a cholesterol-lowering statin, was withdrawn from the market due to fatal cases of rhabdomyolysis, particularly when co-administered with gemfibrozil (Gem), a strong CYP2C8 inhibitor. However, the pharmacokinetic (PK) mechanisms underlying these adverse events remain unclear. This study investigates the impact of drug–drug interactions (DDIs) involving Gem and itraconazole (Itr), a potent CYP3A4 inhibitor, on plasma concentrations of Cer and its major metabolites—M23, M1, and cerivastatin lactone (Cer-L)—with a focus on the risk of excessive Cer-L accumulation. Methods: We applied a newly developed Metabolite-Linked Model that simultaneously characterizes parent drug and metabolite kinetics by estimating metabolite formation fractions (fM) and elimination rate constants (KeM). The model was calibrated using observed DDI data from Cer + Gem and Cer + Itr scenarios and then used to predict outcomes in an untested Cer + Gem + Itr combination. Results: The model accurately reproduced observed metabolite profiles in single-inhibitor DDIs. Predicted AUCR values for Cer-L were 4.2 (Cer + Gem) and 2.1 (Cer + Itr), with reduced KeM indicating CYP2C8 and CYP3A4 as primary elimination pathways. In the dual-inhibitor scenario, Cer-L AUCR reached ~70—far exceeding that of the parent drug—suggesting severe clearance impairment and toxic accumulation. Conclusions: Dual inhibition of CYP2C8 and CYP3A4 may cause dangerously elevated Cer-L levels, contributing to Cer-associated rhabdomyolysis. This modeling approach offers a powerful framework for evaluating DDI risks involving active or toxic metabolites, supporting safer drug development and regulatory assessment. Full article
(This article belongs to the Section Marketed Drugs)
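The qualitative mechanism, metabolite AUC scaling inversely with its elimination rate, follows from the textbook one-compartment (Bateman) form; the sketch below is not the paper's Metabolite-Linked Model, but it shows why halving KeM doubles metabolite exposure:

```python
import math

def metabolite_conc(t, dose_fm, ka, ke_m):
    # Bateman form: metabolite formed at the parent's elimination rate ka,
    # cleared at ke_m; dose_fm lumps dose, formation fraction fM, and volume
    return dose_fm * ka / (ka - ke_m) * (math.exp(-ke_m * t) - math.exp(-ka * t))

def auc(f, t_end=200.0, n=20_000):
    # midpoint-rule area under the curve out to t_end
    dt = t_end / n
    return sum(f((i + 0.5) * dt) for i in range(n)) * dt

auc_base = auc(lambda t: metabolite_conc(t, 1.0, 1.0, 0.10))
auc_inhibited = auc(lambda t: metabolite_conc(t, 1.0, 1.0, 0.05))  # ke_m halved
aucr = auc_inhibited / auc_base  # analytically, AUC = dose_fm / ke_m
```

Because the analytic AUC is `dose_fm / ke_m`, strong dual inhibition that collapses KeM can inflate metabolite exposure far beyond the parent's AUCR, consistent with the ~70-fold Cer-L prediction.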

24 pages, 2389 KiB  
Article
A Multi-Objective Optimization Framework for Robust and Accurate Photovoltaic Model Parameter Identification Using a Novel Parameterless Algorithm
by Mohammed Alruwaili
Processes 2025, 13(7), 2111; https://doi.org/10.3390/pr13072111 - 3 Jul 2025
Abstract
Photovoltaic (PV) models are hard to optimize due to their intrinsic complexity and changing operating conditions. Root mean square error (RMSE) is often given precedence in classic single-objective optimization methods, limiting their ability to address the intricate nature of PV model calibration. To bypass these limitations, this research proposes a novel multi-objective optimization framework that balances accuracy and robustness by considering both the maximum error and the L2 norm as significant objective functions. In addition, we introduce the Random Search Around Bests (RSAB) algorithm, a parameterless metaheuristic designed to explore the solution space effectively. The primary contributions of this work are as follows: (1) an extensive performance evaluation of the proposed framework; (2) an adaptable function to dynamically adjust the trade-off between robustness and error minimization; and (3) the elimination of manual tuning of the RSAB parameters. Rigorous testing across three PV models demonstrates RSAB’s superiority over 17 state-of-the-art algorithms. By overcoming significant issues such as premature convergence and local minima entrapment, the proposed procedure provides practitioners with a reliable tool to optimize PV systems. Hence, this research supports the overarching goals of sustainable energy technology advancement by offering an organized and flexible solution that enhances the accuracy and efficiency of PV modeling, furthering research in renewable energy. Full article
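The "random search around bests" idea can be sketched as follows; the published RSAB is parameterless, so the pool size and linear shrink schedule here are illustrative choices, not the authors' algorithm:

```python
import random

def rsab_minimize(f, bounds, iters=3000, pool=5, seed=1):
    """Keep a small pool of best-so-far points and sample new candidates
    in shrinking neighborhoods around them (a generic sketch of the
    'search around bests' pattern)."""
    rng = random.Random(seed)
    pts = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pool)]
    best = sorted(((f(p), p) for p in pts), key=lambda t: t[0])
    for k in range(iters):
        scale = 1.0 - k / iters          # neighborhood shrinks over time
        base = best[rng.randrange(len(best))][1]
        cand = [min(hi, max(lo, x + scale * (hi - lo) * rng.uniform(-0.5, 0.5)))
                for x, (lo, hi) in zip(base, bounds)]
        best.append((f(cand), cand))
        best = sorted(best, key=lambda t: t[0])[:pool]
    return best[0]
```

Early iterations explore widely (helping against premature convergence); late iterations refine locally around the surviving pool.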

26 pages, 2124 KiB  
Article
Integrating Boruta, LASSO, and SHAP for Clinically Interpretable Glioma Classification Using Machine Learning
by Mohammad Najeh Samara and Kimberly D. Harry
BioMedInformatics 2025, 5(3), 34; https://doi.org/10.3390/biomedinformatics5030034 - 30 Jun 2025
Abstract
Background: Gliomas represent the most prevalent and aggressive primary brain tumors, requiring precise classification to guide treatment strategies and improve patient outcomes. Purpose: This study aimed to develop and evaluate a machine learning-driven approach for glioma classification by identifying the most relevant genetic and clinical biomarkers while demonstrating clinical utility. Methods: A dataset from The Cancer Genome Atlas (TCGA) containing 23 features was analyzed using an integrative approach combining Boruta, Least Absolute Shrinkage and Selection Operator (LASSO), and SHapley Additive exPlanations (SHAP) for feature selection. The refined feature set was used to train four machine learning models: Random Forest, Support Vector Machine, XGBoost, and Logistic Regression. Comprehensive evaluation included class distribution analysis, calibration assessment, and decision curve analysis. Results: The feature selection approach identified 13 key predictors, including IDH1, TP53, ATRX, PTEN, NF1, EGFR, NOTCH1, PIK3R1, MUC16, CIC mutations, along with Age at Diagnosis and race. XGBoost achieved the highest AUC (0.93), while Logistic Regression recorded the highest testing accuracy (88.09%). Class distribution analysis revealed excellent GBM detection (Average Precision 0.840–0.880) with minimal false negatives (5–7 cases). Calibration analysis demonstrated reliable probability estimates (Brier scores 0.103–0.124), and decision curve analysis confirmed substantial clinical utility with net benefit values of 0.36–0.39 across clinically relevant thresholds. Conclusions: The integration of feature selection techniques with machine learning models enhances diagnostic precision, interpretability, and clinical utility in glioma classification, providing a clinically ready framework that bridges computational predictions with evidence-based medical decision-making. Full article
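Two of the evaluation quantities reported, the Brier score and decision-curve net benefit, are simple to compute; the sketch below uses made-up values, not the study's data:

```python
def brier_score(probs, labels):
    # mean squared gap between forecast probability and the 0/1 outcome;
    # lower is better (the paper reports 0.103-0.124)
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(probs)

def net_benefit(probs, labels, threshold):
    # decision-curve net benefit: true-positive rate credit minus
    # false-positive rate weighted by the odds at the chosen threshold
    n = len(probs)
    tp = sum(1 for p, y in zip(probs, labels) if p >= threshold and y == 1)
    fp = sum(1 for p, y in zip(probs, labels) if p >= threshold and y == 0)
    return tp / n - fp / n * threshold / (1 - threshold)
```

A perfectly calibrated, perfectly discriminating model scores a Brier of 0; an uninformative coin-flip forecast of 0.5 scores 0.25.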

14 pages, 1520 KiB  
Article
Thermomechanical Parameters Modelling of Spring Force Elements Made of Shape Memory Alloys
by Olga Łastowska, Vitaliy Polishchuk and Andrii Poznanskyi
Materials 2025, 18(13), 3055; https://doi.org/10.3390/ma18133055 - 27 Jun 2025
Viewed by 344
Abstract
This study presents a phenomenological model for predicting the thermomechanical behaviour of spring-type actuators made of shape memory alloys (SMAs). The model incorporates the kinetics of martensite–austenite phase transitions as a function of temperature and applied stress. The primary innovation is the inclusion of a scalar internal variable that represents the evolution of the phase transformation within a phenomenological macroscopic model. This approach enables the deformation–force–temperature behaviour of SMA-based spring elements under cyclic loading to be accurately described. A set of constitutive equations was derived to describe reversible and residual strains, along with transformation start and finish conditions. Model parameters were calibrated using experimental data from VSP-1 and TN-1K SMA springs that were subjected to thermal cycling. The validation results show a high correlation between the theoretical predictions and the experimental data, with deviation margins of less than 6.5%. The model was then applied to designing and analysing thermosensitive actuator mechanisms for temperature control systems. This yielded accurate deformation–force characteristics, demonstrating low inertia and high repeatability. This approach enables the efficient prediction and improvement of the performance of SMA-based spring elements in actuators, making it relevant for adaptive systems in marine and aerospace applications. Full article
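A common phenomenological form for the transformation kinetics described above is the cosine law of Liang and Rogers, in which a scalar internal variable (the martensite fraction ξ) evolves between the austenite start and finish temperatures. The sketch below is a generic illustration of that approach for the reverse transformation at zero stress, not the constitutive set calibrated on the VSP-1 and TN-1K springs; the transition temperatures and maximum transformation strain are made-up values.

```python
import math

def martensite_fraction(T, A_s, A_f):
    """Cosine-type kinetics (Liang-Rogers form) for the reverse
    (martensite -> austenite) transformation on heating at zero stress.
    Returns xi in [0, 1]: 1 below A_s, 0 above A_f."""
    if T <= A_s:
        return 1.0
    if T >= A_f:
        return 0.0
    a_A = math.pi / (A_f - A_s)
    return 0.5 * (math.cos(a_A * (T - A_s)) + 1.0)

def recovery_strain(T, A_s, A_f, eps_L=0.04):
    # Recoverable strain scales with the remaining martensite fraction;
    # eps_L is an illustrative maximum transformation strain.
    return eps_L * martensite_fraction(T, A_s, A_f)
```

With illustrative temperatures A_s = 40 °C and A_f = 80 °C, the fraction falls smoothly from 1 to 0 over the heating interval, passing through 0.5 at the midpoint, which is the qualitative deformation–temperature behaviour a spring actuator exploits.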

12 pages, 510 KiB  
Article
Development and Validation of a Score-Based Model for Estimating Esophageal Squamous Cell Carcinoma and Precancerous Lesions Risk in an Opportunistic Screening Population
by Yan Bian, Ye Gao, Huishan Jiang, Qiuxin Li, Yuling Wang, Yanrong Zhang, Zhaoshen Li, Jinfang Xu and Luowei Wang
Cancers 2025, 17(13), 2138; https://doi.org/10.3390/cancers17132138 - 25 Jun 2025
Viewed by 366
Abstract
Background: Opportunistic screening is a major screening approach for esophageal squamous cell carcinoma (ESCC). We aimed to develop a score-based risk stratification model to assess the risk of ESCC and precancerous lesions in opportunistic screening and to validate it in an external population. Methods: The study was a secondary analysis of a published esophageal cancer screening trial. The trial was conducted in 39 secondary or tertiary hospitals in China, with 14,597 individuals, including 71 with high-grade intraepithelial neoplasia (HGIN) and 182 with ESCC, enrolled for opportunistic screening. Additionally, questionnaires and endoscopy were performed. The primary outcome was histology-confirmed high-grade esophageal lesions, including HGIN and ESCC. The predictors were selected using univariable and multivariable logistic regression. Model performance was primarily measured with the area under the receiver operating characteristic curve (AUROC). Results: The score-based prediction model contained 8 variables on a 21-point scale. The model demonstrated an AUROC of 0.833 (95% CI, 0.803–0.862) and 0.828 (95% CI, 0.793–0.864) for detecting high-grade lesions in the training and validation cohorts, respectively. Using the cut-off score determined in the training cohort (≥9), the sensitivity reached 70.0% (95% CI, 50.6–85.3%), 81.3% (95% CI, 63.6–92.8%), and 81.1% (95% CI, 64.9–92.0%) in the validation cohort for detecting HGIN, early ESCC, and advanced ESCC, respectively, at a specificity of 76.4% (95% CI, 75.4–77.4%). The score-based model exhibited satisfactory calibration in the calibration plots. The model could reduce the number of individuals subjected to endoscopy by 75.6%. Conclusions: This score-based model demonstrated superior discrimination for esophageal high-grade lesions. It has the potential to inform referral decisions in an opportunistic screening setting. Full article
(This article belongs to the Section Cancer Causes, Screening and Diagnosis)
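A score-based model of this kind reduces, at the point of care, to summing integer points for each risk factor and referring patients at or above the cut-off. The sketch below mirrors that mechanics with a 21-point scale and the ≥9 referral threshold from the abstract; the variable names and point weights are hypothetical, since the trial's actual eight predictors and their weights are not listed here.

```python
# Hypothetical point weights on a 21-point scale; the trial's real
# eight predictors and weights are not reproduced in the abstract.
POINTS = {
    "age_over_55": 4,
    "male_sex": 2,
    "smoking": 3,
    "alcohol": 3,
    "hot_food_intake": 2,
    "family_history": 3,
    "low_bmi": 2,
    "dysphagia": 2,
}

def risk_score(features):
    # Sum the points for every risk factor that is present.
    return sum(POINTS[name] for name, present in features.items() if present)

def refer_for_endoscopy(features, cutoff=9):
    # Referral decision at the abstract's cut-off score of >= 9.
    return risk_score(features) >= cutoff
```

A patient positive for every factor scores the full 21 points and is referred, while one with only two low-weight factors falls below the cut-off and is spared endoscopy, which is how the model achieves the reported reduction in procedures.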
