Search Results (2,744)

Search Parameters:
Keywords = measurement error model approach

29 pages, 9069 KiB  
Article
Prediction of Temperature Distribution with Deep Learning Approaches for SM1 Flame Configuration
by Gökhan Deveci, Özgün Yücel and Ali Bahadır Olcay
Energies 2025, 18(14), 3783; https://doi.org/10.3390/en18143783 (registering DOI) - 17 Jul 2025
Abstract
This study investigates the application of deep learning (DL) techniques for predicting temperature fields in the SM1 swirl-stabilized turbulent non-premixed flame. Two distinct DL approaches were developed using a comprehensive CFD database generated via the steady laminar flamelet model coupled with the SST k-ω turbulence model. The first approach employs a fully connected dense neural network to directly map scalar input parameters—fuel velocity, swirl ratio, and equivalence ratio—to high-resolution temperature contour images. This approach was also benchmarked against established deep learning architectures (ResNet, EfficientNetB0, and Inception V3) to better characterize model performance; the Inception V3 model and the developed dense model outperformed ResNet and EfficientNetB0, and file sizes and usability were examined as well. The second framework employs a U-Net-based convolutional neural network enhanced by an RGB Fusion preprocessing technique, which integrates multiple scalar fields from non-reacting (cold flow) conditions into composite images, significantly improving spatial feature extraction. Both models were trained on 80% of the CFD data and tested on the remaining 20%, which helped assess their ability to generalize to new input conditions. The second approach was likewise compared against ResNet, EfficientNetB0, and Inception V3 to evaluate model performance; the U-Net model stands out for its low error and small file size. The dense network is appropriate for direct parametric analyses, while the image-based U-Net model provides a rapid and scalable way to utilize the cold flow CFD images. This framework can be refined in future research to predict additional flow quantities and tested against experimental measurements for enhanced applicability. Full article
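The first approach described above maps three scalar operating parameters to a full temperature contour. As a minimal sketch of that mapping (not the authors' trained network), the forward pass below uses an untrained fully connected model with assumed layer widths and an assumed 64 x 64 output grid:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 64, 64                  # assumed output resolution
sizes = [3, 128, 256, H * W]   # assumed layer widths

weights = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def predict_temperature(params):
    """Forward pass: three scalars -> an H x W temperature contour."""
    x = np.asarray(params, dtype=float)
    for Wm, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(x @ Wm + b, 0.0)   # ReLU hidden layers
    x = x @ weights[-1] + biases[-1]      # linear output layer
    return x.reshape(H, W)

# Example input: (fuel velocity [m/s], swirl ratio, equivalence ratio)
field = predict_temperature([32.7, 0.5, 1.0])
```

Training such a model on the 80% CFD split would fit `weights` and `biases` by regression against the simulated temperature images.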

20 pages, 4335 KiB  
Article
Multi-Scale Transient Thermo-Mechanical Coupling Analysis Method for the SiCf/SiC Composite Guide Vane
by Min Li, Xue Chen, Yu Deng, Wenjun Wang, Jian Li, Evance Obara, Zhilin Han and Chuyang Luo
Materials 2025, 18(14), 3348; https://doi.org/10.3390/ma18143348 (registering DOI) - 17 Jul 2025
Abstract
In composites, fiber–matrix thermal mismatch induces stress heterogeneity that is beyond the resolution of macroscopic approaches. The asymptotic expansion homogenization method is used to create a multi-scale thermo-mechanical coupling model that predicts the elastic modulus, thermal expansion coefficients, and thermal conductivity of ceramic matrix composites at both the macro- and micro-scales. These predictions are verified to be accurate, with a maximum relative error of 9.7% between the measured and predicted values. The multi-scale analysis method is then applied to the thermal stress analysis of the guide vane, and a macro–meso–micro multi-scale model is created. The thermal stress distribution and stress magnitudes of the guide vane under a transient high-temperature load are investigated. The results indicate that the temperature and thermal stress distributions of the guide vane under the homogenization and lamination theory models are closely comparable, and the predicted locations of the maximum thermal stress are reasonably close to one another. The homogenization model allows for the rapid and accurate prediction of the guide vane’s thermal stress distribution. Compared to the macro-scale stress values, the meso-scale stress predictions agree to within 11.7%. Micro-scale studies reveal significant stress concentrations at the fiber–matrix interface, which are essential to the macro-scale fatigue and fracture behavior of the guide vane. Full article
(This article belongs to the Section Advanced Composites)
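The article's homogenization is based on asymptotic expansion; as a far simpler illustration of predicting effective composite properties, the classical Voigt and Reuss rules of mixtures bound the effective modulus of a fiber/matrix composite. All property values below are hypothetical, not the SiCf/SiC data:

```python
def voigt_modulus(Ef, Em, vf):
    """Iso-strain (upper) bound on the effective modulus."""
    return vf * Ef + (1 - vf) * Em

def reuss_modulus(Ef, Em, vf):
    """Iso-stress (lower) bound on the effective modulus."""
    return 1.0 / (vf / Ef + (1 - vf) / Em)

def relative_error(measured, predicted):
    """Relative error of a prediction against a measurement."""
    return abs(measured - predicted) / measured

# Hypothetical fiber/matrix moduli (GPa) and fiber volume fraction
Ef, Em, vf = 400.0, 300.0, 0.4
upper = voigt_modulus(Ef, Em, vf)     # 340.0 GPa
lower = reuss_modulus(Ef, Em, vf)     # ~333.3 GPa
err = relative_error(336.0, upper)    # vs a hypothetical measured 336 GPa
```

The article's measured-vs-predicted comparison (maximum relative error 9.7%) is exactly this kind of `relative_error` check, applied to the asymptotic-expansion predictions.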

23 pages, 963 KiB  
Article
A Methodology for Turbine-Level Possible Power Prediction and Uncertainty Estimations Using Farm-Wide Autoregressive Information on High-Frequency Data
by Francisco Javier Jara Ávila, Timothy Verstraeten, Pieter Jan Daems, Ann Nowé and Jan Helsen
Energies 2025, 18(14), 3764; https://doi.org/10.3390/en18143764 - 16 Jul 2025
Abstract
Wind farm performance monitoring has traditionally relied on deterministic models, such as power curves or machine learning approaches, which often fail to account for farm-wide behavior and the uncertainty quantification necessary for the reliable detection of underperformance. To overcome these limitations, we propose a probabilistic methodology for turbine-level active power prediction and uncertainty estimation using high-frequency SCADA data and farm-wide autoregressive information. The method leverages a Stochastic Variational Gaussian Process with a Linear Model of Coregionalization, incorporating physical models like manufacturer power curves as mean functions and enabling flexible modeling of active power and its associated variance. The approach was validated on a wind farm in the Belgian North Sea comprising over 40 turbines, using only 15 days of data for training. The results demonstrate that the proposed method improves predictive accuracy over the manufacturer’s power curve, achieving an error reduction of around 1%. Improvements of around 5% were seen in dominant wind directions (200°–300°) using 2 and 3 latent GPs, with similar improvements observed on the test set. The model also successfully reconstructs wake effects, with Energy Ratio estimates closely matching SCADA-derived values, and provides meaningful uncertainty estimates and posterior turbine correlations. These results demonstrate that the methodology enables interpretable, data-efficient, and uncertainty-aware turbine-level power predictions, suitable for advanced wind farm monitoring and control applications, enabling more sensitive underperformance detection. Full article
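The article's model is a Stochastic Variational GP with a Linear Model of Coregionalization on SCADA data; as a toy single-output stand-in, the exact GP regression below shows the core output the methodology relies on, a predictive mean with an uncertainty estimate. The wind-speed/power pairs and kernel settings are invented:

```python
import numpy as np

def rbf(a, b, ls=2.0, var=1.0):
    """Squared-exponential kernel between 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

# Toy wind-speed (m/s) / power (MW) pairs, not the SCADA data
x_train = np.array([4.0, 6.0, 8.0, 10.0, 12.0])
y_train = np.array([0.1, 0.4, 1.0, 1.8, 2.4])
noise = 1e-2

K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
K_inv = np.linalg.inv(K)

def predict(x_test):
    """Predictive mean and variance at the test inputs."""
    Ks = rbf(x_test, x_train)
    mean = Ks @ K_inv @ y_train
    var = np.diag(rbf(x_test, x_test) - Ks @ K_inv @ Ks.T)
    return mean, var

mean, var = predict(np.array([7.0]))
```

The variance output is what enables the uncertainty-aware underperformance detection discussed above; the SVGP/LMC machinery extends this to many turbines jointly at SCADA scale.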

15 pages, 2473 KiB  
Article
Self-Calibrating TSEP for Junction Temperature and RUL Prediction in GaN HEMTs
by Yifan Cui, Yutian Gan, Kangyao Wen, Yang Jiang, Chunzhang Chen, Qing Wang and Hongyu Yu
Nanomaterials 2025, 15(14), 1102; https://doi.org/10.3390/nano15141102 - 16 Jul 2025
Abstract
Gallium nitride high-electron-mobility transistors (GaN HEMTs) are critical for high-power applications like AI power supplies and robotics but face reliability challenges due to increased dynamic ON-resistance (RDS_ON) from electrical and thermomechanical stresses. This paper presents a novel self-calibrating temperature-sensitive electrical parameter (TSEP) model that uses gate leakage current (IG) to estimate junction temperature with high accuracy, uniquely addressing aging effects overlooked in prior studies. By integrating IG, aging-induced degradation, and failure-in-time (FIT) models, the approach achieves a junction temperature estimation error of less than 1%. Long-term hard-switching tests confirm its effectiveness, with calibrated RDS_ON measurements enabling precise remaining useful life (RUL) predictions. This methodology significantly improves GaN HEMT reliability assessment, enhancing their performance in resilient power electronics systems. Full article
(This article belongs to the Section Nanoelectronics, Nanosensors and Devices)
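Stripped of the self-calibration and aging correction that are the article's contribution, the basic TSEP workflow is: calibrate an electrical parameter against known junction temperatures, then invert the fit to read temperature from a new measurement. The exponential gate-leakage model below is synthetic:

```python
import numpy as np

# Calibration temperatures (degC) and a synthetic, exponential-in-T gate
# leakage current (A); a real calibration would use measured IG values.
T_cal = np.array([25.0, 50.0, 75.0, 100.0, 125.0])
IG_cal = 1e-9 * np.exp(0.03 * T_cal)

# ln(IG) is linear in T for this synthetic data: fit, then invert.
slope, intercept = np.polyfit(T_cal, np.log(IG_cal), 1)

def estimate_tj(ig_reading):
    """Junction temperature (degC) from a gate leakage reading (A)."""
    return (np.log(ig_reading) - intercept) / slope

# A reading taken at a true junction temperature of 90 degC
tj = estimate_tj(1e-9 * np.exp(0.03 * 90.0))
err_pct = abs(tj - 90.0) / 90.0 * 100.0
```

The article's sub-1% estimation error refers to its full model, which additionally compensates for aging-induced drift in IG that this static calibration would misread as a temperature change.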

18 pages, 1438 KiB  
Article
Maximum Entropy Estimates of Hubble Constant from Planck Measurements
by David P. Knobles and Mark F. Westling
Entropy 2025, 27(7), 760; https://doi.org/10.3390/e27070760 - 16 Jul 2025
Abstract
A maximum entropy (ME) methodology was used to infer the Hubble constant from the temperature anisotropies in cosmic microwave background (CMB) measurements, as measured by the Planck satellite. A simple cosmological model provided physical insight and afforded robust statistical sampling of a parameter space. The parameter space included the spectral tilt and amplitude of adiabatic density fluctuations of the early universe and the present-day ratios of dark energy, matter, and baryonic matter density. A statistical temperature was estimated by applying the equipartition theorem, which uniquely specifies a posterior probability distribution. The ME analysis inferred the mean value of the Hubble constant to be about 67 km/s/Mpc with a conservative standard deviation of approximately 4.4 km/s/Mpc. Unlike standard Bayesian analyses that incorporate specific noise models, the ME approach treats the model error generically, thereby producing broader, but less assumption-dependent, uncertainty bounds. The inferred ME value lies within 1σ of both early-universe estimates (Planck, Dark Energy Spectroscopic Instrument (DESI)) and late-universe measurements (e.g., the Chicago Carnegie Hubble Program (CCHP)) using redshift data collected from the James Webb Space Telescope (JWST). Thus, the ME analysis does not appear to support the existence of the Hubble tension. Full article
(This article belongs to the Special Issue Insight into Entropy)
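The ME construction sketched above can be illustrated on a grid: with a statistical temperature T fixed by equipartition, the posterior is proportional to exp(-E/T) for a data misfit E, and the mean and standard deviation follow by quadrature. The quadratic misfit and T below are invented; only the mean/width extraction mirrors the article:

```python
import numpy as np

# Grid over the Hubble constant (km/s/Mpc); the quadratic misfit E and the
# statistical temperature T are invented for illustration only.
H0 = np.linspace(50.0, 90.0, 4001)
dx = H0[1] - H0[0]
E = 0.5 * ((H0 - 67.0) / 4.4) ** 2
T = 1.0

# ME posterior p(H0) proportional to exp(-E/T), normalized on the grid
p = np.exp(-E / T)
p /= p.sum() * dx

mean = (H0 * p).sum() * dx
std = np.sqrt((((H0 - mean) ** 2) * p).sum() * dx)
```

For this synthetic misfit the recovered mean and width land near 67 and 4.4, matching the form of the reported ME estimate; the real analysis evaluates E over the full multi-parameter cosmological model.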

37 pages, 6001 KiB  
Article
Deep Learning-Based Crack Detection on Cultural Heritage Surfaces
by Wei-Che Huang, Yi-Shan Luo, Wen-Cheng Liu and Hong-Ming Liu
Appl. Sci. 2025, 15(14), 7898; https://doi.org/10.3390/app15147898 - 15 Jul 2025
Abstract
This study employs a deep learning-based object detection model, GoogleNet, to identify cracks in cultural heritage images. Subsequently, a semantic segmentation model, SegNet, is utilized to determine the location and extent of the cracks. To establish a scale ratio between image pixels and real-world dimensions, a parallel laser-based measurement approach is applied, enabling precise crack length calculations. The results indicate that the percentage error between crack lengths estimated using deep learning and those measured with a caliper is approximately 3%, demonstrating the feasibility and reliability of the proposed method. Additionally, the study examines the impact of iteration count, image quantity, and image category on the performance of GoogleNet and SegNet. While increasing the number of iterations significantly improves the models’ learning performance in the early stages, excessive iterations lead to overfitting. The optimal performance for GoogleNet was achieved at 75 iterations, whereas SegNet reached its best performance after 45,000 iterations. Similarly, while expanding the training dataset enhances model generalization, an excessive number of images may also contribute to overfitting. GoogleNet exhibited optimal performance with a training set of 66 images, while SegNet achieved the best segmentation accuracy when trained with 300 images. Furthermore, the study investigates the effect of different crack image categories by classifying datasets into four groups: general cracks, plain wall cracks, mottled wall cracks, and brick wall cracks. The findings reveal that training GoogleNet and SegNet with general crack images yielded the highest model performance, whereas training with a single crack category substantially reduced generalization capability. Full article
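The parallel-laser calibration described above reduces to a ratio: two laser dots a known real-world distance apart fix a mm-per-pixel scale, which converts the segmented crack's pixel length to millimetres. A sketch with hypothetical numbers:

```python
def scale_ratio(laser_spacing_mm, laser_spacing_px):
    """mm-per-pixel ratio from two laser dots a known distance apart."""
    return laser_spacing_mm / laser_spacing_px

def crack_length_mm(crack_length_px, ratio):
    """Convert a segmented crack's pixel length to millimetres."""
    return crack_length_px * ratio

# Hypothetical values: dots 50 mm apart span 200 px; crack spans 480 px
ratio = scale_ratio(50.0, 200.0)          # 0.25 mm per pixel
length = crack_length_mm(480.0, ratio)    # 120.0 mm

# Percentage error against a hypothetical caliper reading of 123.5 mm
error_pct = abs(length - 123.5) / 123.5 * 100.0
```

This is the same percentage-error comparison against caliper measurements that the study reports at approximately 3%.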

24 pages, 2011 KiB  
Article
Pharmacokinetics of Pegaspargase with a Limited Sampling Strategy for Asparaginase Activity Monitoring in Children with Acute Lymphoblastic Leukemia
by Cristina Matteo, Antonella Colombini, Marta Cancelliere, Tommaso Ceruti, Ilaria Fuso Nerini, Luca Porcu, Massimo Zucchetti, Daniela Silvestri, Maria Grazia Valsecchi, Rosanna Parasole, Luciana Vinti, Nicoletta Bertorello, Daniela Onofrillo, Massimo Provenzi, Elena Chiocca, Luca Lo Nigro, Laura Rachele Bettini, Giacomo Gotti, Silvia Bungaro, Martin Schrappe, Paolo Ubezio and Carmelo Rizzari
Pharmaceutics 2025, 17(7), 915; https://doi.org/10.3390/pharmaceutics17070915 (registering DOI) - 15 Jul 2025
Abstract
Background: Asparaginase (ASPase) plays an important role in the therapy of acute lymphoblastic leukemia (ALL). Serum ASPase activity (SAA) can be modified and even abolished by host immune responses; therefore, current treatment guidelines recommend monitoring SAA during treatment administration. The SAA monitoring schedule needs to be carefully planned to reduce the number of samples without hampering the possibility of measuring pharmacokinetic (PK) parameters in individual patients. Complex modelling approaches, not easily applicable in common practice, have been applied in previous studies to estimate ASPase PK parameters. This study aimed to estimate PK parameters by using a simplified approach suitable for real-world settings with limited sampling. Methods: Our study was based on 434 patients treated in Italy within the AIEOP-BFM ALL 2009 trial. During the induction phase, patients received two doses of pegylated ASPase and were monitored with blood sampling at five time points, including time 0. PK parameters were estimated by using the individually available SAA measurements with simple modifications of the classical non-compartmental PK analysis. We also took the opportunity to develop and validate a series of limited sampling models to predict ASPase exposure. Results: During the induction phase, average ASPase activity at day 7 was 1380 IU/L after the first dose and 1948 IU/L after the second dose; therapeutic SAA levels (>100 IU/L) were maintained until day 33 in 90.1% of patients. The average AUC and clearance were 46,937 IU/L × day and 0.114 L/day/m2, respectively. The database was analyzed for possible associations of PK parameters with biological characteristics of the patients, finding only a limited dependence on sex, age and risk score; however, these differences were not sufficient to justify any dose or schedule adjustments. Thereafter, the possibility of further reducing sampling by using simple linear models to estimate the AUC was explored. The simplest model required only two samples, drawn 7 days after each ASPase dose, with the AUC being proportional to the sum of the two measured activities A(7) and A(21), calculated by the formula AUC = 14.1 × [A(7) + A(21)]. This model predicts the AUC with a 6% average error and a 35% maximum error compared to the AUC estimated with all available measures. Conclusions: Our study demonstrates the feasibility of direct estimation of PK parameters in a real-life situation with limited and variable blood sampling schedules, and it offers a simplified method and formulae easily applicable in clinical practice while maintaining reliable pharmacokinetic monitoring. Full article
(This article belongs to the Section Pharmacokinetics and Pharmacodynamics)
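The abstract's simplest limited-sampling model is directly computable: AUC = 14.1 × [A(7) + A(21)], with A(7) and A(21) the serum ASPase activities (IU/L) measured 7 days after each dose:

```python
def auc_two_sample(a7, a21, k=14.1):
    """Limited-sampling AUC estimate (IU/L x day) from A(7) and A(21)."""
    return k * (a7 + a21)

# Cohort-average activities reported in the abstract (IU/L)
auc = auc_two_sample(1380.0, 1948.0)
```

With the cohort-average activities this gives 46,924.8 IU/L × day, within 0.03% of the reported average AUC of 46,937 IU/L × day; individual patients would use their own two measurements, subject to the reported 6% average (35% maximum) error.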

18 pages, 3006 KiB  
Article
Non-Linear Regression with Repeated Data—A New Approach to Bark Thickness Modelling
by Krzysztof Ukalski and Szymon Bijak
Forests 2025, 16(7), 1160; https://doi.org/10.3390/f16071160 - 14 Jul 2025
Abstract
Broader use of multioperational machines in forestry requires efficient methods for determining various timber parameters. Here, we present a novel approach to model the bark thickness (BT) as a function of stem diameter. Stem diameter (D) is any diameter measured along the bole, not a specific one. The following four regression models were tested: marginal model (MM; reference), classical nonlinear regression with independent residuals (M1), nonlinear regression with residuals correlated within a single tree (M2), and nonlinear regression with the correlation of residuals and random components, taking into account random changes between the trees (M3). Empirical data consisted of larch (Larix sp. Mill.) BT measurements carried out at two sites in northern Poland. Relative root mean square error (RMSE%) and adjusted R-squared (R2adj) served to compare the fitted models. Model fit was tested for each tree separately and for all trees combined. Of the analysed models, M3 turned out to be the best fit at both the individual-tree and all-tree levels. The fit of the regression function M3 for SITE1 (a 50-year-old pure stand in northern Poland) was 87.44% (R2adj), and for SITE2 (a 63-year-old pure stand, also in northern Poland) it was 80.6%. Taking into account the values of RMSE%, at the individual-tree level the M3 model fit at SITE1 was closest to the MM, while at SITE2 it was better than the MM. For the most comprehensive regression model, M3, it was checked how the error of the bark thickness estimate varied with stem diameter at different heights (from the base of the trees to the top). In general, the model’s accuracy increased with greater tree height. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
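The two fit statistics used to compare the models, RMSE% and adjusted R-squared, follow their standard definitions; the toy data below are not the larch measurements:

```python
import numpy as np

def rmse_pct(y_obs, y_pred):
    """RMSE as a percentage of the mean observed value (RMSE%)."""
    rmse = np.sqrt(np.mean((y_obs - y_pred) ** 2))
    return 100.0 * rmse / np.mean(y_obs)

def r2_adj(y_obs, y_pred, n_params):
    """Adjusted R-squared for a model with n_params predictors."""
    n = len(y_obs)
    ss_res = np.sum((y_obs - y_pred) ** 2)
    ss_tot = np.sum((y_obs - np.mean(y_obs)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_params - 1)

# Toy bark-thickness data (cm), not the study's measurements
y = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
yhat = np.array([1.1, 1.4, 2.1, 2.4, 3.1, 3.4])

r = rmse_pct(y, yhat)
ra = r2_adj(y, yhat, n_params=1)
```

Reporting R2adj rather than plain R-squared penalizes the extra random-effect parameters of models like M3, which is what makes the comparison against the simpler M1/M2 fair.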

21 pages, 9506 KiB  
Article
A Stability Model for Sea Cliffs Considering the Coupled Effects of Sea Erosion and Rainfall
by Haoyu Zhao, Xu Chang, Yingbin Huang, Junlong Zhou and Zilong Ti
Oceans 2025, 6(3), 45; https://doi.org/10.3390/oceans6030045 - 14 Jul 2025
Abstract
This study proposed a sea cliff stability model that accounted for the coupled effects of sea erosion and rainfall, offering an improved quantitative assessment of the toppling risk. The approach integrated the notch morphology (height and depth) and rainfall infiltration to quantify stability, validated by field data from six toppling sites near Da’ao Bay, where the maximum erosion distance error between model predictions and measurements ranged from 0.81% to 48.8% (with <20% error for Sites S2, S3, and S4). The results indicated that the notch morphology and rainfall exerted significant impacts on the sea cliff stability. Site S4 (the highest site) corresponded to a 17.5% decrease in K per 0.1 m notch depth increment. The rainfall infiltration reduced the maximum stable notch depth, decreasing by 8.86–21.92% during prolonged rainfall. This model can predict sea cliff stability and calculate the critical notch depth (e.g., 0.56–1.22 m for the study sites), providing a quantitative framework for coastal engineering applications and disaster mitigation strategies under climate change scenarios. Full article

20 pages, 23222 KiB  
Article
A Multi-View Three-Dimensional Scanning Method for a Dual-Arm Hand–Eye System with Global Calibration of Coded Marker Points
by Tenglong Zheng, Xiaoying Feng, Siyuan Wang, Haozhen Huang and Shoupeng Li
Micromachines 2025, 16(7), 809; https://doi.org/10.3390/mi16070809 - 13 Jul 2025
Abstract
To achieve robust and accurate collaborative 3D measurement under complex noise conditions, a global calibration method for dual-arm hand–eye systems and multi-view 3D imaging is proposed. A multi-view 3D scanning approach based on ICP (M3DHE-ICP) integrates a multi-frequency heterodyne coding phase solution with ICP optimization, effectively correcting stitching errors caused by robotic arm attitude drift. After correction, the average 3D imaging error is 0.082 mm, reduced by 0.330 mm. A global calibration method based on encoded marker points (GCM-DHE) is also introduced. By leveraging spatial geometry constraints and a dynamic tracking model of marker points, the transformation between multi-coordinate systems of the dual arms is robustly solved. This reduces the average imaging error to 0.100 mm, 0.456 mm lower than that of traditional circular calibration plate methods. In actual engineering measurements, the average error for scanning a vehicle’s front mudguard is 0.085 mm, with a standard deviation of 0.018 mm. These methods demonstrate significant value for intelligent manufacturing and multi-robot collaborative measurement. Full article
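The stitching correction above rests on ICP, whose core step is the closed-form rigid alignment of matched point pairs (the Kabsch solution). A sketch on synthetic points with a simulated 10° attitude drift; a full ICP loop would additionally re-estimate correspondences and iterate:

```python
import numpy as np

def kabsch(P, Q):
    """Rigid (R, t) minimizing ||(R @ P.T).T + t - Q|| over matched rows."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

rng = np.random.default_rng(1)
P = rng.normal(size=(30, 3))              # first scan's points (synthetic)
theta = np.deg2rad(10.0)                  # simulated attitude drift
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.5, -0.2, 0.1])

R_est, t_est = kabsch(P, Q)
resid = np.abs((R_est @ P.T).T + t_est - Q).max()
```

On noiseless synthetic data the drift is recovered exactly; the article's M3DHE-ICP combines this with multi-frequency heterodyne phase decoding and the coded-marker global calibration to reach its sub-0.1 mm averages.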

20 pages, 1753 KiB  
Article
Hybrid Cloud-Based Information and Control System Using LSTM-DNN Neural Networks for Optimization of Metallurgical Production
by Kuldashbay Avazov, Jasur Sevinov, Barnokhon Temerbekova, Gulnora Bekimbetova, Ulugbek Mamanazarov, Akmalbek Abdusalomov and Young Im Cho
Processes 2025, 13(7), 2237; https://doi.org/10.3390/pr13072237 - 13 Jul 2025
Abstract
A methodology for detecting systematic errors in sets of equally accurate, uncorrelated, aggregate measurements is proposed and applied within the automatic real-time dispatch control system of a copper concentrator plant (CCP) to refine the technical and economic performance indicators (EPIs) computed by the system. This work addresses and solves the problem of selecting and obtaining reliable measurement data by exploiting the redundant measurements of process streams together with the balance equations linking those streams. This study formulates an approach for integrating cloud technologies, machine learning methods, and forecasting into information control systems (ICSs) via predictive analytics to optimize CCP production processes. A method for combining the hybrid cloud infrastructure with an LSTM-DNN neural network model has been developed, yielding a marked improvement in the EPIs of the copper concentration operations. The forecasting accuracy for the key process parameters rose from 75% to 95%. Predictive control reduced energy consumption by 10% through more efficient resource use, while copper losses to tailings fell by 15–20% thanks to optimized reagent dosing and the stabilization of the flotation process. Equipment failure prediction cut unplanned downtime by 30%. As a result, the control system became adaptive, automatically correcting the parameters in real time and lessening the reliance on operator decisions. The architectural model of an ICS for metallurgical production based on the hybrid cloud and the LSTM-DNN model was devised to enhance forecasting accuracy and optimize the EPIs of the CCP. The proposed model was experimentally evaluated against alternative neural network architectures (DNN, GRU, Transformer, and Hybrid_NN_TD_AIST). The results demonstrated the superiority of the LSTM-DNN in forecasting accuracy (92.4%), noise robustness (0.89), and a minimal root-mean-square error (RMSE = 0.079).
The model shows a strong capability to handle multidimensional, non-stationary time series and to perform adaptive measurement correction in real time. Full article
(This article belongs to the Section AI-Enabled Process Engineering)
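The reconciliation idea behind the systematic-error detection can be sketched in a few lines: redundant stream measurements are adjusted by the smallest correction that satisfies the linking balance equation (here feed = concentrate + tailings). For equally accurate, uncorrelated measurements this is the classical least-squares projection; the flow values are hypothetical:

```python
import numpy as np

# Balance equation: feed - concentrate - tailings = 0
A = np.array([[1.0, -1.0, -1.0]])
m = np.array([100.0, 38.0, 59.0])   # measured stream flows (t/h), hypothetical

# Minimal-correction reconciliation: x = m - A^T (A A^T)^-1 (A m)
correction = A.T @ np.linalg.solve(A @ A.T, A @ m)
x = m - correction

imbalance_before = (A @ m).item()   # 3.0 t/h unaccounted for
imbalance_after = (A @ x).item()    # ~0 after reconciliation
```

A persistently large pre-reconciliation residual on one balance is the signature of a systematic (rather than random) error in a particular stream's sensor, which is what the detection methodology exploits.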

19 pages, 684 KiB  
Article
A Wi-Fi Fingerprinting Indoor Localization Framework Using Feature-Level Augmentation via Variational Graph Auto-Encoder
by Dongdeok Kim, Jae-Hyeon Park and Young-Joo Suh
Electronics 2025, 14(14), 2807; https://doi.org/10.3390/electronics14142807 - 12 Jul 2025
Abstract
Wi-Fi fingerprinting is a widely adopted technique for indoor localization in location-based services (LBS) due to its cost-effectiveness and ease of deployment using existing infrastructure. However, the performance of these systems often suffers due to missing received signal strength indicator (RSSI) measurements, which can arise from complex indoor structures, device limitations, or user mobility, leading to incomplete and unreliable fingerprint data. To address this critical issue, we propose Feature-level Augmentation for Localization (FALoc), a novel framework that enhances Wi-Fi fingerprinting-based localization through targeted feature-level data augmentation. FALoc uniquely models the observation probabilities of RSSI signals by constructing a bipartite graph between reference points and access points, which is then processed by a variational graph auto-encoder (VGAE). Based on these learned probabilities, FALoc intelligently imputes likely missing RSSI values or removes unreliable ones, effectively enriching the training data. We evaluated FALoc using an MLP (Multi-Layer Perceptron)-based localization model on the UJIIndoorLoc and UTSIndoorLoc datasets. The experimental results demonstrate that FALoc significantly improves localization accuracy, achieving mean localization errors of 7.137 m on UJIIndoorLoc and 7.138 m on UTSIndoorLoc, which represent improvements of approximately 12.9% and 8.6% over the respective MLP baselines (8.191 m and 7.808 m), highlighting the efficacy of our approach in handling missing data. Full article
(This article belongs to the Special Issue Wireless Sensor Network: Latest Advances and Prospects)
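The reported gains follow directly from the stated mean errors, with improvement = (baseline - ours) / baseline; the values below are taken from the abstract:

```python
def improvement_pct(baseline_m, proposed_m):
    """Relative reduction in mean localization error, in percent."""
    return 100.0 * (baseline_m - proposed_m) / baseline_m

uji = improvement_pct(8.191, 7.137)   # UJIIndoorLoc: ~12.9%
uts = improvement_pct(7.808, 7.138)   # UTSIndoorLoc: ~8.6%
```

Both match the percentages quoted in the abstract, confirming the baselines and errors are reported on the same metric.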

39 pages, 16838 KiB  
Article
Control of Nonlinear Systems Using Fuzzy Techniques Based on Incremental State Models of the Variable Type Employing the “Extremum Seeking” Optimizer
by Basil Mohammed Al-Hadithi and Gilberth André Loja Acuña
Appl. Sci. 2025, 15(14), 7791; https://doi.org/10.3390/app15147791 - 11 Jul 2025
Abstract
This work presents the design of a control algorithm based on an augmented incremental state-space model, emphasizing its compatibility with Takagi–Sugeno (T–S) fuzzy models for nonlinear systems. The methodology integrates key components such as incremental modeling, fuzzy system identification, discrete Linear Quadratic Regulator (LQR) design, and state observer implementation. To optimize controller performance, the Extremum Seeking Control (ESC) technique is employed for the automatic tuning of LQR gains, minimizing a predefined cost function. The control strategy is formulated within a generalized framework that evolves from conventional discrete fuzzy models to a higher-order incremental-N state-space representation. The simulation results on a nonlinear multivariable thermal mixing tank system validate the effectiveness of the proposed approach under reference tracking and various disturbance scenarios, including ramp, parabolic, and higher-order polynomial signals. The main contribution of this work is that the proposed scheme achieves zero steady-state error for reference inputs and disturbances up to order N−1 by employing the incremental-N formulation. Furthermore, the system exhibits robustness against input and load disturbances, as well as measurement noise. Remarkably, the ESC algorithm maintains its effectiveness even when noise is present in the system output. Additionally, the proposed incremental-N model is applicable to fast dynamic systems, provided that the system dynamics are accurately identified and the model is discretized using a suitable sampling rate. This makes the approach particularly relevant for control applications in electrical systems, where handling high-order reference signals and disturbances is critical. The incremental formulation, thus, offers a practical and effective framework for achieving high-performance control in both slow and fast nonlinear multivariable processes. Full article
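The ESC tuning loop above can be sketched for a single parameter: a sinusoidal dither probes the cost, demodulation estimates the local gradient, and an integrator drives the parameter toward the minimizer. The toy quadratic cost (minimum at 2.0) and all gains below are invented, not the article's LQR cost:

```python
import numpy as np

def cost(k):
    """Toy cost with its minimum at k = 2.0 (stands in for the LQR cost)."""
    return (k - 2.0) ** 2 + 1.0

k_hat = 0.0      # initial parameter estimate
a = 0.05         # dither amplitude
omega = 1.0      # dither frequency (rad/s)
gain = 0.02      # integrator gain
dt = 0.1

for step in range(60000):
    t = step * dt
    J = cost(k_hat + a * np.sin(omega * t))   # probe the cost
    grad_est = J * np.sin(omega * t)          # demodulate: ~ (a/2) * dJ/dk
    k_hat -= gain * grad_est * dt             # integrate toward the minimum

err = abs(k_hat - 2.0)
```

In the article's setting the scalar `cost` would be the closed-loop LQR performance index evaluated on the running plant, and one such loop (usually with a high-pass filter before demodulation) runs per tuned weight.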

21 pages, 1682 KiB  
Article
Dynamic Multi-Path Airflow Analysis and Dispersion Coefficient Correction for Enhanced Air Leakage Detection in Complex Mine Ventilation Systems
by Yadong Wang, Shuliang Jia, Mingze Guo, Yan Zhang and Yongjun Wang
Processes 2025, 13(7), 2214; https://doi.org/10.3390/pr13072214 - 10 Jul 2025
Abstract
Mine ventilation systems are critical for ensuring operational safety, yet air leakage remains a pervasive challenge, leading to energy inefficiency and heightened safety risks. Traditional tracer gas methods, while effective in simple networks, exhibit significant errors in complex multi-entry systems due to static empirical parameters and environmental interference. This study proposes an integrated methodology that combines multi-path airflow analysis with dynamic longitudinal dispersion coefficient correction to enhance the accuracy of air leakage detection. Utilizing sulfur hexafluoride (SF6) as the tracer gas, a phased release protocol with temporal isolation was implemented across five strategic points in a coal mine ventilation network. High-precision detectors (Brüel & Kjær 1302) and the MIVENA system enabled synchronized data acquisition and 3D network modeling. Theoretical models were dynamically calibrated using field-measured airflow velocities and dispersion coefficients. The results revealed three deviation patterns between simulated and measured tracer peaks: Class A deviation showed 98.5% alignment in single-path scenarios, Class B deviation highlighted localized velocity anomalies from Venturi effects, and Class C deviation identified recirculation vortices due to abrupt cross-sectional changes. Simulation accuracy improved from 70% to over 95% after introducing wind speed and dispersion adjustment coefficients, resolving concealed leakage pathways between critical nodes. The study demonstrates that dynamic correction of dispersion coefficients combined with multi-path decomposition effectively mitigates errors caused by turbulence and geometric irregularities. This approach provides a robust framework for optimizing ventilation systems, reducing invalid airflow losses, and advancing intelligent ventilation management through real-time monitoring integration. Full article
(This article belongs to the Section Process Control and Monitoring)
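As a rough illustration of why the dispersion coefficient matters for matching tracer peaks, the standard one-dimensional advection-dispersion solution for an instantaneous release can be evaluated at a monitoring point. The airway length, air speed, and coefficient values below are hypothetical, and this textbook single-path model is not the paper's full multi-path network methodology:

```python
import numpy as np

def tracer_concentration(x, t, u, D, mass=1.0):
    """Standard 1D advection-dispersion solution for an instantaneous
    tracer release at x = 0, t = 0 (per unit cross-section)."""
    return (mass / np.sqrt(4.0 * np.pi * D * t)
            * np.exp(-(x - u * t) ** 2 / (4.0 * D * t)))

# Hypothetical airway: monitoring point 500 m downstream, mean speed 2 m/s.
x, u = 500.0, 2.0
t = np.linspace(1.0, 600.0, 5000)

# Illustrative empirical vs. field-corrected dispersion coefficients (m^2/s).
c_empirical = tracer_concentration(x, t, u, D=5.0)
c_corrected = tracer_concentration(x, t, u, D=12.0)

# Advection pins the arrival time near x/u = 250 s; the dispersion
# coefficient mainly reshapes the peak (larger D -> lower, wider peak).
t_peak = t[np.argmax(c_corrected)]
```

This separation — arrival time set mostly by velocity, peak shape by dispersion — is why the paper calibrates both wind speed and dispersion adjustment coefficients against field measurements.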

12 pages, 1253 KiB  
Article
The Feasibility of a Music Therapy Respiratory Telehealth Protocol on Long COVID Respiratory Symptoms
by Jingwen Zhang, Joanne V. Loewy, Lisa Spielman, Zijian Chen and Jonathan M. Raskin
COVID 2025, 5(7), 107; https://doi.org/10.3390/covid5070107 - 10 Jul 2025
Abstract
Objective: This study aims to investigate the feasibility of an online music therapy protocol for individuals previously diagnosed with COVID-19, focusing on their perceptions of their respiratory symptoms and the intervention’s impact on psychosocial measures. Methods: A within-subject experimental design was applied to examine an eight-week weekly online music therapy protocol, including singing, wind instrument playing, and music visualizations. All self-report data were collected bi-weekly throughout the 16-week study period, including baseline and post-tests. The measures for respiratory symptoms included the Medical Research Council’s Dyspnea Scale (MRC Dyspnea), Chronic Respiratory Questionnaire-Mastery Scores (CRQ Mastery), and the Visual Analogue Scale for breathlessness. The measures for the secondary psychosocial outcomes were the Beck Depression Inventory-Short Form, the Generalized Anxiety Disorder 7-item, the Hospital Anxiety and Depression Scale, the Fatigue Severity Scale, the Epworth Sleepiness Scale, the EuroQol 5-Dimension 5-Level, and the Connor-Davidson Resilience Scale. Results: Twenty-four participants were enrolled. The participants perceived a reduction in respiratory symptoms and shortness of breath (MRC Dyspnea). Planned comparisons showed significant decreases in MRC from baseline to post-treatment (p = 0.008). The mixed-effects model, including pre-baseline and post-treatment, was significant (p < 0.001). Significant changes in Breathing VAS were consistent with improvements in MRC Dyspnea, showing a significant baseline-to-post difference (p = 0.01). The CRQ Mastery showed significant improvements from baseline to Week 12 (p < 0.001). No significant changes were observed in other secondary measures.
Conclusions: Our preliminary findings suggest that this protocol is feasible and may help individuals previously diagnosed with COVID-19 to cope with lasting respiratory symptoms and improve their perception of shortness of breath. Live music-making, including playing accessible wind instruments and singing, may contribute to an increased sense of control over breathing. As this was a feasibility study, we conducted multiple uncorrected statistical comparisons to explore potential effects. While this approach may increase the risk of Type I error, the findings are intended to inform hypotheses for future confirmatory studies rather than to draw definitive conclusions. Full article
(This article belongs to the Section Long COVID and Post-Acute Sequelae)
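A planned baseline-to-post comparison on paired scores of the kind reported above can be sketched as a paired t-test. The scores below are invented for illustration only, not the study's data, and the study's actual analysis additionally used mixed-effects models across all bi-weekly time points:

```python
import numpy as np
from scipy import stats

# Invented paired MRC Dyspnea scores (1-5) for 24 participants --
# a synthetic illustration, not the study's data.
baseline = np.array([3, 4, 2, 3, 5, 3, 4, 2, 3, 4, 3, 5,
                     2, 3, 4, 3, 2, 4, 3, 5, 3, 4, 2, 3], dtype=float)
improvement = np.array([1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1,
                        0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1], dtype=float)
post = baseline - improvement  # lower MRC score = less dyspnea

# Paired t-test on baseline vs. post-treatment scores; a positive
# t statistic indicates lower (improved) post-treatment scores.
t_stat, p_value = stats.ttest_rel(baseline, post)
```

With multiple such uncorrected comparisons, the Type I error caveat stated in the abstract applies: any single p-value below 0.05 is hypothesis-generating rather than confirmatory.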
