Search Results (12,756)

Search Parameters:
Keywords = real data applications

21 pages, 6948 KB  
Article
Unveiling Surface Water Quality and Key Influencing Factors in China Using a Machine Learning Approach
by Yanli Li, Lei Liu, Lei Cheng and Yahui Shan
Sustainability 2025, 17(20), 9205; https://doi.org/10.3390/su17209205 (registering DOI) - 17 Oct 2025
Abstract
Surface water quality assessment is critical for environmental protection and public health management, yet traditional methods are often time-consuming and costly, limiting their application for real-time monitoring. Machine learning (ML) approaches offer promising alternatives for automated water quality assessment and understanding of key influencing factors. This study employed six ML algorithms to predict water quality grades using comprehensive data from China’s national surface water monitoring network. A dataset comprising 79,015 water quality measurements collected from 1 January to 14 February 2025 was processed with nine physicochemical parameters as input features. The XGBoost model demonstrated superior predictive performance with 99.04% accuracy. Feature importance analysis revealed that nutrient-related parameters (total phosphorus, permanganate index, ammonia nitrogen) consistently ranked as the most critical factors across all models. SHAP analysis provided interpretable explanations of model predictions, revealing grade-specific discrimination patterns where excellent quality waters are primarily distinguished by phosphorus limitation, while severely polluted waters require multi-parameter approaches. This study demonstrates the effectiveness of ML approaches for large-scale water quality assessment and provides a scientific foundation for optimizing monitoring strategies and environmental management decisions in China’s surface water systems. Full article
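A minimal sketch of the modeling pattern this abstract describes: grade classification from nine physicochemical parameters followed by a feature-importance ranking. The data are synthetic, scikit-learn's RandomForestClassifier stands in for XGBoost, and the parameter names and coefficients are illustrative assumptions, not the study's values.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier  # stand-in for XGBoost
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 900
# Hypothetical physicochemical parameters (names assumed for illustration).
features = ["pH", "DO", "CODMn", "NH3-N", "TP", "TN",
            "turbidity", "conductivity", "temp"]
X = rng.normal(size=(n, len(features)))
# Assume grade is driven mainly by nutrient-related columns (TP, CODMn, NH3-N),
# mirroring the feature-importance finding reported in the abstract.
score = 2.0 * X[:, 4] + 1.5 * X[:, 2] + 1.0 * X[:, 3] + 0.2 * rng.normal(size=n)
y = np.digitize(score, np.quantile(score, [0.2, 0.4, 0.6, 0.8]))  # 5 grades

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
top = features[int(np.argmax(model.feature_importances_))]
print(f"accuracy={acc:.2f}, most important feature={top}")
```

With this synthetic setup the classifier recovers TP as the dominant feature, echoing the phosphorus-limitation pattern the SHAP analysis reports.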
22 pages, 4171 KB  
Article
Enhanced Voltage Balancing Algorithm and Implementation of a Single-Phase Modular Multilevel Converter for Power Electronics Applications
by Valentine Obiora, Wenzhi Zhou, Wissam Jamal, Chitta Saha, Soroush Faramehr and Petar Igic
Machines 2025, 13(10), 955; https://doi.org/10.3390/machines13100955 (registering DOI) - 16 Oct 2025
Abstract
This paper presents an innovative primary control strategy for a modular multilevel converter aimed at enhancing reliability and dynamic performance for power electronics applications. The proposed method utilises interactive modelling tools, including MATLAB Simulink (2022b) for algorithm design and Typhoon HIL (2023.2) for real-time validation. The circuit design and component analysis were carried out using Proteus Design Suite (v8.17) and LTSpice (v17) to optimise the hardware implementation. A power hardware-in-the-loop experimental test setup was built to demonstrate the robustness and adaptability of the control algorithm under fixed load conditions. The simulation results were compared and verified against the experimental data. Additionally, the proposed control strategy was successfully validated through experiments, demonstrating its effectiveness in simplifying control development through efficient co-simulation. Full article
(This article belongs to the Special Issue Power Converters: Topology, Control, Reliability, and Applications)
14 pages, 843 KB  
Article
A Scalarized Entropy-Based Model for Portfolio Optimization: Balancing Return, Risk and Diversification
by Florentin Șerban and Silvia Dedu
Mathematics 2025, 13(20), 3311; https://doi.org/10.3390/math13203311 - 16 Oct 2025
Abstract
Portfolio optimization is a cornerstone of modern financial decision-making, traditionally based on the mean–variance model introduced by Markowitz. However, this framework relies on restrictive assumptions—such as normally distributed returns and symmetric risk preferences—that often fail in real-world markets, particularly in volatile and non-Gaussian environments such as cryptocurrencies. To address these limitations, this paper proposes a novel multi-objective model that combines expected return maximization, mean absolute deviation (MAD) minimization, and entropy-based diversification into a unified optimization structure: the Mean–Deviation–Entropy (MDE) model. The MAD metric offers a robust alternative to variance by capturing the average magnitude of deviations from the mean without inflating extreme values, while entropy serves as an information-theoretic proxy for portfolio diversification and uncertainty. Three entropy formulations are considered—Shannon entropy, Tsallis entropy, and cumulative residual Sharma–Taneja–Mittal entropy (CR-STME)—to explore different notions of uncertainty and structural diversity. The MDE model is formulated as a tri-objective optimization problem and solved via scalarization techniques, enabling flexible trade-offs between return, deviation, and entropy. The framework is empirically tested on a cryptocurrency portfolio composed of Bitcoin (BTC), Ethereum (ETH), Solana (SOL), and Binance Coin (BNB), using daily data over a 12-month period. The empirical setting reflects a high-volatility, high-skewness regime, ideal for testing entropy-driven diversification. Comparative outcomes reveal that entropy-integrated models yield more robust weightings, particularly when tail risk and regime shifts are present. 
Comparative results against classical mean–variance and mean–MAD models indicate that the MDE model achieves improved diversification, enhanced allocation stability, and greater resilience to volatility clustering and tail risk. This study contributes to the literature on robust portfolio optimization by integrating entropy as a formal objective within a scalarized multi-criteria framework. The proposed approach offers promising applications in sustainable investing, algorithmic asset allocation, and decentralized finance, especially under high-uncertainty market conditions. Full article
(This article belongs to the Section E5: Financial Mathematics)
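The scalarized trade-off at the heart of the MDE model can be sketched as a single objective: expected return minus a MAD penalty plus an entropy bonus. The weights `lambda_` and `gamma`, the Shannon-entropy choice, and the naive random-search solver below are illustrative assumptions, not the paper's formulation or optimizer.

```python
import numpy as np

rng = np.random.default_rng(42)
T, n_assets = 250, 4  # e.g., daily returns for BTC, ETH, SOL, BNB (synthetic here)
R = rng.normal(0.001, 0.04, size=(T, n_assets))

def mde_objective(w, R, lambda_=1.0, gamma=0.1):
    """Scalarized Mean-Deviation-Entropy score for weight vector w."""
    port = R @ w
    mean_ret = port.mean()
    mad = np.abs(port - mean_ret).mean()      # mean absolute deviation
    entropy = -np.sum(w * np.log(w + 1e-12))  # Shannon entropy of the weights
    return mean_ret - lambda_ * mad + gamma * entropy

# Naive solver: sample candidate weights from the simplex and keep the best.
candidates = rng.dirichlet(np.ones(n_assets), size=5000)
scores = np.array([mde_objective(w, R) for w in candidates])
best = candidates[scores.argmax()]
print("best weights:", np.round(best, 3))
```

The entropy term pushes the optimum away from corner solutions, which is the diversification effect the abstract attributes to the entropy-integrated models.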
33 pages, 6175 KB  
Article
Fluorocarbon Interfacial Modifier: Wettability Alteration in Reservoir Rocks for Enhanced Oil Recovery and Field Application
by Ruiyang Liu, Huabin Li, Zhe Li, Xudong Yu, Lide He, Xutong Guo, Feng Zhao, Huaqiang Shi and Wenzhao Sun
Energies 2025, 18(20), 5463; https://doi.org/10.3390/en18205463 (registering DOI) - 16 Oct 2025
Abstract
The peripheral reservoirs of the Daqing Oilfield exhibit low permeability and partial heterogeneity, resulting in a rapid injection pressure increase, limited sweep efficiency, and significant residual oil retention. To enhance recovery, this study synthesized a fluorocarbon siloxane (FHB) via free radical addition for rock surface wettability modification. At a concentration of 0.1 wt%, FHB increased water and oil contact angles to 136° and 117°, respectively, at 60 °C. Fourier transform infrared spectroscopy, thermogravimetric analysis, and aging tests confirmed stable hydrophobic/oleophobic properties through chemical bonding to the rock. Furthermore, the low surface energy FHB significantly reduced adhesion work and decreased oil-water interfacial tension from 27 mN/m to 0.55 mN/m, thereby improving fluid transport in pore throats and promoting residual oil mobilization. Core flooding experiments resulted in an increase in total recovery by 11%, with low-field NMR analysis confirming reduced oil saturation across various pore sizes. A field trial in a production well in Daqing Oilfield successfully increased output from 3.1 t/d to 4.9 t/d, validating the efficacy of this strategy under real reservoir conditions—representing the first successful field application of a fluorocarbon-based modifier for wettability alteration and oil production enhancement in China. This study provides valuable experimental data and a practical framework for implementing chemical-enhanced recovery. Full article
(This article belongs to the Section I1: Fuel)
17 pages, 1731 KB  
Article
Comparative Analysis of Statistical and AI-Based Methods for Livestock Monitoring in Extensive Systems
by Marco Bonfanti, Dominga Mancuso, Giulia Castagnolo and Simona Maria Carmela Porto
Appl. Sci. 2025, 15(20), 11116; https://doi.org/10.3390/app152011116 - 16 Oct 2025
Abstract
In recent years, research on extensive farming systems has attracted considerable interest among experts in the field. Environmental sustainability and animal welfare are emerging as key elements, assuming a crucial role in global agriculture. In this context, monitoring animals is important not only to ensure their welfare, but also to preserve the balance of the land. Inadequate grazing management can in fact damage vegetation due to soil erosion. Therefore, monitoring the habits of animals during grazing is a challenging and crucial task for livestock management. Internet of Things (IoT) technologies, which allow for remote and real-time monitoring, may be a valid solution to these challenges in extensive farms where farmer-to-animal contact is not usual. In this regard, this paper examined three different methods to classify the behavioral activities of grazing cows, using data collected with collars equipped with accelerometers. Three distinct approaches were compared: the first based on statistical methods, and the other two on Machine and Deep Learning techniques. From the comparison of the results obtained, the strengths and weaknesses of each approach were examined, so as to determine the most appropriate choice in relation to the characteristics of extensive livestock systems. In detail, the Machine and Deep Learning-based approaches were found to be more accurate but highly energy-intensive. Therefore, in rural environments, the approach based on statistical methods, combined with LPWAN applications, was preferable due to its long range and low energy consumption. Ultimately, the statistical approach was found to be 64% accurate in classifying four behavioral classes. Full article
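The statistical route described above can be sketched as windowed accelerometer features plus a threshold rule. The sampling rate, thresholds, and behavior labels below are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, window_s = 25, 5  # assumed: 25 Hz collar accelerometer, 5-second windows

def features(window):
    """Statistical features of a (samples, 3) accelerometer window."""
    mag = np.linalg.norm(window, axis=1)  # magnitude of 3-axis acceleration
    return mag.mean(), mag.std()

def classify(window):
    """Threshold rule on magnitude variability (thresholds are illustrative)."""
    _, std = features(window)
    if std < 0.05:
        return "resting"
    if std < 0.3:
        return "grazing"
    return "walking"

# Synthetic windows: low-variance resting vs. high-variance walking (units of g).
resting = 1.0 + rng.normal(0, 0.01, size=(fs * window_s, 3))
walking = 1.0 + rng.normal(0, 0.5, size=(fs * window_s, 3))
print(classify(resting), classify(walking))
```

A rule this simple runs comfortably on a low-power collar node, which is why the abstract favors the statistical approach over ML/DL in LPWAN-connected extensive farms.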
20 pages, 6942 KB  
Article
Coherent Dynamic Clutter Suppression in Structural Health Monitoring via the Image Plane Technique
by Mattia Giovanni Polisano, Marco Manzoni, Stefano Tebaldini, Damiano Badini and Sergi Duque
Remote Sens. 2025, 17(20), 3459; https://doi.org/10.3390/rs17203459 - 16 Oct 2025
Abstract
In this work, a radar imagery-based signal processing technique to eliminate dynamic clutter interference in Structural Health Monitoring (SHM) is proposed. This can be considered an application of a joint communication and sensing telecommunication infrastructure, leveraging a base station as a ground-based radar. The dynamic clutter is taken to be a fast-moving road user, such as a car, truck, or moped. The proposed technique targets cases in which the dynamic clutter's Doppler contribution aliases and falls onto the 0 Hz component, where a standard low-pass filter is not a viable option: an excessively shallow low-pass filter preserves the dynamic clutter contribution, while an excessively narrow one deletes the displacement information and still preserves the dynamic clutter. The proposed approach leverages Time Domain Backprojection (TDBP), a well-known technique for producing radar imagery, to transfer the dynamic clutter from the data domain to an image plane, where the clutter is maximally compressed. Consequently, the dynamic clutter can be suppressed more effectively than in the range-Doppler domain. The clutter cancellation is performed by coherent subtraction. A numerical simulation is conducted, and its results show consistency with the ground truth. A further validation is performed using real-world data acquired in the C-band by Huawei Technologies, with corner reflectors placed on an infrastructure (a bridge) to perform the measurements. Two case studies are presented: a bus and a truck. The validation shows consistency with the ground truth, improving on the corrupted displacement in both mean error and variance. As a by-product, the algorithm can produce high-resolution imagery of moving targets. Full article
31 pages, 3812 KB  
Review
Generative Adversarial Networks in Dermatology: A Narrative Review of Current Applications, Challenges, and Future Perspectives
by Rosa Maria Izu-Belloso, Rafael Ibarrola-Altuna and Alex Rodriguez-Alonso
Bioengineering 2025, 12(10), 1113; https://doi.org/10.3390/bioengineering12101113 - 16 Oct 2025
Abstract
Generative Adversarial Networks (GANs) have emerged as powerful tools in artificial intelligence (AI) with growing relevance in medical imaging. In dermatology, GANs are revolutionizing image analysis, enabling synthetic image generation, data augmentation, color standardization, and improved diagnostic model training. This narrative review explores the landscape of GAN applications in dermatology, systematically analyzing 27 key studies and identifying 11 main clinical use cases. These range from the synthesis of under-represented skin phenotypes to segmentation, denoising, and super-resolution imaging. The review also examines the commercial implementations of GAN-based solutions relevant to practicing dermatologists. We present a comparative summary of GAN architectures, including DCGAN, cGAN, StyleGAN, CycleGAN, and advanced hybrids. We analyze technical metrics used to evaluate performance—such as Fréchet Inception Distance (FID), SSIM, Inception Score, and Dice Coefficient—and discuss challenges like data imbalance, overfitting, and the lack of clinical validation. Additionally, we review ethical concerns and regulatory limitations. Our findings highlight the transformative potential of GANs in dermatology while emphasizing the need for standardized protocols and rigorous validation. While early results are promising, few models have yet reached real-world clinical integration. The democratization of AI tools and open-access datasets are pivotal to ensure equitable dermatologic care across diverse populations. This review serves as a comprehensive resource for dermatologists, researchers, and developers interested in applying GANs in dermatological practice and research. Future directions include multimodal integration, clinical trials, and explainable GANs to facilitate adoption in daily clinical workflows. Full article
(This article belongs to the Special Issue AI-Driven Imaging and Analysis for Biomedical Applications)
26 pages, 2009 KB  
Article
Tool Wear Prediction Using Machine-Learning Models for Bone Drilling in Robotic Surgery
by Shilpa Pusuluri, Hemanth Satya Veer Damineni and Poolan Vivekananda Shanmuganathan
Automation 2025, 6(4), 59; https://doi.org/10.3390/automation6040059 (registering DOI) - 16 Oct 2025
Abstract
Bone drilling is a widely encountered process in orthopedic surgeries and keyhole neurosurgeries. We are developing a sensor-integrated smart end-effector for drilling in robotic surgical applications. In manual surgeries, surgeons assess tool wear based on experience and force perception. In this work, we propose a machine-learning (ML)-based tool condition monitoring system that uses multi-sensor data to preempt excessive tool wear during drilling in robotic surgery. Real-time data is acquired from the six-component force sensor of a collaborative arm, along with data from the temperature and multi-axis vibration sensors mounted on the bone specimen being drilled. Raw sensor data may contain noise and outliers. Signal processing in the time and frequency domains is used for denoising and to derive additional features from the raw sensory data. This paper addresses the challenging problem of identifying the most suitable ML algorithm and the most suitable features to use as its inputs: while dozens of features and innumerable machine learning and deep learning models are available, we select the most relevant features, the most relevant AI models, and the optimal hyperparameters to provide accurate prediction of the tool condition. A unique framework is proposed for classifying tool wear that combines machine learning-based modeling with multi-sensor data. From the raw sensory data, which contains only a handful of features, additional features are derived using frequency-domain techniques and statistical measures. Using feature engineering, we arrived at a total of 60 features from time-domain, frequency-domain, and interaction-based metrics. Such additional features improve predictive capability but make training and prediction more complicated and time-consuming. 
Using a sequence of techniques (variance thresholding, correlation filtering, the ANOVA F-test, and SHAP analysis), the number of features was reduced from 60 to the 4 most effective for real-time tool condition prediction. In contrast to previous studies that examine only a small number of machine learning models, our approach systematically evaluates a wide range of machine learning and deep learning architectures. The performances of 47 classical ML models and 6 deep learning (DL) architectures were analyzed using the four features identified as most suitable. The Extra Trees Classifier (an ML model) and the one-dimensional Convolutional Neural Network (1D CNN) exhibited the best prediction accuracy among the models studied. These models monitored the drilling tool condition in real time to classify tool wear into three categories: slight, moderate, and severe. Full article
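The first two screening stages of that reduction pipeline can be sketched with scikit-learn's `VarianceThreshold` and ANOVA F-test (the correlation-filtering and SHAP stages are omitted, and the 60-feature synthetic matrix below is an assumption, not the authors' sensor data).

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold, SelectKBest, f_classif

rng = np.random.default_rng(7)
n = 300
y = rng.integers(0, 3, size=n)  # slight / moderate / severe wear classes
informative = y[:, None] + rng.normal(0, 0.5, size=(n, 4))  # 4 useful features
noise = rng.normal(size=(n, 55))                            # uninformative
constant = np.zeros((n, 1))                                 # zero-variance
X = np.hstack([informative, noise, constant])               # 60 features total

# Stage 1: drop (near-)constant features.
X_var = VarianceThreshold(threshold=1e-8).fit_transform(X)
# Stage 2: keep the k features with the highest ANOVA F-statistic.
selector = SelectKBest(f_classif, k=4).fit(X_var, y)
print("features kept:", sorted(selector.get_support(indices=True)))
```

On this synthetic data the pipeline recovers exactly the four informative columns, mirroring the 60-to-4 reduction reported in the abstract.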
22 pages, 8972 KB  
Article
A Comparative Study of Authoring Performances Between In-Situ Mobile and Desktop Tools for Outdoor Location-Based Augmented Reality
by Komang Candra Brata, Nobuo Funabiki, Htoo Htoo Sandi Kyaw, Prismahardi Aji Riyantoko, Noprianto and Mustika Mentari
Information 2025, 16(10), 908; https://doi.org/10.3390/info16100908 (registering DOI) - 16 Oct 2025
Abstract
In recent years, Location-Based Augmented Reality (LAR) systems have been increasingly implemented in various applications for tourism, navigation, education, and entertainment. Unfortunately, LAR content creation using conventional desktop-based authoring tools has become a bottleneck, as it requires time-consuming, skilled work. Previously, we proposed an in-situ mobile authoring tool as an efficient solution to this problem, offering direct authoring interactions in real-world environments using a smartphone. However, existing comparisons between our proposal and conventional tools are insufficient to demonstrate its superiority, particularly in terms of interaction, authoring performance, and cognitive workload: our tool uses 6DoF device movement for spatial input, while desktop tools rely on mouse pointing. In this paper, we present a comparative study of authoring performance between the tools across three authoring phases: (1) Point of Interest (POI) location acquisition, (2) AR object creation, and (3) AR object registration. For the conventional tool, we adopt Unity and the ARCore SDK. As a real-world application, we target LAR content creation for pedestrian landmark annotation across campus environments at Okayama University, Japan, and Brawijaya University, Indonesia, and identify task-level bottlenecks in both tools. In our experiments, we asked 20 participants aged 22 to 35, with differing levels of LAR development experience, to complete equivalent authoring tasks in an outdoor campus environment, creating various LAR contents. We measured task completion time, phase-wise contribution, and cognitive workload using NASA-TLX. The results show that our tool enabled faster creation with a 60% lower cognitive load, whereas the desktop tool demanded higher mental effort for manual data input and object verification. Full article
(This article belongs to the Section Information Applications)
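For readers unfamiliar with the workload metric used above, the weighted NASA-TLX score is a weighted average of six subscale ratings, with pairwise-comparison weights summing to 15. The ratings and weights below are invented for illustration, not the study's measurements.

```python
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def tlx_score(ratings, weights):
    """Weighted NASA-TLX: ratings are 0-100 per subscale; weights are
    pairwise-comparison tallies that sum to 15 in the standard procedure."""
    assert sum(weights.values()) == 15
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15

# Hypothetical subscale ratings for the two authoring tools.
mobile  = {"mental": 30, "physical": 25, "temporal": 20,
           "performance": 15, "effort": 25, "frustration": 10}
desktop = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 60, "frustration": 45}
weights = {"mental": 5, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 1}

print(tlx_score(mobile, weights), tlx_score(desktop, weights))
```

With these made-up numbers the desktop tool scores 56.0 against the mobile tool's ~23.3, the kind of gap the abstract summarizes as a 60% lower cognitive load.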
22 pages, 10515 KB  
Article
Experimental Investigations of the Melting/Solidification of Coconut Oil Using Ultrasound-Based and Image Processing Approaches
by Rafał Andrzejczyk, Radosław Drelich and Michał Pakuła
Energies 2025, 18(20), 5455; https://doi.org/10.3390/en18205455 (registering DOI) - 16 Oct 2025
Abstract
The present study aims to compare the feasibility of using ultrasound techniques and image processing to obtain comprehensive experimental results on the dynamics of solid–liquid fraction changes during the melting and solidification of coconut oil as a phase change material (PCM). The discussion will focus on the advantages and limitations of various ultrasonic techniques and image data analysis for inspecting materials during phase transitions. Ultrasound enables the detection of phase changes in materials by analysing variations in their acoustic properties, such as wave velocity and amplitude, during transitions. This method is not only cost-effective compared to traditional non-destructive techniques, such as X-ray tomography, but also offers the potential for real-time monitoring in thermal energy storage systems. Furthermore, it can provide valuable information about internal mechanical parameters and the material’s structure. A detailed analysis of the melting and solidification dynamics has been conducted, confirming the feasibility of using ultrasound parameters to assess the reconstruction of material structures during phase changes. This study paves the way for more efficient and cost-effective monitoring of phase change materials in various applications. Full article
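The phase-fraction inference that ultrasound enables can be illustrated with a back-of-envelope calculation: estimate the liquid fraction from the measured wave velocity by interpolating between solid-phase and liquid-phase sound speeds. Both the velocities and the linear mixing rule are illustrative assumptions, not values or a model from the study.

```python
# Assumed phase velocities for coconut oil, m/s (illustrative only).
V_SOLID, V_LIQUID = 1800.0, 1430.0

def liquid_fraction(v_measured):
    """Liquid fraction under a simple linear mixing assumption."""
    f = (V_SOLID - v_measured) / (V_SOLID - V_LIQUID)
    return min(1.0, max(0.0, f))  # clamp to the physical range [0, 1]

for v in (1800.0, 1615.0, 1430.0):
    print(f"v = {v} m/s -> liquid fraction ~ {liquid_fraction(v):.2f}")
```

Real PCMs deviate from linear mixing, which is why the study cross-checks the ultrasonic estimates against image processing of the melt front.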
51 pages, 4751 KB  
Review
Large Language Models and 3D Vision for Intelligent Robotic Perception and Autonomy
by Vinit Mehta, Charu Sharma and Karthick Thiyagarajan
Sensors 2025, 25(20), 6394; https://doi.org/10.3390/s25206394 (registering DOI) - 16 Oct 2025
Abstract
With the rapid advancement of artificial intelligence and robotics, the integration of Large Language Models (LLMs) with 3D vision is emerging as a transformative approach to enhancing robotic sensing technologies. This convergence enables machines to perceive, reason, and interact with complex environments through natural language and spatial understanding, bridging the gap between linguistic intelligence and spatial perception. This review provides a comprehensive analysis of state-of-the-art methodologies, applications, and challenges at the intersection of LLMs and 3D vision, with a focus on next-generation robotic sensing technologies. We first introduce the foundational principles of LLMs and 3D data representations, followed by an in-depth examination of 3D sensing technologies critical for robotics. The review then explores key advancements in scene understanding, text-to-3D generation, object grounding, and embodied agents, highlighting cutting-edge techniques such as zero-shot 3D segmentation, dynamic scene synthesis, and language-guided manipulation. Furthermore, we discuss multimodal LLMs that integrate 3D data with touch, auditory, and thermal inputs, enhancing environmental comprehension and robotic decision-making. To support future research, we catalog benchmark datasets and evaluation metrics tailored for 3D-language and vision tasks. Finally, we identify key challenges and future research directions, including adaptive model architectures, enhanced cross-modal alignment, and real-time processing capabilities, which pave the way for more intelligent, context-aware, and autonomous robotic sensing systems. Full article
(This article belongs to the Special Issue Advanced Sensors and AI Integration for Human–Robot Teaming)
25 pages, 6408 KB  
Review
Application Prospects of Optical Fiber Sensing Technology in Smart Campus Construction: A Review
by Huanhuan Zhang, Xinli Zhai and Jing Sun
Photonics 2025, 12(10), 1026; https://doi.org/10.3390/photonics12101026 - 16 Oct 2025
Abstract
As smart campus construction continues to advance, traditional safety monitoring and environmental sensing systems are increasingly showing limitations in sensitivity, anti-interference capability, and deployment flexibility. Optical fiber sensing (OFS) technology, with its advantages of high sensitivity, passive operation, immunity to electromagnetic interference, and long-distance distributed sensing, provides a novel solution for real-time monitoring and early warning of critical campus infrastructure. This review systematically examines representative applications of OFS technology in smart campus scenarios, including structural health monitoring of academic buildings, laboratory environmental sensing, and intelligent campus security. By analyzing the technical characteristics of various types of optical fiber sensors, the paper explores emerging developments and future potential of OFS in supporting intelligent campus construction. Finally, the feasibility of building data acquisition, transmission, and visualization platforms based on OFS systems is discussed, highlighting their promising roles in campus safety operations, the integration of teaching and research, and intelligent equipment management. Full article
(This article belongs to the Special Issue Applications and Development of Optical Fiber Sensors)
17 pages, 2716 KB  
Article
A Study on the Performance Comparison of Brain MRI Image-Based Abnormality Classification Models
by Jinhyoung Jeong, Sohyeon Bang, Yuyeon Jung and Jaehyun Jo
Life 2025, 15(10), 1614; https://doi.org/10.3390/life15101614 - 16 Oct 2025
Abstract
We developed a model that classifies normal and abnormal brain MRI images. This study initially referenced a small-scale real patient dataset (98 normal and 155 abnormal MRI images) provided by the National Institute of Aging (NIA) to illustrate the class imbalance challenge. However, all experiments and performance evaluations were conducted on a larger synthetic dataset (10,000 images; 5000 normal and 5000 abnormal) generated from the National Imaging System (NIS/AI Hub). Therefore, while the NIA dataset highlights the limitations of real-world data availability, the reported results are based exclusively on the synthetic dataset. In the preprocessing step, all MRI images were normalized to the same size, and data augmentation techniques such as rotation, translation, and flipping were applied to increase data diversity and reduce overfitting during training. We trained a custom CNN and fine-tuned a ResNet-50 transfer learning model initialized with ImageNet pretrained weights. We also compared the performance of these models with traditional machine learning using SVM (RBF kernel) and random forest classifiers. Experimental results showed that the ResNet-50 transfer learning model achieved the best performance, with approximately 95% accuracy and a high F1 score on the test set, while the custom CNN also performed well. In contrast, SVM and random forests showed relatively poor performance due to their inability to sufficiently learn the complex characteristics of the images. This study confirmed that deep learning techniques, including transfer learning, achieve excellent brain abnormality detection performance even with limited real-world medical data. These results highlight methodological potential but should be interpreted with caution, as further validation with real-world clinical MRI data is required before clinical applicability can be established. Full article
(This article belongs to the Section Radiobiology and Nuclear Medicine)
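The rotation, translation, and flip augmentations described in the abstract can be sketched in a few lines. The following is an illustrative NumPy version, not the authors' implementation; a real training pipeline would more likely apply such transforms on the fly via a library such as torchvision:

```python
import numpy as np

def augment(image, seed=0):
    """Generate simple augmented variants of a 2-D MRI slice:
    a 90-degree rotation, horizontal/vertical flips, and a small
    random translation (wrap-around roll, for simplicity)."""
    rng = np.random.default_rng(seed)
    variants = [image]
    variants.append(np.rot90(image))        # 90-degree rotation
    variants.append(np.fliplr(image))       # horizontal flip
    variants.append(np.flipud(image))       # vertical flip
    shift = int(rng.integers(1, 4))         # translate by 1-3 pixels
    variants.append(np.roll(image, shift, axis=1))
    return variants

slice_ = np.arange(16, dtype=np.float32).reshape(4, 4)
augmented = augment(slice_)
print(len(augmented))  # original plus four variants
```

Each input slice yields five training samples here; in practice the transform parameters (angles, shifts) would be drawn randomly at every epoch rather than fixed.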
27 pages, 21611 KB  
Article
Aggregation in Ill-Conditioned Regression Models: A Comparison with Entropy-Based Methods
by Ana Helena Tavares, Ana Silva, Tiago Freitas, Maria Costa, Pedro Macedo and Rui A. da Costa
Entropy 2025, 27(10), 1075; https://doi.org/10.3390/e27101075 - 16 Oct 2025
Abstract
Despite advances in data-analysis methodologies over recent decades, most traditional regression methods cannot be applied directly to large-scale data. Although aggregation methods are designed specifically for large-scale data, their performance can degrade sharply in ill-conditioned problems (due to collinearity). This work compares the performance of a recent approach based on normalized entropy, a concept from information theory and info-metrics, with bagging and magging, two well-established aggregation methods in the literature, providing valuable insights for regression analysis with large-scale data. While the methods achieve similar prediction accuracy, the normalized-entropy approach largely outperforms the others in estimation precision, even with a smaller number of groups and fewer observations per group, which is an important advantage in inference problems with large-scale data. This work also alerts to the risk of using the OLS estimator, particularly under collinearity, given that data scientists frequently use linear models as a simplified view of reality in big-data analysis and that the OLS estimator is routinely used in practice. Beyond the promising findings of the simulation study, our estimation and aggregation strategies show strong potential for real-world applications in fields such as econometrics, genomics, environmental sciences, and machine learning, where challenges such as noise and ill-conditioning are persistent. Full article
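As a concrete illustration of the group-wise estimate-then-aggregate pattern the paper benchmarks, here is a minimal NumPy sketch that fits OLS within random groups and averages the coefficient vectors (a bagging-style aggregate; the normalized-entropy and magging estimators studied in the paper are not reproduced here):

```python
import numpy as np

def grouped_ols(X, y, n_groups=5, seed=0):
    """Split the data into random groups, fit OLS in each group,
    and aggregate the coefficient estimates by simple averaging."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    betas = []
    for part in np.array_split(idx, n_groups):
        beta, *_ = np.linalg.lstsq(X[part], y[part], rcond=None)
        betas.append(beta)
    return np.mean(betas, axis=0)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))          # well-conditioned design
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=500)
print(grouped_ols(X, y))               # close to beta_true
```

With a well-conditioned design this averaging works well; the paper's point is precisely that under collinearity the per-group OLS solutions become unstable, so the aggregate inherits their poor precision.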
21 pages, 3303 KB  
Article
Research on Intelligent Early Warning System and Cloud Platform for Rockburst Monitoring
by Tianhui Ma, Yongle Duan, Wenshuo Duan, Hongqi Wang, Chun’an Tang, Kaikai Wang and Guanwen Cheng
Appl. Sci. 2025, 15(20), 11098; https://doi.org/10.3390/app152011098 - 16 Oct 2025
Abstract
Rockburst disasters in deep underground engineering present significant safety hazards due to complex geological conditions and high in situ stresses. To address the limitations of traditional microseismic (MS) monitoring methods—namely, vulnerability to noise interference, low recognition accuracy, and limited computational efficiency—this study proposes an intelligent real-time monitoring and early warning framework that integrates deep learning, MS monitoring, and Internet of Things (IoT) technologies. The methodology includes db4 wavelet-based signal denoising for preprocessing, an improved Gaussian Mixture Model for automated waveform recognition, a U-Net-based neural network for P-wave arrival picking, and a particle swarm optimization algorithm with Lagrange multipliers for event localization. Furthermore, a cloud-based platform is developed to support automated data processing, three-dimensional visualization, real-time warning dissemination, and multi-user access. Field application in a deep-buried railway tunnel in Southwest China demonstrates the system’s effectiveness, achieving an early warning accuracy of 87.56% during 767 days of continuous monitoring. Comparative verification further indicates that the fine-tuned neural network outperforms manual approaches in waveform picking and event identification. Overall, the proposed system provides a robust, scalable, and intelligent solution for rockburst hazard mitigation in deep underground construction. Full article
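For context on what the U-Net picker replaces, a classical baseline for P-wave arrival picking is the STA/LTA (short-term average over long-term average) energy-ratio trigger. The sketch below is an illustrative NumPy implementation on a synthetic trace, not the paper's method, and the window lengths and threshold are assumed values:

```python
import numpy as np

def sta_lta_pick(signal, sta=10, lta=50, threshold=5.0):
    """Return the first sample where the short-term / long-term
    average energy ratio exceeds the threshold, or None."""
    energy = np.asarray(signal, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    for i in range(lta, len(energy) - sta):
        lta_avg = (csum[i] - csum[i - lta]) / lta   # trailing window
        sta_avg = (csum[i + sta] - csum[i]) / sta   # leading window
        if lta_avg > 0 and sta_avg / lta_avg > threshold:
            return i
    return None

rng = np.random.default_rng(0)
trace = 0.1 * rng.normal(size=400)                  # background noise
trace[200:] += np.sin(np.linspace(0.0, 40.0, 200))  # arrival near sample 200
print(sta_lta_pick(trace))
```

Such threshold triggers are exactly where the noise sensitivity criticized in the abstract comes from: a learned picker can exploit waveform shape rather than a single energy ratio.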