Search Results (238)

Search Parameters:
Keywords = metric locating sets

17 pages, 886 KiB  
Article
Predicting Cartographic Symbol Location with Eye-Tracking Data and Machine Learning Approach
by Paweł Cybulski
J. Eye Mov. Res. 2025, 18(4), 35; https://doi.org/10.3390/jemr18040035 (registering DOI) - 7 Aug 2025
Abstract
Visual search is a core component of map reading, influenced by both cartographic design and human perceptual processes. This study investigates whether the location of a target cartographic symbol—central or peripheral—can be predicted using eye-tracking data and machine learning techniques. Two datasets were analyzed, each derived from separate studies involving visual search tasks with varying map characteristics. A comprehensive set of eye movement features, including fixation duration, saccade amplitude, and gaze dispersion, was extracted and standardized. Feature selection and polynomial interaction terms were applied to enhance model performance. Twelve supervised classification algorithms were tested, including Random Forest, Gradient Boosting, and Support Vector Machines. The models were evaluated using accuracy, precision, recall, F1-score, and ROC-AUC. Results show that models trained on the first dataset achieved higher accuracy and class separation, with AdaBoost and Gradient Boosting performing best (accuracy = 0.822; ROC-AUC > 0.86). In contrast, the second dataset presented greater classification challenges, despite high recall in some models. Feature importance analysis revealed that fixation standard deviation as a proxy for gaze dispersion, particularly along the vertical axis, was the most predictive metric. These findings suggest that gaze behavior can reliably indicate the spatial focus of visual search, providing valuable insight for the development of adaptive, gaze-aware cartographic interfaces. Full article
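The four threshold-based metrics this abstract reports (accuracy, precision, recall, F1-score) all reduce to confusion-matrix counts. As an illustrative sketch only, not the authors' code, with hypothetical binary labels (1 = one symbol-location class, 0 = the other):

```python
# Illustrative sketch, not the study's pipeline: the four reported
# classification metrics computed from binary confusion-matrix counts.
def binary_metrics(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1
```

ROC-AUC, by contrast, is threshold-free and requires the classifiers' scores rather than hard predictions, which is why it can diverge from accuracy as reported above.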

13 pages, 769 KiB  
Article
A Novel You Only Listen Once (YOLO) Deep Learning Model for Automatic Prominent Bowel Sounds Detection: Feasibility Study in Healthy Subjects
by Rohan Kalahasty, Gayathri Yerrapragada, Jieun Lee, Keerthy Gopalakrishnan, Avneet Kaur, Pratyusha Muddaloor, Divyanshi Sood, Charmy Parikh, Jay Gohri, Gianeshwaree Alias Rachna Panjwani, Naghmeh Asadimanesh, Rabiah Aslam Ansari, Swetha Rapolu, Poonguzhali Elangovan, Shiva Sankari Karuppiah, Vijaya M. Dasari, Scott A. Helgeson, Venkata S. Akshintala and Shivaram P. Arunachalam
Sensors 2025, 25(15), 4735; https://doi.org/10.3390/s25154735 - 31 Jul 2025
Viewed by 283
Abstract
Accurate diagnosis of gastrointestinal (GI) diseases typically requires invasive procedures or imaging studies that pose the risk of various post-procedural complications or involve radiation exposure. Bowel sounds (BSs), though typically described during a GI-focused physical exam, are highly inaccurate and variable, with low clinical value in diagnosis. Interpretation of the acoustic characteristics of BSs, i.e., using a phonoenterogram (PEG), may aid in diagnosing various GI conditions non-invasively. Use of artificial intelligence (AI) and improvements in computational analysis can enhance the use of PEGs in different GI diseases and lead to a non-invasive, cost-effective diagnostic modality that has not been explored before. The purpose of this work was to develop an automated AI model, You Only Listen Once (YOLO), to detect prominent bowel sounds that can enable real-time analysis for future GI disease detection and diagnosis. A total of 110 2-minute PEGs sampled at 44.1 kHz were recorded using the Eko DUO® stethoscope from eight healthy volunteers at two locations, namely, left upper quadrant (LUQ) and right lower quadrant (RLQ) after IRB approval. The datasets were annotated by trained physicians, categorizing BSs as prominent or obscure using version 1.7 of Label Studio Software®. Each BS recording was split up into 375 ms segments with 200 ms overlap for real-time BS detection. Each segment was binned based on whether it contained a prominent BS, resulting in a dataset of 36,149 non-prominent segments and 6435 prominent segments. Our dataset was divided into training, validation, and test sets (60/20/20% split). A 1D-CNN augmented transformer was trained to classify these segments via the input of Mel-frequency cepstral coefficients. The developed AI model achieved area under the receiver operating curve (ROC) of 0.92, accuracy of 86.6%, precision of 86.85%, and recall of 86.08%. 
This shows that the 1D-CNN augmented transformer with Mel-frequency cepstral coefficients achieved creditable performance metrics, signifying the YOLO model’s capability to classify prominent bowel sounds that can be further analyzed for various GI diseases. This proof-of-concept study in healthy volunteers demonstrates that automated BS detection can pave the way for developing more intuitive and efficient AI-PEG devices that can be trained and utilized to diagnose various GI conditions. To ensure the robustness and generalizability of these findings, further investigations encompassing a broader cohort, inclusive of both healthy and disease states are needed. Full article
(This article belongs to the Special Issue Biomedical Signals, Images and Healthcare Data Analysis: 2nd Edition)
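The windowing scheme described above (375 ms segments with 200 ms overlap at 44.1 kHz) can be sketched as follows; this is a hypothetical reconstruction for illustration, not the authors' code, and the function and parameter names are assumptions:

```python
# Sketch of fixed-length overlapping segmentation, as described in the
# abstract: 375 ms windows, 200 ms overlap (i.e., a 175 ms stride).
def segment_recording(signal, sr=44100, win_s=0.375, overlap_s=0.200):
    """Split a 1-D audio signal into overlapping fixed-length windows."""
    win = int(win_s * sr)                # samples per window
    hop = int((win_s - overlap_s) * sr)  # stride between window starts
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, hop)]
```

Each resulting window would then be labeled (prominent vs. non-prominent) and converted to Mel-frequency cepstral coefficients before classification.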

32 pages, 8202 KiB  
Article
A Machine Learning-Based Method for Lithology Identification of Outcrops Using TLS-Derived Spectral and Geometric Features
by Yanlin Shao, Peijin Li, Ran Jing, Yaxiong Shao, Lang Liu, Kunpeng Zhao, Binqing Gan, Xiaolei Duan and Longfan Li
Remote Sens. 2025, 17(14), 2434; https://doi.org/10.3390/rs17142434 - 14 Jul 2025
Viewed by 271
Abstract
Lithological identification of outcrops in complex geological settings plays a crucial role in hydrocarbon exploration and geological modeling. To address the limitations of traditional field surveys, such as low efficiency and high risk, we proposed an intelligent lithology recognition method, SG-RFGeo, for terrestrial laser scanning (TLS) outcrop point clouds, which integrates spectral and geometric features. The workflow involves several key steps. First, lithological recognition units are created through regular grid segmentation. From these units, spectral reflectance statistics (e.g., mean, standard deviation, kurtosis, and other related metrics), and geometric morphological features (e.g., surface variation rate, curvature, planarity, among others) are extracted. Next, a double-layer random forest model is employed for lithology identification. In the shallow layer, the Gini index is used to select relevant features for a coarse classification of vegetation, conglomerate, and mud–sandstone. The deep-layer module applies an optimized feature set to further classify thinly interbedded sandstone and mudstone. Geological prior knowledge, such as stratigraphic attitudes, is incorporated to spatially constrain and post-process the classification results, enhancing their geological plausibility. The method was tested on a TLS dataset from the Yueyawan outcrop of the Qingshuihe Formation, located on the southern margin of the Junggar Basin in China. Results demonstrate that the integration of spectral and geometric features significantly improves classification performance, with the Macro F1-score increasing from 0.65 (with single-feature input) to 0.82. Further, post-processing with stratigraphic constraints boosts the overall classification accuracy to 93%, outperforming SVM (59.2%), XGBoost (67.8%), and PointNet (75.3%). 
These findings demonstrate that integrating multi-source features and geological prior constraints effectively addresses the challenges of lithological identification in complex outcrops, providing a novel approach for high-precision geological modeling and exploration. Full article
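The Gini index used for feature selection in the shallow random-forest layer is based on Gini impurity, which a split tries to reduce. A minimal generic sketch (illustrative only, not the paper's SG-RFGeo implementation; class names are hypothetical):

```python
# Gini impurity of a set of class labels: 0 for a pure node,
# approaching 1 - 1/k as k classes become evenly mixed.
from collections import Counter

def gini_impurity(labels):
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())
```

Features whose splits yield the largest impurity decrease (e.g., separating vegetation from conglomerate and mud-sandstone) rank highest and are retained for the coarse classification stage.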

41 pages, 4123 KiB  
Article
Optimal D-STATCOM Operation in Power Distribution Systems to Minimize Energy Losses and CO2 Emissions: A Master–Slave Methodology Based on Metaheuristic Techniques
by Rubén Iván Bolaños, Cristopher Enrique Torres-Mancilla, Luis Fernando Grisales-Noreña, Oscar Danilo Montoya and Jesús C. Hernández
Sci 2025, 7(3), 98; https://doi.org/10.3390/sci7030098 - 11 Jul 2025
Viewed by 374
Abstract
In this paper, we address the problem of intelligent operation of Distribution Static Synchronous Compensators (D-STATCOMs) in power distribution systems to reduce energy losses and CO2 emissions while improving system operating conditions. In addition, we consider the entire set of constraints inherent in the operation of such networks in an environment with D-STATCOMs. To solve such a problem, we used three master–slave methodologies based on sequential programming methods. In the proposed methodologies, the master stage solves the problem of intelligent D-STATCOM operation using the continuous versions of the Monte Carlo (MC) method, the population-based genetic algorithm (PGA), and the Particle Swarm Optimizer (PSO). The slave stage, for its part, evaluates the solutions proposed by the algorithms to determine their impact on the objective functions and constraints representing the problem. This is accomplished by running an Hourly Power Flow (HPF) based on the method of successive approximations. As test scenarios, we employed the 33- and 69-node radial test systems, considering data on power demand and CO2 emissions reported for the city of Medellín in Colombia (as documented in the literature). Furthermore, a test system was adapted in this work to the demand characteristics of a feeder located in the city of Talca in Chile. This adaptation involved adjusting the conductors and voltage limits to include a test system with variations in power demand due to seasonal changes throughout the year (spring, winter, autumn, and summer). Demand curves were obtained by analyzing data reported by the local network operator, i.e., Compañía General de Electricidad. To assess the robustness and performance of the proposed optimization approach, each scenario was simulated 100 times. The evaluation metrics included average solution quality, standard deviation, and repeatability. Across all scenarios, the PGA consistently outperformed the other methods tested. 
Specifically, in the 33-node system, the PGA achieved a 24.646% reduction in energy losses and a 0.9109% reduction in CO2 emissions compared to the base case. In the 69-node system, reductions reached 26.0823% in energy losses and 0.9784% in CO2 emissions compared to the base case. Notably, in the case of the Talca feeder—particularly during summer, the most demanding season—the PGA yielded the most significant improvements, reducing energy losses by 33.4902% and CO2 emissions by 1.2805%. Additionally, an uncertainty analysis was conducted to validate the effectiveness and robustness of the proposed optimization methodology under realistic operating variability. A total of 100 randomized demand profiles for both active and reactive power were evaluated. The results demonstrated the scalability and consistent performance of the proposed strategy, confirming its effectiveness under diverse and practical operating conditions. Full article
(This article belongs to the Section Computer Sciences, Mathematics and AI)

13 pages, 1574 KiB  
Article
Multi-Stage Cascaded Deep Learning-Based Model for Acute Aortic Syndrome Detection: A Multisite Validation Study
by Joseph Chang, Kuan-Jung Lee, Ti-Hao Wang and Chung-Ming Chen
J. Clin. Med. 2025, 14(13), 4797; https://doi.org/10.3390/jcm14134797 - 7 Jul 2025
Viewed by 490
Abstract
Background: Acute Aortic Syndrome (AAS), encompassing aortic dissection (AD), intramural hematoma (IMH), and penetrating atherosclerotic ulcer (PAU), presents diagnostic challenges due to its varied manifestations and the critical need for rapid assessment. Methods: We developed a multi-stage deep learning model trained on chest computed tomography angiography (CTA) scans. The model utilizes a U-Net architecture for aortic segmentation, followed by a cascaded classification approach for detecting AD and IMH, and a multiscale CNN for identifying PAU. External validation was conducted on 260 anonymized CTA scans from 14 U.S. clinical sites, encompassing data from four different CT manufacturers. Performance metrics, including sensitivity, specificity, and area under the receiver operating characteristic curve (AUC), were calculated with 95% confidence intervals (CIs) using Wilson’s method. Model performance was compared against predefined benchmarks. Results: The model achieved a sensitivity of 0.94 (95% CI: 0.88–0.97), specificity of 0.93 (95% CI: 0.89–0.97), and an AUC of 0.96 (95% CI: 0.94–0.98) for overall AAS detection, with p-values < 0.001 when compared to the 0.80 benchmark. Subgroup analyses demonstrated consistent performance across different patient demographics, CT manufacturers, slice thicknesses, and anatomical locations. Conclusions: This deep learning model effectively detects the full spectrum of AAS across diverse populations and imaging platforms, suggesting its potential utility in clinical settings to enable faster triage and expedite patient management. Full article
(This article belongs to the Section Nuclear Medicine & Radiology)
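Wilson's method, used above for the 95% CIs on sensitivity and specificity, is a standard interval for a binomial proportion. A generic implementation sketch (not the study's code; counts below are hypothetical):

```python
# Wilson score confidence interval for a binomial proportion
# (successes out of n trials), at critical value z (1.96 for 95%).
import math

def wilson_interval(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return center - half, center + half
```

For example, 94 true positives out of a hypothetical 100 positive cases would give an interval of approximately (0.875, 0.972); unlike the naive normal interval, Wilson's interval stays inside [0, 1] even for proportions near the boundaries.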

14 pages, 2070 KiB  
Article
Comparative Analysis of Machine/Deep Learning Models for Single-Step and Multi-Step Forecasting in River Water Quality Time Series
by Hongzhe Fang, Tianhong Li and Huiting Xian
Water 2025, 17(13), 1866; https://doi.org/10.3390/w17131866 - 23 Jun 2025
Viewed by 562
Abstract
There is a lack of a systematic comparison framework that can assess models in both single-step and multi-step forecasting situations while balancing accuracy, training efficiency, and prediction horizon. This study aims to evaluate the predictive capabilities of machine learning and deep learning models in water quality time series forecasting. It used 22 months of data, recorded at a 4 h interval, from two monitoring stations located in a tributary of the Pearl River. Seven models, specifically Support Vector Regression (SVR), XGBoost, K-Nearest Neighbors (KNN), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM) Network, Gated Recurrent Unit (GRU), and PatchTST, were employed in this study. In single-step forecasting, LSTM Network achieved superior accuracy for a univariate feature set and attained an overall 22.0% (Welch’s t-test, p = 3.03 × 10⁻⁷) reduction in Mean Squared Error (MSE) compared with the machine learning models (SVR, XGBoost, KNN), while RNN demonstrated significantly reduced training time. For a multivariate feature set, the deep learning models exhibited comparable accuracy but with no model achieving a significant increase in accuracy compared to the univariate scenario. The KNN model underperformed across error evaluation metrics, with the lowest accuracy, and the XGBoost model exhibited the highest computational complexity. In multi-step forecasting, the direct multi-step PatchTST model outperformed the iterated multi-step models (RNN, LSTM, GRU), with a reduced time-delay effect and a slower decrease in accuracy with increasing prediction length, but it still required specific adjustments to be better suited for the task of river water quality time series forecasting. The findings provide actionable guidelines for model selection, balancing predictive accuracy, training efficiency, and forecasting horizon requirements in environmental time series analysis. Full article
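The Welch's t-test cited for the MSE comparison does not assume equal variances between the two model groups. A from-scratch sketch of the statistic and its (Welch-Satterthwaite) degrees of freedom, as a hypothetical helper rather than the study's code:

```python
# Welch's t statistic and degrees of freedom for two independent
# samples with possibly unequal variances (e.g., per-run MSE values).
import math

def welch_t(a, b):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

The p-value then comes from the t distribution with the (generally non-integer) df returned here, which is what allows a significance claim like p = 3.03 × 10⁻⁷.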

21 pages, 16197 KiB  
Article
SGDO-SLAM: A Semantic RGB-D SLAM System with Coarse-to-Fine Dynamic Rejection and Static Weighted Optimization
by Qiming Hu, Shuwen Wang, Nanxing Chen, Wei Li, Jiayu Yuan, Enhui Zheng, Guirong Wang and Weimin Chen
Sensors 2025, 25(12), 3734; https://doi.org/10.3390/s25123734 - 14 Jun 2025
Viewed by 467
Abstract
Vision sensor-based simultaneous localization and mapping (SLAM) systems are essential for mobile robots to locate and generate spatial models of their surroundings. However, the majority of visual SLAM systems assume static settings, leading to significant accuracy degradation in dynamic scenes. We present SGDO-SLAM, a real-time RGB-D semantic-aware SLAM framework, building upon ORB-SLAM2 to address non-static environments. Firstly, a multi-constraint dynamic rejection method from coarse to fine is proposed. The method starts with coarse rejection by combining semantic and geometric information, followed by detailed rejection using depth information, where static quality weights are quantified based on depth consistency constraints. The method achieves accurate dynamic scene perceptions and improves the accuracy of the system’s positioning. Then, a position optimization method driven by static quality weights is proposed, which prioritizes high-quality static features to enhance pose estimation. Finally, a visualized dense point cloud map is established. We performed experimental evaluations on the TUM RGB-D dataset and the Bonn dataset. The experimental results demonstrate that SGDO-SLAM reduces the absolute trajectory error performance metrics by 95% compared to the ORB-SLAM2 algorithm, while maintaining real-time efficiency and achieving state-of-the-art accuracy in dynamic scenarios. Full article
(This article belongs to the Section Navigation and Positioning)
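The absolute trajectory error behind the 95% reduction claim is conventionally the RMSE of translational differences between estimated and ground-truth poses. A minimal sketch, assuming trajectories already time-aligned and expressed as 3-D positions (not the SGDO-SLAM evaluation code):

```python
# Absolute trajectory error (ATE) as translational RMSE over
# corresponding poses of two aligned trajectories.
import math

def absolute_trajectory_error(estimated, ground_truth):
    squared = [
        sum((e - g) ** 2 for e, g in zip(pose_e, pose_g))
        for pose_e, pose_g in zip(estimated, ground_truth)
    ]
    return math.sqrt(sum(squared) / len(squared))
```

In practice (e.g., on the TUM RGB-D benchmark), the estimated trajectory is first associated by timestamp and rigidly aligned to ground truth before this RMSE is computed.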

20 pages, 1236 KiB  
Article
Comparative Analysis of Dedicated and Randomized Storage Policies in Warehouse Efficiency Optimization
by Rana M. Saleh and Tamer F. Abdelmaguid
Eng 2025, 6(6), 119; https://doi.org/10.3390/eng6060119 - 1 Jun 2025
Viewed by 1061
Abstract
This paper examines the impact of two storage policies—dedicated storage (D-SLAP) and randomized storage (R-SLAP)—on warehouse operational efficiency. It integrates the Storage Location Assignment Problem (SLAP) with the unrelated parallel machine scheduling problem (UPMSP), which represents the scheduling of the material handling equipment (MHE). This integration is intended to elucidate the interplay between storage strategies and scheduling performance. The considered evaluation metrics include transportation cost, average waiting time, and total tardiness, while accounting for product arrival and demand schedules, precedence constraints, and transportation expenses. Additionally, considerations such as MHE eligibility, resource requirements, and available storage locations are incorporated into the analysis. Given the complexity of the combined problem, a tailored Non-dominated Sorting Genetic Algorithm (NSGA-II) was developed to assess the performance of the two storage policies across various randomly generated test instances of differing sizes. Parameter tuning for the NSGA-II was conducted using the Taguchi method to identify optimal settings. Experimental and statistical analyses reveal that, for small-size instances, both policies exhibit comparable performance in terms of transportation cost and total tardiness, with R-SLAP demonstrating superior performance in reducing average waiting time. Conversely, results from large-size instances indicate that D-SLAP surpasses R-SLAP in optimizing waiting time and tardiness objectives, while R-SLAP achieves lower transportation cost. Full article
(This article belongs to the Special Issue Women in Engineering)

15 pages, 363 KiB  
Article
Promoting Mental Health in Adolescents Through Physical Education: Measuring Life Satisfaction for Comprehensive Development
by Santiago Gómez-Paniagua, Antonio Castillo-Paredes, Pedro R. Olivares and Jorge Rojo-Ramos
Children 2025, 12(5), 658; https://doi.org/10.3390/children12050658 - 21 May 2025
Viewed by 473
Abstract
Background: Life satisfaction serves as a preventive agent against various emotional, cognitive, and behavioral challenges, making it a crucial cognitive indicator of subjective well-being, particularly during adolescence. Accurately assessing life satisfaction is essential for understanding and promoting adolescent mental health, especially in applied settings such as physical education, which plays a key role in fostering psychological well-being and positive youth development. However, additional investigation is needed to confirm the tools used for this purpose. This study aimed to analyze the psychometric properties, metric invariance, and temporal stability of the Satisfaction with Life Scale (SWLS) in adolescents from a region in southeastern Spain. Thus, the present study sought to answer the following research questions: (1) Does the SWLS demonstrate adequate psychometric properties in an adolescent population? (2) Is the SWLS invariant across gender and residential environments? (3) Does the SWLS show adequate stability over time? Methods: A sample of 400 students was assessed using exploratory and confirmatory factor analyses, multigroup comparisons, and test–retest techniques. Results: The results showed significant differences in scale scores in the sex and demographic location variables. Also, a robust unifactorial model with five items demonstrated good performance in terms of goodness of fit and internal consistency. Furthermore, full metric invariance was observed across genders, while configural invariance was supported for residential environment. Concurrent validity analyses revealed significant associations with another unidimensional well-being measure, and temporal stability was confirmed through the intraclass correlation coefficient. Conclusions: The findings support the SWLS as a potentially valid, reliable, and time-effective tool for assessing adolescent life satisfaction. 
Its strong psychometric properties make it highly suitable for use in mental health research, longitudinal monitoring, and large-scale studies. Moreover, its ease of administration allows its integration into educational, clinical, community-based, and physical education contexts, offering insightful information for the creation of long-lasting mental health regulations and preventive measures meant to improve the well-being of adolescents. Notwithstanding these encouraging results, some restrictions must be noted. The sample was restricted to a single geographic area, and contextual or cultural factors may have an impact on how satisfied people are with their lives. Furthermore, response biases could have been introduced by using self-report measures. Full article

24 pages, 3707 KiB  
Article
Comparison of a Continuous Forest Inventory to an ALS-Derived Digital Inventory in Washington State
by Thomas Montzka, Steve Scharosch, Michael Huebschmann, Mark V. Corrao, Douglas D. Hardman, Scott W. Rainsford, Alistair M. S. Smith and The Confederated Tribes and Bands of the Yakama Nation
Remote Sens. 2025, 17(10), 1761; https://doi.org/10.3390/rs17101761 - 18 May 2025
Viewed by 527
Abstract
The monitoring and assessment of forest conditions has traditionally relied on continuous forest inventory (CFI) plots, where all plot trees are regularly measured at discrete locations, then plots are grouped as representative samples of forested areas via stand-based inventory expectations. Remote sensing data acquisitions, such as airborne laser scanning (ALS), are becoming more widely applied to operational forestry to derive similar stand-based inventories. Although ALS systems are widely applied to assess forest metrics associated with crowns and canopies, limited studies have compared ALS-derived digital inventories to CFI datasets. In this study, we conducted an analysis of over 1000 CFI plot locations on ~611,000 acres and compared it to a single-tree derived inventory. Inventory metrics from CFI data were forward modeled from 2016 to 2019 using the USDA Forest Service Forest Vegetation Simulator (FVS) to produce estimates of trees per acre (TPA), basal area (BA) per tree or per plot, basal area per acre (BAA), and volume per acre (VPA) and compared to the ALS-derived Digital Inventory® (DI) of 2019. The CFI data provided greater on-plot tree counts, BA, and volume compared to the DI when limited to trees ≥5 inches DBH. On-plot differences were less significant for taller trees and increasingly diverged for shorter trees (<20 feet tall) known to be less detectable by ALS. The CFI volume was found to be 44% higher than the ALS-derived DI suggesting mean volume per acre as derived from plot sampling methods may not provide accurate results when expanded across the landscape given variable forest conditions not captured during sampling. These results provide support that when used together, CFI and DI datasets represent a powerful set of tools within the forest management toolkit. Full article
(This article belongs to the Special Issue Remote Sensing and Lidar Data for Forest Monitoring)

22 pages, 8121 KiB  
Article
Field Investigation of Thermal Comfort and Indoor Air Quality Analysis Using a Multi-Zone Approach in a Tropical Hypermarket
by Kathleen Jo Lin Teh, Halim Razali and Chin Haw Lim
Buildings 2025, 15(10), 1677; https://doi.org/10.3390/buildings15101677 - 16 May 2025
Cited by 1 | Viewed by 587
Abstract
Indoor environmental quality (IEQ), encompassing thermal comfort and indoor air quality (IAQ), plays a crucial role in occupant well-being and operational performance. Although widely studied individually, integrating thermal comfort and IAQ assessments remains limited, particularly in large-scale tropical commercial settings. Hypermarkets, characterised by spatial heterogeneity and fluctuating occupancy, present challenges that conventional HVAC systems often fail to manage effectively. This study investigates thermal comfort and IAQ variability in a hypermarket located in Gombak, Malaysia, under tropical rainforest conditions based on the Köppen–Geiger climate classification, a widely used system for classifying the world’s climates. Environmental parameters were monitored using a network of IoT-enabled sensors across five functional zones during actual operations. Thermal indices (PMV, PPD) and IAQ metrics (CO2, TVOC, PM2.5, PM10) were analysed and benchmarked against ASHRAE 55 standards to assess spatial variations and occupant exposure. Results revealed substantial heterogeneity, with the cafeteria zone recording critical discomfort (PPD 93%, CO2 900 ppm, TVOC 1500 ppb) due to localised heat and insufficient ventilation. Meanwhile, the intermediate retail zone maintained near-optimal conditions (PPD 12%). Although findings are specific to this hypermarket, the integrated zone-based monitoring provides empirical insights that support the enhancement of IEQ assessment approaches in tropical commercial spaces. By characterising zone-specific thermal comfort and IAQ profiles, this study contributes valuable knowledge toward developing adaptive, occupant-centred HVAC strategies for complex retail environments in hot-humid climates. Full article
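The PPD figures above follow from PMV via the standard relation in ISO 7730; the sketch below shows that general formula only, not this study's sensor pipeline:

```python
# Predicted Percentage of Dissatisfied (%) as a function of the
# Predicted Mean Vote index, per the ISO 7730 relation.
import math

def ppd_from_pmv(pmv):
    return 100.0 - 95.0 * math.exp(-0.03353 * pmv ** 4 - 0.2179 * pmv ** 2)
```

A thermally neutral PMV of 0 yields the floor value of 5% dissatisfied, so the cafeteria zone's reported PPD of 93% corresponds to a PMV far from neutral, consistent with the localised heat and poor ventilation described above.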

28 pages, 23164 KiB  
Article
Device-Driven Service Allocation in Mobile Edge Computing with Location Prediction
by Qian Zeng, Xiaobo Li, Yixuan Chen, Minghao Yang, Xingbang Liu, Yuetian Liu and Shiwei Xiu
Sensors 2025, 25(10), 3025; https://doi.org/10.3390/s25103025 - 11 May 2025
Viewed by 532
Abstract
With the rapid deployment of edge base stations and the widespread application of 5G technology, Mobile Edge Computing (MEC) has gradually transitioned from a theoretical concept to practical implementation, playing a key role in emerging human-machine interactions and innovative mobile applications. In the MEC environment, efficiently allocating services, effectively utilizing edge device resources, and ensuring timely service responses have become critical research topics. Existing studies often treat MEC service allocation as an offline strategy, where the real-time location of users is used as input, and static optimization is applied. However, this approach overlooks dynamic factors such as user mobility. To address this limitation, this paper constructs a model based on constraints, optimization objectives, and server connection methods, determines experimental parameters and evaluation metrics, and sets up an experimental framework. We propose an Edge Location Prediction Model (ELPM) suitable for the MEC scenario, which integrates Spatial-Temporal Graph Neural Networks and attention mechanisms. By leveraging attention parameters, ELPM acquires spatio-temporal adaptive weights, enabling accurate location predictions. We also design an improved service allocation strategy, MESDA, based on the Gray Wolf Optimization (GWO) algorithm. MESDA dynamically adjusts its exploration and exploitation components, and introduces a random factor to enhance the algorithm’s ability to determine the direction during later stages. To validate the effectiveness of the proposed methods, we conduct multiple controlled experiments focusing on both location prediction models and service allocation algorithms. 
The results show that, compared to the baseline methods, our approach achieves improvements of 2.56%, 5.29%, and 2.16% in terms of the average user connection to edge servers, average service deployment cost, and average service allocation execution time, respectively, demonstrating the superiority and feasibility of the proposed methods. Full article
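The abstract does not give MESDA's exact update rules; a minimal, stdlib-only sketch of the underlying Gray Wolf Optimization loop might look like the following, where the nonlinear decay of the exploration coefficient `a` and the late-stage random jitter are hypothetical stand-ins for MESDA's adaptive exploration/exploitation adjustment and random factor, and the sphere function stands in for the real service-deployment cost:

```python
import random

def gwo_allocate(cost, dim, n_wolves=20, iters=100, lo=-10.0, hi=10.0, seed=42):
    """Minimize `cost` over [lo, hi]^dim with a basic Gray Wolf Optimizer.

    The nonlinear decay of `a` and the late-stage jitter below are
    illustrative stand-ins for MESDA's modifications; the paper's exact
    formulas are not given in the abstract.
    """
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]

    for t in range(iters):
        # Exploration coefficient: decays from 2 toward 0 (here nonlinearly).
        a = 2.0 * (1.0 - (t / iters) ** 2)
        alpha, beta, delta = sorted(wolves, key=cost)[:3]  # three leaders
        for i, x in enumerate(wolves):
            new_x = []
            for d in range(dim):
                pulls = []
                for leader in (alpha, beta, delta):
                    A = a * (2.0 * rng.random() - 1.0)
                    C = 2.0 * rng.random()
                    dist = abs(C * leader[d] - x[d])
                    pulls.append(leader[d] - A * dist)
                v = sum(pulls) / 3.0
                # Hypothetical late-stage random factor to keep the search
                # direction from collapsing prematurely.
                if t > iters // 2:
                    v += 0.1 * a * (2.0 * rng.random() - 1.0)
                new_x.append(min(hi, max(lo, v)))
            wolves[i] = new_x

    best = min(wolves, key=cost)
    return best, cost(best)

# Toy deployment-cost surface standing in for the real allocation objective.
sphere = lambda x: sum(v * v for v in x)
best, best_cost = gwo_allocate(sphere, dim=2)
```

The three leaders (alpha, beta, delta) pull every wolf toward the current best solutions, while `a` shrinks over time to shift the swarm from exploration to exploitation.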
(This article belongs to the Section Sensor Networks)

41 pages, 6895 KiB  
Article
IceBench: A Benchmark for Deep-Learning-Based Sea-Ice Type Classification
by Samira Alkaee Taleghan, Andrew P. Barrett, Walter N. Meier and Farnoush Banaei-Kashani
Remote Sens. 2025, 17(9), 1646; https://doi.org/10.3390/rs17091646 - 6 May 2025
Abstract
Sea ice plays a critical role in the global climate system and maritime operations, making timely and accurate classification essential. However, traditional manual methods are time-consuming, costly, and carry inherent biases. Automating sea-ice type classification addresses these challenges by enabling faster, more consistent, and scalable analysis. While both traditional and deep-learning approaches have been explored, deep-learning models offer a promising direction for improving efficiency and consistency in sea-ice classification. However, the absence of a standardized benchmark and comparative study prevents a clear consensus on the best-performing models. To bridge this gap, we introduce IceBench, a comprehensive benchmarking framework for sea-ice type classification. Our key contributions are threefold. First, we establish the IceBench benchmarking framework, which leverages the existing AI4Arctic Sea Ice Challenge Dataset as a standardized dataset, incorporates a comprehensive set of evaluation metrics, and includes representative models from the entire spectrum of sea-ice type-classification methods, categorized into two distinct groups: pixel-based and patch-based classification methods. IceBench is open-source and allows for convenient integration and evaluation of other sea-ice type-classification methods, facilitating comparative evaluation of new methods and improving reproducibility in the field. Second, we conduct an in-depth comparative study of representative models to assess their strengths and limitations, providing insights for both practitioners and researchers. Third, we leverage IceBench for systematic experiments addressing key research questions on model transferability across seasons (time) and locations (space), data downsampling, and preprocessing strategies.
By identifying the best-performing models under different conditions, IceBench serves as a valuable reference for future research and a robust benchmarking framework for the field. Full article
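IceBench's core idea, evaluating heterogeneous models against one standardized dataset with one shared metric set, can be sketched as a tiny harness. The model and metric names below are illustrative, not IceBench's actual API:

```python
from typing import Callable, Dict, List

def accuracy(y_true: List[int], y_pred: List[int]) -> float:
    """Fraction of pixels/patches labeled with the correct ice type."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true: List[int], y_pred: List[int]) -> float:
    """Unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

METRICS: Dict[str, Callable] = {"accuracy": accuracy, "macro_f1": macro_f1}

def benchmark(models: Dict[str, Callable], X, y_true):
    """Run every registered model on the same data and metric set."""
    return {name: {m: fn(y_true, model(X)) for m, fn in METRICS.items()}
            for name, model in models.items()}

# Illustrative stand-ins for pixel-based vs. patch-based classifiers
# over 3 hypothetical ice-type classes.
models = {
    "pixel_based": lambda X: [x % 3 for x in X],
    "patch_based": lambda X: [(x + 1) % 3 for x in X],
}
report = benchmark(models, X=list(range(12)), y_true=[x % 3 for x in range(12)])
```

Registering a new method is then a one-line dictionary entry, which is the property that makes side-by-side comparison and reproducibility cheap.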

18 pages, 10232 KiB  
Article
Evaluation of Landscape Soil Quality in Different Types of Pisha Sandstone Areas on Loess Plateau
by Lei Huang and Liangyi Rao
Forests 2025, 16(4), 699; https://doi.org/10.3390/f16040699 - 18 Apr 2025
Abstract
Severe soil erosion and land productivity degradation caused by inadequate vegetation cover pose significant challenges to regional ecological protection and sustainable development. To assess changes and variations in soil quality, three sample areas with distinct texture characteristics were selected from the Pisha sandstone region in the northeastern Loess Plateau. The total data set (TDS) was determined through sampling experiments, and the minimum data set (MDS) was established using principal component analysis. A Random Forest (RF) machine learning model was applied to predict the soil quality distribution. The prediction indices were derived from soil analysis dimensions, the mean weight diameter measured via wet sieving, and the soil enrichment ratio obtained from slope erosion experiments conducted at the corresponding sampling points. During RF modeling, 80% of the soil quality index (SQI) data, calculated using the TDS and MDS evaluation methods, were allocated for model training. The results indicated that pH, ammonia nitrogen, bulk density, silt content, clay content, soil water content, hygroscopic water content, total phosphorus, soluble calcium, and actinomycetes were the optimal predictors of SQI. Furthermore, the RF model demonstrated superior performance in predicting the regional distribution of SQI (R2 = 0.76–0.78, RMSE = 0.03–0.06, MAE = 0.04–0.09). This study confirms the reliability of RF in simulating SQI within the study area and highlights that, in regions undergoing extensive vegetation restoration and with limited sampling conditions, experimental measurements of soil particle and sediment parameters provide an effective approach to evaluating SQI. Full article
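The reported fit statistics (R2, RMSE, MAE) follow from standard definitions applied to the held-out predictions; a stdlib-only sketch, with toy SQI values standing in for the real 20% test split, might look like:

```python
import math

def regression_metrics(y_true, y_pred):
    """R^2, RMSE, and MAE -- the three statistics used to evaluate the RF model."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)             # total sum of squares
    return {
        "R2": 1.0 - ss_res / ss_tot,
        "RMSE": math.sqrt(ss_res / n),
        "MAE": sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n,
    }

# Hypothetical SQI values in [0, 1] standing in for the held-out test points.
sqi_true = [0.42, 0.55, 0.61, 0.48, 0.70, 0.33]
sqi_pred = [0.40, 0.57, 0.58, 0.50, 0.66, 0.37]
m = regression_metrics(sqi_true, sqi_pred)
```

Note that by these definitions RMSE is always at least MAE on the same data, so a reported RMSE below MAE (as in the ranges above) suggests the two were computed on different splits or scales.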
(This article belongs to the Section Forest Soil)

24 pages, 6012 KiB  
Article
Using Baited Remote Underwater Video Surveys (BRUVs) to Analyze the Structure of Predators in Guanahacabibes National Park, Cuba
by Dorka Cobián-Rojas, Jorge Angulo-Valdés, Pedro Pablo Chevalier-Monteagudo, Lázaro Valentín García-López, Susana Perera-Valderrama, Joán Irán Hernández-Albernas and Hansel Caballero-Aragón
Fishes 2025, 10(4), 169; https://doi.org/10.3390/fishes10040169 - 10 Apr 2025
Abstract
The reef fish communities of Guanahacabibes National Park have been studied for 20 years using various methodologies that have allowed us to understand aspects of their diversity and structure. However, due to gaps in information about the abundance and distribution of mesopredators (large fish and sharks), a new study was conducted in 2017 to determine their structure, explore the influence of different factors on their spatial variability, and evaluate their behavior. To achieve this, the Baited Remote Underwater Video Surveys (BRUVs) methodology was successfully applied, locating a single set of BRUVs at each of 90 sites distributed across 9 sectors of the park's functional zoning. Variability in mesopredator metrics and those of their potential prey was assessed through a PERMANOVA analysis; a distance-based linear model (DISTLM) was used to explore the relationship between mesopredator abundance and biological, abiotic, and condition variables; and animal behavior was classified as incidental, cautious, or aggressive. A total of 64 fish species were identified, 7 of which were mesopredators, and 3 were sharks. Distribution and abundance were uneven among sectors, with the most abundant mesopredators being Carcharhinus perezi, Sphyraena barracuda, and Mycteroperca bonaci. Mesopredator abundance was more closely related to zone-use condition and its regulations than to biological and abiotic variables. Sharks were more abundant in strictly protected areas, which coincided with relatively murky waters and stronger currents. More than 50% of the observed sharks displayed exploratory and aggressive behavior towards the bait basket. The analyzed metrics validate the effectiveness of the protected area's management and suggest the presence of healthy and resilient mesopredator fish communities. Full article
(This article belongs to the Special Issue Movement Ecology and Conservation of Large Marine Fishes (and Sharks))
