Search Results (2,315)

Search Parameters:
Keywords = iterative enhancement

14 pages, 870 KB  
Article
A Matrix-Based Analytical Approach for Reliability Assessment of Mesh Distribution Networks
by Shuitian Li, Lixiang Lin, Ya Chen, Chang Xu, Chenxi Zhang, Yuanliang Zhang, Fengzhang Luo and Jiacheng Fo
Energies 2025, 18(20), 5508; https://doi.org/10.3390/en18205508 - 18 Oct 2025
Abstract
To address the limitations of conventional reliability assessment methods in handling mesh distribution networks with flexible operation characteristics and complex topologies, namely their poor adaptability and low computational efficiency, this paper proposes a matrix-based analytical approach for reliability assessment of mesh distribution networks. First, a network configuration centered on the soft open points (SOP) is established. Through multi-feeder interconnection and flexible power flow control, a topology capable of fast fault transfer and service restoration is formed. Second, based on the restoration modes of load nodes under fault scenarios, three types of fault incidence matrices (FIM) are proposed. By means of matrix algebra, explicit analytical expressions are derived for the relationships among equipment failure probability, duration, impact range, and reliability indices. This overcomes the drawbacks of iterative search in conventional reliability assessments, significantly improving efficiency while ensuring accuracy. Finally, a modified 44-bus Taiwan test system is used for reliability assessment to verify the effectiveness of the proposed method. The results demonstrate that the proposed matrix-based analytical reliability assessment method enables explicit analytical calculation of both system-level and load-level reliability indices in mesh distribution networks, providing effective support for planning and operational optimization to enhance reliability.
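The fault-incidence-matrix idea in the abstract above — obtaining reliability indices from matrix products instead of a per-fault network search — can be illustrated with a toy calculation. The 3-component, 2-load-point network, failure rates, repair times, and customer counts below are hypothetical illustration data, not taken from the paper:

```python
# Toy sketch of a matrix-style analytical reliability calculation, loosely
# inspired by the fault-incidence-matrix (FIM) idea. All data are made up.

def reliability_indices(fim, failure_rates, repair_hours, customers):
    """fim[i][j] = 1 if failure of component i interrupts load point j."""
    n_loads = len(fim[0])
    # Interruption frequency per load point: column sums weighted by lambda_i.
    freq = [sum(fim[i][j] * failure_rates[i] for i in range(len(fim)))
            for j in range(n_loads)]
    # Annual outage duration per load point: weighted by lambda_i * r_i.
    dur = [sum(fim[i][j] * failure_rates[i] * repair_hours[i]
               for i in range(len(fim))) for j in range(n_loads)]
    total = sum(customers)
    saifi = sum(f * n for f, n in zip(freq, customers)) / total
    saidi = sum(d * n for d, n in zip(dur, customers)) / total
    return saifi, saidi

fim = [[1, 0],   # component 0 only affects load point 0
       [0, 1],   # component 1 only affects load point 1
       [1, 1]]   # component 2 (a shared feeder) affects both
saifi, saidi = reliability_indices(fim, [0.1, 0.2, 0.05], [4, 2, 8], [100, 300])
```

Once the incidence matrix is built, every index is a closed-form weighted sum — there is no iterative fault-by-fault search, which is the efficiency argument the abstract makes.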

21 pages, 6547 KB  
Article
A High-Resolution Sea Ice Concentration Retrieval from Ice-WaterNet Using Sentinel-1 SAR Imagery in Fram Strait, Arctic
by Tingting Zhu, Xiangbin Cui and Yu Zhang
Remote Sens. 2025, 17(20), 3475; https://doi.org/10.3390/rs17203475 - 17 Oct 2025
Abstract
High spatial resolution sea ice concentration (SIC) is crucial for global climate and marine activity. However, retrieving high spatial resolution SIC from passive microwave sensors is challenging due to the trade-off between spatial resolution and atmospheric contamination. Our study develops the Ice-WaterNet framework, a novel superpixel-based deep learning model that integrates Conditional Random Fields (CRF) with a dual-attention U-Net to enhance ice–water classification in Synthetic Aperture Radar (SAR) imagery. The Ice-WaterNet model has been extensively tested on 2735 Sentinel-1 dual-polarized SAR images from 2021 to 2023, covering both winter and summer seasons in the Fram Strait. To tackle the complex surface features during the melt season, wind-roughened open water, and varying ice floe sizes, a superpixel strategy is employed to efficiently reduce classification uncertainty. Uncertain superpixels identified by CRF are iteratively refined using the U-Net attention mechanism. Experimental results demonstrate that Ice-WaterNet achieves significant improvements in classification accuracy, outperforming CRF and U-Net by 3.375% in Intersection over Union (IoU) and 3.09% in F1-score during the melt season, and by 1.96% in IoU and 1.75% in F1-score during the freeze season. The derived high-resolution SIC products, updated every two days, were evaluated against Met Norway ice charts and compared with ASI from AMSR-2 and SSM/I, showing a substantial reduction in misclassification in marginal ice zones, particularly under melting conditions. These findings underscore the potential of Ice-WaterNet in supporting precise sea ice monitoring and climate change research.

25 pages, 2343 KB  
Article
A Multi-Objective Simulation–Optimization Framework for Emergency Department Efficiency Using RSM and Goal Programming
by Felipe Baesler, Oscar Cornejo, Carlos Obreque, Eric Forcael and Rudy Carrasco
Systems 2025, 13(10), 912; https://doi.org/10.3390/systems13100912 - 17 Oct 2025
Abstract
This study presents a novel approach that integrates Discrete Event Simulation (DES) with Design of Experiments (DOE) techniques, framed within a stochastic optimization context and guided by a multi-objective goal programming methodology. The focus is on enhancing the operational efficiency of an emergency department (ED), illustrated through a real-world case study conducted in a Chilean hospital. The methodology employs Response Surface Methodology (RSM) to explore and optimize the impact of four critical resources: physicians, nurses, rooms, and radiologists. The response variable, formulated as a goal programming function, captures the aggregated patient flow time across four representative care tracks. The optimization process proceeded iteratively: early stages relied on linear approximations to identify promising improvement directions, while later phases applied a central composite design to model nonlinear interactions through a quadratic response surface. This progression revealed complex interdependencies among resources, ultimately leading to a local optimum. The proposed approach achieved a 50% reduction in the aggregated objective function and improved individual patient flow times by 7% to 26%. Compared to traditional metaheuristic methods, this simulation–optimization framework offers a computationally efficient alternative, particularly valuable when the simulation model is complex and resource-intensive. These findings underscore the value of combining simulation, RSM, and multi-objective optimization to support data-driven decision-making in complex healthcare settings. The methodology not only improves ED performance but also offers a flexible and scalable framework adaptable to other clinical environments seeking resource optimization and operational improvement.
(This article belongs to the Section Systems Engineering)
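The goal-programming response variable described above — an aggregate over patient flow times across care tracks — can be sketched as a weighted sum of over-goal deviations. The flow times, goals, and weights below are hypothetical illustration values, not the study's calibrated ones:

```python
# Goal-programming-style objective: penalize only positive deviations (d+)
# of each care track's flow time above its goal. All numbers are made up.

def goal_objective(flow_times, goals, weights):
    return sum(w * max(t - g, 0.0)
               for t, g, w in zip(flow_times, goals, weights))

# Four care tracks; the third is weighted double as a hypothetical priority.
obj = goal_objective([120, 95, 210, 60], goals=[100, 100, 180, 60],
                     weights=[1.0, 1.0, 2.0, 1.0])
```

In an RSM loop this scalar would be the response fitted against the four resource levels, so the multi-objective problem reduces to minimizing a single surface.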

39 pages, 2106 KB  
Article
Exploring the Use of AI to Optimize the Evaluation of a Faculty Training Program
by Alexandra Míguez-Souto, María Ángeles Gutiérrez García and José Luis Martín-Núñez
Educ. Sci. 2025, 15(10), 1394; https://doi.org/10.3390/educsci15101394 - 17 Oct 2025
Abstract
This study examines the potential of the AI chatbot ChatGPT-4o to support human-centered tasks such as qualitative research analysis. It focuses on a case study involving an initial university teaching training program at the Universidad Politécnica de Madrid (UPM), evaluated through student feedback. The findings indicate that ChatGPT can assist in the qualitative analysis of student assessments by identifying specific issues and suggesting possible solutions. However, expert oversight remains necessary as the tool lacks a full contextual understanding of the actions evaluated. The study concludes that AI systems like ChatGPT offer powerful means to complement complex human-centered tasks and anticipates their growing role in the evaluation of formative programs. By examining ChatGPT’s performance in this context, the study lays the groundwork for prototyping a customized automated system built on the insights gained here, capable of assessing program outcomes and supporting iterative improvements throughout each module, with the ultimate goal of enhancing the quality of the training program.
(This article belongs to the Topic AI Trends in Teacher and Student Training)
35 pages, 4244 KB  
Article
A Unified Fusion Framework with Robust LSA for Multi-Source InSAR Displacement Monitoring
by Kui Yang, Li Yan, Jun Liang and Xiaoye Wang
Remote Sens. 2025, 17(20), 3469; https://doi.org/10.3390/rs17203469 - 17 Oct 2025
Abstract
Time-series Interferometric Synthetic Aperture Radar (InSAR) techniques encounter substantial reliability challenges, primarily due to the presence of gross errors arising from phase unwrapping failures. These errors propagate through the processing chain and adversely affect displacement estimation accuracy, particularly in the case of a small number of SAR datasets. This study presents a unified data fusion framework designed to enhance the detection of gross errors in multi-source InSAR observations, incorporating a robust Least Squares Adjustment (LSA) methodology. The proposed framework develops a comprehensive mathematical model that integrates the fusion of multi-source InSAR data with robust LSA analysis, thereby establishing a theoretical foundation for the integration of heterogeneous datasets. Then, a systematic, reliability-driven data fusion workflow with robust LSA is developed, which synergistically combines Multi-Temporal InSAR (MT-InSAR) processing, homonymous Persistent Scatterer (PS) set generation, and iterative Baarda’s data snooping based on statistical hypothesis testing. This workflow facilitates the concurrent localization of gross errors and optimization of displacement parameters within the fusion process. Finally, the framework is rigorously evaluated using datasets from Radarsat-2 and two Sentinel-1 acquisition campaigns over the Tianjin Binhai New Area, China. Experimental results indicate that gross errors were successfully identified and removed from 11.1% of the homonymous PS sets. Following the robust LSA application, vertical displacement estimates exhibited a Root Mean Square Error (RMSE) of 5.7 mm/yr when compared to high-precision leveling data. Furthermore, a localized analysis incorporating both leveling validation and time series comparison was conducted in the Airport Economic Zone, revealing a substantial 42.5% improvement in accuracy compared to traditional Ordinary Least Squares (OLS) methodologies. Reliability assessments further demonstrate that the integration of multiple InSAR datasets significantly enhances both internal and external reliability metrics compared to single-source analyses. This study underscores the efficacy of the proposed framework in mitigating errors induced by phase unwrapping inaccuracies, thereby enhancing the robustness and credibility of InSAR-derived displacement measurements.
(This article belongs to the Special Issue Applications of Radar Remote Sensing in Earth Observation)
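Baarda's data snooping, which the workflow above applies iteratively, flags one gross error at a time by statistical testing of residuals and refits after each rejection. A heavily simplified toy version — refitting only a mean rather than the full design-matrix w-test, with an illustrative threshold and made-up observations — looks like this:

```python
# Toy iterative data snooping: flag the largest standardized residual,
# reject it, refit, repeat. Observations and threshold are hypothetical;
# real Baarda testing works on least-squares residuals of a full model.

def snoop(obs, threshold=1.7):
    data = list(obs)
    while len(data) > 2:
        mean = sum(data) / len(data)
        var = sum((x - mean) ** 2 for x in data) / (len(data) - 1)
        std = var ** 0.5
        worst = max(data, key=lambda x: abs(x - mean))
        if std == 0 or abs(worst - mean) / std <= threshold:
            break                 # no observation fails the test: stop
        data.remove(worst)        # reject the flagged gross error, refit
    return data

clean = snoop([5.1, 5.0, 4.9, 5.2, 9.8])  # 9.8 is a planted outlier
```

The one-at-a-time rejection matters: a single gross error inflates the fitted variance, so removing it first keeps the remaining inliers from being falsely flagged.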
12 pages, 3358 KB  
Article
High-Fidelity MicroCT Reconstructions of Cardiac Devices Enable Patient-Specific Simulation for Structural Heart Interventions
by Zhongkai Zhu, Yaojia Zhou, Yong Chen, Yong Peng, Mao Chen and Yuan Feng
J. Clin. Med. 2025, 14(20), 7341; https://doi.org/10.3390/jcm14207341 - 17 Oct 2025
Abstract
Background/Objective: Precise preprocedural planning is essential for the safety and efficacy of structural heart interventions. Conventional imaging modalities, while informative, do not allow for direct and accurate visualization, limiting procedural predictability. We aimed to develop and validate a high-resolution micro-computed tomography (microCT)-based reverse modeling workflow that integrates digital reconstructions of metallic cardiac devices into patient imaging datasets, enabling accurate, patient-specific virtual simulation for procedural planning. Methods: Clinical-grade transcatheter heart valves, septal defect occluders, patent ductus arteriosus occluders, left atrial appendage closure devices, and coronary stents were scanned using microCT (36.9 μm resolution). Agreement was assessed by intra-class correlation coefficients (ICC) and Bland–Altman analyses. Device geometries were reconstructed into 3D stereolithography files and virtually implanted within multislice CT datasets using dedicated software. Results: Devices were successfully reverse-modeled with high geometric fidelity, showing negligible dimensional deviations from manufacturer specifications (mean ΔDistance range: −0.20 to +0.20 mm). Simulated measurements demonstrated excellent concordance with postprocedural imaging (ICC 0.90–0.96). The workflow accurately predicted clinically relevant parameters such as valve-to-coronary distances and implantation depths. Notably, preprocedural simulation identified a case at high risk of coronary obstruction, confirmed clinically and managed successfully. Conclusions: The microCT-based reverse modeling workflow offers a rapid, reproducible, and clinically relevant method for patient-specific simulation in structural heart interventions. By preserving anatomical fidelity and providing detailed device–tissue spatial visualization, this approach enhances preprocedural planning accuracy, risk stratification, and procedural safety. Its resource-efficient digital nature facilitates broad adoption and iterative simulation.
(This article belongs to the Special Issue Clinical Insights and Advances in Structural Heart Disease)

21 pages, 386 KB  
Article
DKASQL: Dynamic Knowledge Adaptation for Domain-Specific Text-to-SQL
by Huaxing Bian, Guanrong Li, Yifan Wang, Qinghan Fu, Jian Shen, Xixian Yao and Zhen Wu
Appl. Sci. 2025, 15(20), 11121; https://doi.org/10.3390/app152011121 - 16 Oct 2025
Abstract
Text2SQL aims to translate natural language queries into structured query language (SQL). LLM-based Text2SQL methods have gradually become mainstream because of their strong capabilities in language understanding and transformation. However, in real-world scenarios with non-public or limited data resources, these methods still face challenges such as insufficient domain knowledge, SQL generation that violates domain-specific constraints, and even hallucination issues. To address these challenges, this paper proposes DKASQL, a domain-specific Text2SQL method based on dynamic knowledge adaptation. The approach features an extraction module, a generation module, and an LLM-based verification module. Through an iterative “extraction–verification” and “generation–verification” mechanism, it dynamically updates the required knowledge, effectively improving both domain knowledge acquisition and the alignment between generated SQL and domain knowledge. To enhance efficiency, DKASQL also incorporates a memory storage mechanism that automatically retains commonly used domain knowledge to reduce iteration overhead. Experiments on the open-source multi-domain dataset BIRD and the ElecSQL dataset collected from the power-grid supply-chain domain show that DKASQL achieves significant performance improvements. With 7B and 32B base models, DKASQL achieves performance comparable to much larger models such as GPT-4o mini and DeepSeek-V3, with less computational overhead. For instance, on ElecSQL, DKASQL improves the execution success rate (ESR) by up to +26.9% and the result accuracy (RA) by up to +8.8% over GPT-4o mini, highlighting its effectiveness in domain-specific Text2SQL tasks.
(This article belongs to the Special Issue Applications of Data Science and Artificial Intelligence)
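The iterative "generation–verification" mechanism described above can be sketched as a plain retry loop: verifier feedback is fed back into the generator, and validated outputs are retained for reuse. The `generate`/`verify` callables and the memory list below are hypothetical stand-ins for the paper's LLM-based modules, not its actual interfaces:

```python
# Skeleton of a generation-verification loop with a simple memory store.
# Both callables are toy stand-ins; a real system would wrap LLM calls.

def generate_with_verification(question, generate, verify, max_rounds=3):
    feedback, memory = None, []
    for _ in range(max_rounds):
        sql = generate(question, feedback)
        ok, feedback = verify(sql)
        if ok:
            memory.append(sql)      # retain validated knowledge for reuse
            return sql, memory
    return None, memory

# Toy stand-ins: the "verifier" rejects SQL missing a WHERE clause once,
# and the "generator" repairs it when it receives feedback.
def fake_generate(q, fb):
    return "SELECT * FROM t WHERE id = 1" if fb else "SELECT * FROM t"
def fake_verify(sql):
    return ("WHERE" in sql), "add a filter predicate"

sql, memory = generate_with_verification("q", fake_generate, fake_verify)
```

The bounded round count is the knob that trades the iteration overhead mentioned in the abstract against the chance of converging on constraint-satisfying SQL.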

18 pages, 3038 KB  
Article
A Multi-Objective Metaheuristic and Multi-Armed Bandit Hybrid-Based Multi-Corridor Coupled TTC Calculation Method
by Zengjie Sun, Wenle Song, Lei Wang and Jiahao Zhang
Electronics 2025, 14(20), 4075; https://doi.org/10.3390/electronics14204075 - 16 Oct 2025
Abstract
The calculation of Total Transfer Capability (TTC) for transmission corridors serves as the foundation for security region determination and electricity market transactions. However, existing TTC methods often neglect corridor correlations, leading to overly optimistic results. TTC computation involves complex stability verification and requires enumerating numerous renewable energy operation scenarios to establish security boundaries, exhibiting high non-convexity and nonlinearity that challenge gradient-based iterative algorithms in approaching global optima. Furthermore, practical power systems feature coupled corridor effects, transforming multi-corridor TTC into a complex Pareto frontier search problem. This paper proposes a MOEA/D-FRRMAB (Fitness–Rate–Reward Multi-Armed Bandit)-based method featuring: (1) a TTC model incorporating transient angle stability constraints, steady-state operational limits, and inter-corridor power interactions and (2) a decomposition strategy converting the multi-objective problem into subproblems, enhanced by MOEA/D-FRRMAB for improved Pareto front convergence and diversity. IEEE 39-bus tests demonstrate superior solution accuracy and diversity, providing dispatch centers with more reliable multi-corridor TTC strategies.
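Decomposition methods in the MOEA/D family, like the one above, typically convert the multi-objective problem into scalar subproblems via Tchebycheff scalarization, one weight vector per subproblem. A minimal sketch with hypothetical objective values, weights, and ideal point (not the paper's TTC model):

```python
# Tchebycheff scalarization: the standard decomposition used by
# MOEA/D-style algorithms. All numbers below are illustrative only.

def tchebycheff(objectives, weights, ideal):
    # Each weight vector defines one scalar subproblem; minimizing this
    # value over the population pushes solutions toward the Pareto front.
    return max(w * abs(f - z) for f, w, z in zip(objectives, weights, ideal))

# Two normalized corridor objectives, an even weight vector, ideal point 0.
score = tchebycheff([0.4, 0.7], weights=[0.5, 0.5], ideal=[0.0, 0.0])
```

Sweeping many weight vectors yields many subproblems whose optima together trace the Pareto frontier, which is how the decomposition strategy in the abstract produces a diverse front.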

21 pages, 2277 KB  
Article
Computation Offloading and Resource Allocation Strategy Considering User Mobility in Multi-UAV Assisted Semantic Communication Networks
by Wenxi Han, Yu Du, Yijun Guo, Jianjun Hao and Xiaoshijie Zhang
Electronics 2025, 14(20), 4067; https://doi.org/10.3390/electronics14204067 - 16 Oct 2025
Abstract
Multi-unmanned aerial vehicle (UAV)-assisted communication is a critical technology for the low-altitude economy, supporting applications from logistics to emergency response. Semantic communication effectively enhances transmission efficiency and improves the communication performance of multi-UAV-assisted systems. Existing research on multi-UAV semantic communication networks predominantly assumes static ground devices, overlooking the critical challenge of dynamically managing computation offloading and resources for mobile users, whose varying channel conditions and semantic compression needs directly impact system performance. To address this gap, this paper proposes a multi-UAV-assisted semantic communication model that integrates user mobility with adaptive semantic compression, formulating a joint optimization problem for computation offloading and resource allocation. The objective is to minimize the maximum task processing latency through the joint optimization of UAV–device association, UAV trajectories, transmission power, task offloading ratios, and semantic compression depth. To solve this problem, we design a MAPPO-APSO algorithm integrating alternating iteration, multi-agent proximal policy optimization (MAPPO), and adaptive particle swarm optimization (APSO). Simulation results demonstrate that the proposed algorithm reduces the maximum task latency and system energy consumption by up to 20.7% and 16.1%, respectively, while maintaining transmission performance and outperforming benchmark approaches.
(This article belongs to the Special Issue Recent Advances in Semantic Communications and Networks)

23 pages, 7004 KB  
Article
The Transformation of West Bay Area, Doha’s Business Center, Through Transit-Oriented Development
by Raffaello Furlan, Reem Awwaad, Alaa Alrababaa and Hatem Ibrahim
Sustainability 2025, 17(20), 9154; https://doi.org/10.3390/su17209154 - 16 Oct 2025
Abstract
Urbanization has posed significant challenges to cities globally, including urban sprawl, traffic congestion, reduced livability, and poor walkability. In Doha, Qatar’s capital, these issues are particularly pronounced in the West Bay Central Business District (CBD). Transit-Oriented Development (TOD) is widely recognized as a key strategy to advance sustainable urbanism and mitigate such challenges. This study employs the Integrated Modification Methodology (IMM) to systematically assess the urban design and spatial configuration of West Bay through observational analysis. The research aims to reassess the urban form and enhance transit integration through a multi-stage, iterative process, focusing on critical determinants such as compactness, complexity, and connectivity. The analysis is structured around five essential design dimensions: (i) walkability, (ii) ground-level land use balance, (iii) mixed-use and public spaces, (iv) inter-modality and transport hubs, and (v) the public transportation network. Findings reveal key urban design deficiencies, including limited intermodal connectivity, insufficient green open spaces, and a lack of diverse land use around the metro station. To address these gaps, the study proposes a set of context-sensitive policy and design guidelines to support TOD-based regeneration. This research contributes directly to SDG 11: Sustainable Cities and Communities, and supports SDG 9 and SDG 13 through its emphasis on infrastructure integration and climate-responsive planning. The findings offer practical insights for urban planners, developers, and policymakers engaged in sustainable urban transformation.
(This article belongs to the Section Development Goals towards Sustainability)

25 pages, 3867 KB  
Article
Edge Computing Task Offloading Algorithm Based on Distributed Multi-Agent Deep Reinforcement Learning
by Hui Li, Zhilong Zhu, Yingying Li, Wanwei Huang and Zhiheng Wang
Electronics 2025, 14(20), 4063; https://doi.org/10.3390/electronics14204063 - 15 Oct 2025
Abstract
As an important supplement to ground computing, edge computing can effectively alleviate the computational burden on ground systems. In the context of integrating edge computing with low-Earth-orbit satellite networks, this paper proposes an edge computing task offloading algorithm based on distributed multi-agent deep reinforcement learning (DMADRL) to address the challenges of task offloading, including low transmission rates, low task completion rates, and high latency. Firstly, a Ground–UAV–LEO (GUL) three-layer architecture is constructed to improve the offloading transmission rate. Secondly, the task offloading problem is decomposed into two sub-problems: offloading decisions and resource allocation. The former is addressed using a distributed multi-agent deep Q-network, where the problem is formulated as a Markov decision process. The Q-value estimation is iteratively optimized through the online and target networks, enabling the agent to make autonomous decisions based on ground and satellite load conditions, utilize the experience replay buffer to store samples, and achieve global optimization via global reward feedback. The latter employs the gradient descent method to dynamically update the allocation strategy based on the accumulated task data volume and the remaining resources, while adjusting the allocation through iterative convergence error feedback. Simulation results demonstrate that the proposed algorithm increases the average transmission rate by 21.7%, enhances the average task completion rate by at least 22.63% compared with benchmark algorithms, and reduces the average task processing latency by at least 11.32%, thereby significantly improving overall system performance.
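The online/target-network Q-value optimization mentioned above follows the standard DQN pattern: the online estimate is nudged toward a bootstrap target computed from a frozen copy of the network. A tabular toy version, with hypothetical states, actions, and reward and no neural network, is:

```python
# Tabular sketch of the DQN-style online/target update. States (0, 1),
# actions ("offload", "local"), and the reward are hypothetical.

def dqn_style_update(q_online, q_target, s, a, r, s_next, alpha=0.5, gamma=0.9):
    # Bootstrap from the frozen target table, as a DQN does from its
    # periodically synchronized target network.
    td_target = r + gamma * max(q_target[s_next].values())
    q_online[s][a] += alpha * (td_target - q_online[s][a])

q_online = {0: {"offload": 0.0, "local": 0.0},
            1: {"offload": 0.0, "local": 0.0}}
q_target = {s: dict(acts) for s, acts in q_online.items()}  # frozen copy
dqn_style_update(q_online, q_target, s=0, a="offload", r=1.0, s_next=1)
```

Keeping the target table frozen between periodic syncs is what stabilizes the iteration; updating against a moving target is a classic source of divergence in Q-learning.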

20 pages, 4701 KB  
Article
FMCW LiDAR Nonlinearity Compensation Based on Deep Reinforcement Learning with Hybrid Prioritized Experience Replay
by Zhiwei Li, Ning Wang, Yao Li, Jiaji He and Yiqiang Zhao
Photonics 2025, 12(10), 1020; https://doi.org/10.3390/photonics12101020 - 15 Oct 2025
Abstract
Frequency-modulated continuous-wave (FMCW) LiDAR systems are extensively utilized in industrial metrology, autonomous navigation, and geospatial sensing due to their high precision and resilience to interference. However, the intrinsic nonlinear dynamics of laser systems introduce significant distortion, adversely affecting measurement accuracy. Although conventional iterative pre-distortion correction methods can effectively mitigate nonlinearities, their long-term reliability is compromised by factors such as temperature-induced drift and component aging, necessitating periodic recalibration. In light of recent advances in artificial intelligence, deep reinforcement learning (DRL) has emerged as a promising approach to adaptive nonlinear compensation. By continuously interacting with the environment, DRL agents can dynamically modify correction strategies to accommodate evolving system behaviors. Nonetheless, existing DRL-based methods often exhibit limited adaptability in rapidly changing nonlinear contexts and are constrained by inefficient uniform experience replay mechanisms that fail to emphasize critical learning samples. To address these limitations, this study proposes an enhanced Soft Actor-Critic (SAC) algorithm incorporating a hybrid prioritized experience replay framework. The prioritization mechanism integrates modulation frequency (MF) error and temporal difference (TD) error, enabling the algorithm to dynamically reconcile short-term nonlinear perturbations with long-term optimization goals. Furthermore, a time-varying delayed experience (TDE) injection strategy is introduced, which adaptively modulates data storage intervals based on the rate of change in modulation frequency error, thereby improving data relevance, enhancing sample diversity, and increasing training efficiency. Experimental validation demonstrates that the proposed method achieves superior convergence speed and stability in nonlinear correction tasks for FMCW LiDAR systems. The residual nonlinearity of the upward and downward frequency sweeps was reduced to 1.869 × 10⁻⁵ and 1.9411 × 10⁻⁵, respectively, with a spatial resolution of 0.0203 m. These results underscore the effectiveness of the proposed approach in advancing intelligent calibration methodologies for LiDAR systems and highlight its potential for broad application in high-precision measurement domains.
(This article belongs to the Special Issue Advancements in Optical Measurement Techniques and Applications)

23 pages, 3752 KB  
Article
Leveraging Immersive Technologies for Safety Evaluation in Forklift Operations
by Patryk Żuchowicz and Konrad Lewczuk
Appl. Sci. 2025, 15(20), 11048; https://doi.org/10.3390/app152011048 - 15 Oct 2025
Abstract
This article presents a novel methodology for evaluating the safety of forklift operations in intralogistics systems using a multi-user simulation model integrated with virtual reality (MUSM-VR). Set against the backdrop of persistent safety challenges in warehouse environments, particularly for inexperienced operators, the study addresses the need for proactive safety assessment tools. The authors develop a simulation framework within the FlexSim 24.2 environment, enhanced by proprietary VR and server integration libraries, enabling interactive, immersive testing of warehouse layouts and operational scenarios. Through literature review and analysis of risk factors, the methodology incorporates human, infrastructural, organizational, and technical dimensions of forklift safety. A case study involving inexperienced participants demonstrates the model’s capability to identify high-risk areas, assess operator behavior, and evaluate the impact of visibility and speed parameters on collision risk. Results highlight the effectiveness of MUSM-VR in pinpointing hazardous intersections and inform design recommendations such as optimal speed limits and layout modifications. The study concludes that MUSM-VR not only facilitates early-stage safety analysis but also supports ergonomic design, operator training, and iterative testing of preventive measures, aligning with Industry 4.0 and 5.0 paradigms. The integration of immersive simulation into design and safety workflows marks a significant advancement in intralogistics system development.
(This article belongs to the Section Applied Industrial Technologies)
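The high-risk-area identification described above can be approximated, in outline, by binning logged incident positions into grid cells and ranking the cells by event count. The sketch below is a minimal illustration of that idea only, not the authors' FlexSim/MUSM-VR implementation; the function name, cell size, and sample event log are all invented for illustration.

```python
from collections import Counter

def hazard_hotspots(events, cell_size=2.0, top_n=3):
    """Bin recorded near-miss positions (x, y in metres) into square
    grid cells and return the most frequently hit cells."""
    counts = Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in events
    )
    return counts.most_common(top_n)

# Hypothetical near-miss log aggregated over simulation runs
events = [(1.0, 1.5), (1.2, 1.8), (5.5, 0.4), (1.9, 1.1)]
print(hazard_hotspots(events))  # cell (0, 0) accounts for three events
```

A ranking like this could then feed design recommendations, e.g. lower speed limits or layout changes around the dominant cells.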

20 pages, 2320 KB  
Article
Signal Detection Method for OTFS System Based on Feature Fusion and CNN
by You Wu, Mengyao Zhou, Yuanjin Lin and Zixing Liao
Electronics 2025, 14(20), 4041; https://doi.org/10.3390/electronics14204041 - 14 Oct 2025
Abstract
For orthogonal time–frequency space (OTFS) systems in high-mobility scenarios, traditional signal detection algorithms face challenges due to their reliance on channel state information (CSI), requiring excessive pilot overhead. Meanwhile, detection methods based on convolutional neural networks (CNNs) suffer from insufficient signal feature extraction, and the message passing (MP) algorithm exhibits low efficiency in iterative signal updates. This paper proposes a signal detection method for OTFS systems based on feature fusion and a CNN (MP-WCNN), which employs wavelet decomposition to extract multi-scale signal features, combines them with MP-enhanced features for fusion, and constructs high-dimensional feature tensors through channel-wise concatenation as the CNN input to achieve signal detection. Experimental results demonstrate that the proposed MP-WCNN method achieves approximately 9 dB signal-to-noise ratio (SNR) gain compared to the MP algorithm at the same bit error rate (BER). Furthermore, the proposed method operates without requiring pilot assistance for CSI acquisition. Full article
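As a rough illustration of the multi-scale extraction and channel-wise concatenation the abstract describes, the sketch below runs a plain Haar wavelet decomposition and stacks the resulting scales into a feature tensor. It is a simplified stand-in for the paper's MP-WCNN pipeline (it omits the MP-enhanced features entirely), and both function names are invented.

```python
import numpy as np

def haar_decompose(signal, levels=2):
    """One-dimensional Haar wavelet decomposition: at each level the
    signal splits into coarse approximation and detail coefficients."""
    features = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        detail = (even - odd) / np.sqrt(2)   # high-frequency content
        approx = (even + odd) / np.sqrt(2)   # low-frequency content
        features.append(detail)
    features.append(approx)
    return features

def feature_tensor(signal, levels=2):
    """Upsample each scale back to the original length and stack the
    scales channel-wise, yielding a (levels + 1, N) tensor as CNN input."""
    n = len(signal)
    feats = haar_decompose(signal, levels)
    return np.stack([np.repeat(f, n // len(f)) for f in feats])

x = np.arange(8, dtype=float)
t = feature_tensor(x)
print(t.shape)  # (3, 8): two detail channels plus one approximation channel
```

In the paper's setting the channels would be richer (MP-refined estimates concatenated with wavelet features), but the tensor-building step follows the same channel-stacking pattern.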

30 pages, 7599 KB  
Article
Strategic Launch Pad Positioning: Optimizing Drone Path Planning Through Genetic Algorithms
by Gregory Gasteratos and Ioannis Karydis
Information 2025, 16(10), 897; https://doi.org/10.3390/info16100897 - 14 Oct 2025
Abstract
Multi-drone operations face significant efficiency challenges when launch pad locations are predetermined without optimization, leading to suboptimal route configurations and increased travel distances. This research addresses launch pad positioning as a continuous planar location-routing problem (PLRP), developing a genetic algorithm framework integrated with multiple Traveling Salesman Problem (mTSP) solvers to optimize launch pad coordinates within operational areas. The methodology was evaluated through extensive experimentation involving over 17 million test executions across varying problem complexities and compared against brute-force optimization, Particle Swarm Optimization (PSO), and simulated annealing (SA) approaches. The results demonstrate that the genetic algorithm achieves 97–100% solution accuracy relative to exhaustive search methods while reducing computational requirements by four orders of magnitude, requiring an average of 527 iterations compared to 30,000 for PSO and 1000 for SA. Smart initialization strategies and adaptive termination criteria provide additional performance enhancements, reducing computational effort by 94% while maintaining 98.8% solution quality. Statistical validation confirms systematic improvements across all tested scenarios. This research establishes a validated methodological framework for continuous launch pad optimization in UAV operations, providing practical insights for real-world applications where both solution quality and computational efficiency are critical operational factors, while acknowledging that the simplified energy model is a limitation warranting future research into more complex operational dynamics. Full article
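A minimal sketch of the continuous-coordinate genetic search the abstract outlines: a population of candidate pad positions evolves under elitist truncation and Gaussian mutation, with a nearest-neighbour tour standing in for the paper's mTSP solvers. All parameter defaults, bounds, and target coordinates here are invented for illustration; this is not the authors' framework.

```python
import math
import random

def route_length(pad, targets):
    """Nearest-neighbour tour from the pad through all targets and back,
    used as a cheap stand-in for a full mTSP solver."""
    pos, remaining, total = pad, list(targets), 0.0
    while remaining:
        nxt = min(remaining, key=lambda t: math.dist(pos, t))
        total += math.dist(pos, nxt)
        remaining.remove(nxt)
        pos = nxt
    return total + math.dist(pos, pad)

def optimize_pad(targets, pop_size=30, gens=100, bounds=(0.0, 10.0), seed=0):
    """Genetic search over continuous pad coordinates: keep the best half
    of the population each generation and refill it with mutated copies."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [(rng.uniform(lo, hi), rng.uniform(lo, hi)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: route_length(p, targets))
        parents = pop[: pop_size // 2]          # elitist truncation
        children = []
        for p in parents:
            # Gaussian mutation, clipped to the operational area
            x = min(max(p[0] + rng.gauss(0, 0.5), lo), hi)
            y = min(max(p[1] + rng.gauss(0, 0.5), lo), hi)
            children.append((x, y))
        pop = parents + children
    return min(pop, key=lambda p: route_length(p, targets))

targets = [(2, 2), (2, 8), (8, 2), (8, 8)]
pad = optimize_pad(targets)
print(pad, route_length(pad, targets))
```

Replacing the nearest-neighbour heuristic with an exact or metaheuristic mTSP solver changes only the fitness function; the outer continuous-coordinate search is unaffected.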
