
Search Results (401)

Search Parameters:
Keywords = entropy convergence

16 pages, 1907 KB  
Article
Distinctive Human Dynamics of Semantic Uncertainty: Contextual Bias Accelerates Lexical Disambiguation
by Yang Lei, Linyan Liu, Jie Chen, Chan Tang, Siyi Fan, Yongqiang Cai and Guosheng Ding
Behav. Sci. 2025, 15(9), 1159; https://doi.org/10.3390/bs15091159 - 26 Aug 2025
Viewed by 190
Abstract
This study investigated the dynamic resolution of lexical–semantic ambiguity during sentence comprehension, focusing on how uncertainty evolves as contextual information accumulates. Using time-resolved eye-tracking and a novel entropy-based measure derived from group-level semantic choice distributions, we quantified semantic uncertainty at a fine-grained temporal resolution for ambiguous words. By parametrically manipulating the semantic bias strength of the sentence context, we examined how context guides disambiguation over time. The results showed that semantic uncertainty declined gradually over temporal segments and dropped sharply following the onset of ambiguous words, reflecting both incremental integration and syntactic anchoring. A stronger contextual bias led to faster reductions in uncertainty, with effects following a near-linear trend. These findings support dynamic semantic processing models that assume continuous, context-sensitive convergence toward intended meanings. In contrast, a pretrained Chinese BERT model (RoBERTa-wwm-ext) showed similar overall trends in uncertainty reduction but lacked sensitivity to contextual bias. This discrepancy suggests that, while language models can approximate human-level disambiguation broadly, they fail to capture fine-grained semantic modulation driven by context. These findings provide a novel empirical characterization of disambiguation dynamics and offer a new methodological approach to capturing real-time semantic uncertainty. The observed divergence between human and model performance may inform future improvements to language models and contributes to our understanding of possible architectural differences between human and artificial semantic systems. Full article
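The entropy-based uncertainty measure described above can be illustrated with a minimal sketch. The function name and the word-sense labels are invented for illustration; the paper's actual estimator operates on time-resolved, group-level choice distributions from eye-tracking data.

```python
import math
from collections import Counter

def semantic_entropy(choices):
    """Shannon entropy (bits) of a group-level distribution of semantic
    choices in one time window; 0 bits = fully disambiguated."""
    counts = Counter(choices)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Early in the sentence, readers split evenly between two senses of "bank":
print(semantic_entropy(["river", "money", "river", "money"]))  # 1.0
# After a strongly biasing context, everyone converges on one sense:
print(semantic_entropy(["river", "river", "river", "river"]))  # zero uncertainty
```

Tracking this quantity across successive time windows yields the declining uncertainty curves the study analyzes.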

24 pages, 8688 KB  
Article
Lightweight Obstacle Avoidance for Fixed-Wing UAVs Using Entropy-Aware PPO
by Meimei Su, Haochen Chai, Chunhui Zhao, Yang Lyu and Jinwen Hu
Drones 2025, 9(9), 598; https://doi.org/10.3390/drones9090598 - 26 Aug 2025
Viewed by 432
Abstract
Obstacle avoidance during high-speed, low-altitude flight remains a significant challenge for unmanned aerial vehicles (UAVs), particularly in unfamiliar environments where prior maps and heavy onboard sensors are unavailable. To address this, we present an entropy-aware deep reinforcement learning framework that enables fixed-wing UAVs to navigate safely using only monocular onboard cameras. Our system features a lightweight, single-frame depth estimation module optimized for real-time execution on edge computing platforms, followed by a reinforcement learning controller equipped with a novel reward function that balances goal-reaching performance with path smoothness under fixed-wing dynamic constraints. To enhance policy optimization, we incorporate high-quality experiences from the replay buffer into the gradient computation, introducing a soft imitation mechanism that encourages the agent to align its behavior with previously successful actions. To further balance exploration and exploitation, we integrate an adaptive entropy regularization mechanism into the Proximal Policy Optimization (PPO) algorithm. This module dynamically adjusts policy entropy during training, leading to improved stability, faster convergence, and better generalization to unseen scenarios. Extensive software-in-the-loop (SITL) and hardware-in-the-loop (HITL) experiments demonstrate that our approach outperforms baseline methods in obstacle avoidance success rate and path quality, while remaining lightweight and deployable on resource-constrained aerial platforms. Full article
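The adaptive entropy regularization in PPO can be sketched as follows. The update rule, learning rate, and target entropy here are illustrative assumptions, not the paper's actual mechanism; only the idea of adjusting the entropy coefficient during training comes from the abstract.

```python
import math

def policy_entropy(probs):
    """Entropy (nats) of a discrete action distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def adapt_entropy_coef(coef, probs, target, lr=0.01):
    """Assumed adaptive rule: raise the entropy coefficient when the policy
    is more deterministic than the target entropy, lower it otherwise."""
    return max(0.0, coef + lr * (target - policy_entropy(probs)))

def ppo_objective(clipped_surrogate, value_loss, probs, ent_coef):
    """PPO objective (to be maximized): surrogate minus value loss plus an
    entropy bonus that discourages premature convergence."""
    return clipped_surrogate - 0.5 * value_loss + ent_coef * policy_entropy(probs)
```

A near-deterministic policy drives the coefficient up, restoring exploration; a high-entropy policy drives it down, favoring exploitation.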

18 pages, 1460 KB  
Article
Sustainable Optimization Design of Architectural Space Based on Visual Perception and Multi-Objective Decision Making
by Qunjing Ji, Yu Cai and Osama Sohaib
Buildings 2025, 15(16), 2940; https://doi.org/10.3390/buildings15162940 - 19 Aug 2025
Viewed by 231
Abstract
This study proposes an integrated computational framework that combines deep learning-based visual perception analysis with multi-criteria decision making to optimize indoor architectural layouts in terms of both visual coherence and sustainability. The framework initially employs a deep learning method leveraging edge pixel feature recombination to extract critical spatial layout features and determine key visual focal points. A fusion model is then constructed to preprocess visual representations of interior layouts. Subsequently, an evolutionary deep learning algorithm is adopted to optimize parameter convergence and enhance feature extraction accuracy. To support comprehensive evaluation and decision making, an improved Analytic Hierarchy Process (AHP) is integrated with the entropy weight method, enabling the fusion of objective, data-driven weights with subjective expert judgments. This dual-focus framework addresses two pressing challenges in architectural optimization: sensitivity to building-specific spatial features and the traditional disconnect between perceptual analysis and sustainability metrics. Experimental results on a dataset of 25,400 building images demonstrate that the proposed method achieves a feature detection accuracy of 92.3%, surpassing CNN (73.6%), RNN (68.2%), and LSTM (75.1%) baselines, while reducing the processing time to under 0.95 s and lowering the carbon footprint to 17.8% of conventional methods. These findings underscore the effectiveness and practicality of the proposed model in facilitating intelligent, sustainable architectural design. Full article
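The entropy weight method used in the evaluation stage can be sketched in plain Python. The decision matrix, the 50/50 fusion with expert (AHP) weights, and the function names are illustrative assumptions; the paper's improved AHP produces the subjective weights.

```python
import math

def entropy_weights(matrix):
    """Entropy weight method: criteria whose values vary more across
    alternatives carry more information and receive larger weights."""
    m, n = len(matrix), len(matrix[0])
    weights = []
    for j in range(n):
        col = [row[j] for row in matrix]
        s = sum(col)
        p = [x / s for x in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        weights.append(1 - e)  # divergence degree of criterion j
    total = sum(weights)
    return [w / total for w in weights]

def combine(subjective, objective, alpha=0.5):
    """Fuse expert (AHP) and data-driven (entropy) weights; alpha = 0.5
    is an assumed blending factor, not a value from the paper."""
    fused = [alpha * s + (1 - alpha) * o for s, o in zip(subjective, objective)]
    t = sum(fused)
    return [f / t for f in fused]
```

A criterion that is constant across all alternatives carries no information and receives (near-)zero weight.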

31 pages, 18843 KB  
Article
Liquid Adaptive AI: A Theoretical Framework for Continuously Self-Improving Artificial Intelligence
by Thomas R. Caulfield, Naeyma N. Islam and Rohit Chitale
AI 2025, 6(8), 186; https://doi.org/10.3390/ai6080186 - 14 Aug 2025
Viewed by 800
Abstract
We present Liquid Adaptive AI as a theoretical framework and mathematical basis for artificial intelligence systems capable of continuous structural adaptation and autonomous capability development. This work explores the conceptual boundaries of adaptive AI by formalizing three interconnected mechanisms: (1) entropy-guided hyperdimensional knowledge graphs that could autonomously restructure based on information-theoretic criteria; (2) a self-development engine using hierarchical Bayesian optimization for runtime architecture modification; and (3) a federated multi-agent framework with emergent specialization through distributed reinforcement learning. We address fundamental limitations in current AI systems through mathematically formalized processes of dynamic parameter adjustment, structural self-modification, and cross-domain knowledge synthesis. While immediate implementation faces substantial computational challenges, requiring infrastructure on the scale of current large language model training facilities, we provide architectural specifications, theoretical convergence bounds, and evaluation criteria as a foundation for future research. This theoretical exploration establishes mathematical foundations for a potential new paradigm in artificial intelligence that would transition from episodic training to persistent autonomous development, offering a long-term research direction for the field. A comprehensive Supplementary Materials document provides detailed technical analysis, computational requirements, and an incremental development roadmap spanning approximately a decade. Full article

21 pages, 2569 KB  
Article
Deep Learning and COVID-19: Two Pathways to Scientific Evolution
by Huquan Kang, Hanyan Dong, Yuang Ding, Zhouyang Jin, Luoyi Fu, Jiaxin Ding, Xinbing Wang, Lei Zhou and Chenghu Zhou
Appl. Sci. 2025, 15(16), 8912; https://doi.org/10.3390/app15168912 - 13 Aug 2025
Viewed by 304
Abstract
COVID-19 and deep learning have each marked pivotal milestones in the evolution of modern science. Since the onset of the pandemic, researchers from diverse disciplines have converged to address urgent, real-world challenges, while deep learning has catalyzed methodological innovation across fields. These two phenomena exemplify distinct scientific paradigms: spread-out science, which propagates novel ideas and methods, and merge-in science, which synthesizes existing knowledge to solve complex problems. We introduce the concept of sci-entropy, defined as the difference between the semantic entropy of a paper’s citations and references. Positive sci-entropy reflects the diffusion of new ideas (spread-out), whereas negative values indicate knowledge consolidation (merge-in). Our analysis, spanning deep learning, COVID-19, and 19 additional disciplines, reveals that scientific progress is governed by the dynamic interplay between these two forces. Excessively high sci-entropy may fragment research, while overly low values can stifle innovation. Our findings suggest that the balance between innovation and synthesis is fundamental to the trajectory of scientific development, offering a new framework for understanding interdisciplinary research and knowledge integration. Full article
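The sci-entropy quantity has a direct reading as a difference of Shannon entropies over topic distributions. A toy sketch, with invented field labels standing in for the semantic categories of citing papers and references:

```python
import math
from collections import Counter

def topic_entropy(topics):
    """Shannon entropy (bits) of the topic distribution of a set of papers."""
    counts = Counter(topics)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def sci_entropy(citing_topics, reference_topics):
    """sci-entropy = entropy of a paper's citations minus entropy of its
    references; positive = spread-out science, negative = merge-in."""
    return topic_entropy(citing_topics) - topic_entropy(reference_topics)

# A methods paper built on one field but cited across four: spread-out
print(sci_entropy(["cv", "nlp", "bio", "geo"], ["ml", "ml", "ml", "ml"]) > 0)  # True
```

Reversing the two arguments models merge-in science: diverse inputs consolidated into a focused output give a negative value.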

20 pages, 1876 KB  
Article
Efficient AES Side-Channel Attacks Based on Residual Mamba Enhanced CNN
by Zhaobin Li, Chenchong Du and Xiaoyi Duan
Entropy 2025, 27(8), 853; https://doi.org/10.3390/e27080853 - 11 Aug 2025
Viewed by 565
Abstract
With the continuous advancement of side-channel attacks (SCA), deep learning-based methods have emerged as a prominent research focus due to their powerful feature extraction and nonlinear modeling capabilities. Traditional convolutional neural networks (CNNs) excel at capturing local temporal dependencies but struggle to model long-range sequential information effectively, limiting attack efficiency and generalization. In this paper, we propose a hybrid deep neural network architecture that integrates Residual Mamba blocks with multi-layer perceptrons (MLP) to enhance the modeling of side-channel information from AES implementations. The Residual Mamba module leverages state-space modeling to capture long-range dependencies, improving the model’s global temporal perception, while the MLP module further fuses high-dimensional features. Experiments conducted on the publicly available ASCAD dataset targeting the second byte of AES demonstrate that our model achieves guessing entropy (GE) rank 1 with fewer than 100 attack traces, significantly outperforming traditional CNNs and recent Transformer-based models. The proposed approach exhibits fast convergence and high attack efficiency, offering an effective new paradigm for deep learning in side-channel analysis with important theoretical and practical implications. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
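The guessing entropy (GE) metric used to evaluate the attack can be sketched as follows; the key-byte values and ranking lists are invented for illustration.

```python
def guessing_entropy(ranked_key_lists, true_key):
    """Guessing entropy: average rank (1 = best) of the true key across
    repeated attacks; GE rank 1 means the attack always ranks it first."""
    ranks = [keys.index(true_key) + 1 for keys in ranked_key_lists]
    return sum(ranks) / len(ranks)

# Three attacks: the true key byte 0x3C is ranked 1st, 1st, and 3rd
attacks = [[0x3C, 0x1F, 0xA0], [0x3C, 0xA0, 0x1F], [0x1F, 0xA0, 0x3C]]
print(guessing_entropy(attacks, 0x3C))  # (1 + 1 + 3) / 3
```

"GE rank 1 with fewer than 100 traces" means this average reaches 1 once roughly 100 traces are accumulated per attack.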

19 pages, 398 KB  
Article
Analyzing Regional Disparities in China’s Green Manufacturing Transition
by Xuejuan Wang, Qi Deng, Riccardo Natoli, Li Wang, Wei Zhang and Catherine Xiaocui Lou
Sustainability 2025, 17(15), 7127; https://doi.org/10.3390/su17157127 - 6 Aug 2025
Viewed by 445
Abstract
China has identified the high-quality development of its green manufacturing transition as the top priority for upgrading its industrial structure, which is expected to lead to the sustainable development of an innovation ecosystem. To assess its progress in this area, this study uses panel data from 31 provinces in China from 2011 to 2021 and constructs an evaluation index system for the green transformation of the manufacturing industry across four dimensions: environment, resources, economy, and industrial structure. This not only comprehensively and systematically reflects the dynamic changes in the green transformation of the manufacturing industry but also addresses the limitations of currently used indices. The entropy value method is used to calculate a comprehensive score for the green transformation of the manufacturing industry, and the key factors influencing the convergence of this transformation are further explored. The results show that, first, the overall level of green transformation of the manufacturing industry has significantly improved, as evidenced by an approximate 32% increase. Second, regional differences are significant, with the eastern region experiencing significantly higher levels of transformation than the central and western regions, and a decreasing trend from the east to the central and western regions. From a policy perspective, the findings suggest that tailored production methods should be adopted for each region, with a greater emphasis on knowledge exchange to promote the green transition in less developed regions. In addition, further regulations are required which, in part, focus on increasing the degree of openness to the outside world to promote the level of green manufacturing transition. Full article
(This article belongs to the Section Sustainable Management)

30 pages, 15717 KB  
Article
Channel Amplitude and Phase Error Estimation of Fully Polarimetric Airborne SAR with 0.1 m Resolution
by Jianmin Hu, Yanfei Wang, Jinting Xie, Guangyou Fang, Huanjun Chen, Yan Shen, Zhenyu Yang and Xinwen Zhang
Remote Sens. 2025, 17(15), 2699; https://doi.org/10.3390/rs17152699 - 4 Aug 2025
Viewed by 391
Abstract
In order to achieve 0.1 m resolution and fully polarimetric observation capabilities for airborne SAR systems, the adoption of stepped-frequency modulation waveform combined with the polarization time-division transmit/receive (T/R) technique proves to be an effective technical approach. Considering the issue of range resolution degradation and paired echoes caused by multichannel amplitude–phase mismatch in fully polarimetric airborne SAR with 0.1 m resolution, an amplitude–phase error estimation algorithm based on echo data is proposed in this paper. Firstly, the subband amplitude spectrum correction curve is obtained by the statistical average of the subband amplitude spectrum. Secondly, the paired-echo broadening function is obtained by selecting high-quality sample points after single-band imaging and the nonlinear phase error within the subbands is estimated via Sinusoidal Frequency Modulation Fourier Transform (SMFT). Thirdly, based on the minimum entropy criterion of the synthesized compressed pulse image, residual linear phase errors between subbands are quickly acquired. Finally, two-dimensional cross-correlation of the image slice is utilized to estimate the positional deviation between polarization channels. This method only requires high-quality data samples from the echo data, then rapidly estimates both intra-band and inter-band amplitude/phase errors by using SMFT and the minimum entropy criterion, respectively, with the characteristics of low computational complexity and fast convergence speed. The effectiveness of this method is verified by the imaging results of the experimental data. Full article
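The minimum entropy criterion used in the third step can be sketched generically. The toy 2x2 "images" and the candidate/render interface are illustrative assumptions; the paper applies the criterion to synthesized compressed pulse images to pick residual linear phase errors.

```python
import math

def image_entropy(image):
    """Shannon entropy of normalized pixel intensities; a well-focused
    image concentrates energy into few pixels and has lower entropy."""
    total = sum(sum(row) for row in image)
    h = 0.0
    for row in image:
        for v in row:
            if v > 0:
                p = v / total
                h -= p * math.log(p)
    return h

def min_entropy_search(candidates, render):
    """Minimum entropy criterion: choose the error estimate whose corrected
    image has the lowest entropy (`render` is an assumed hook that applies a
    candidate correction and returns pixel intensities)."""
    return min(candidates, key=lambda c: image_entropy(render(c)))
```

A correctly compensated phase error sharpens the compressed pulse, concentrating energy and minimizing this entropy.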

19 pages, 2237 KB  
Article
Flood Season Division Model Based on Goose Optimization Algorithm–Minimum Deviation Combination Weighting
by Yukai Wang, Jun Li and Jing Fu
Sustainability 2025, 17(15), 6968; https://doi.org/10.3390/su17156968 - 31 Jul 2025
Viewed by 368
Abstract
The division of the flood season is of great significance for the precise operation of water conservancy projects, flood control and disaster reduction, and the rational allocation of water resources, alleviating the contradiction of the uneven spatial and temporal distribution of water resources. The single weighting method can only determine the weight of the flood season division indicators from a certain perspective and cannot comprehensively reflect the time-series attributes of the indicators. This study proposes a Flood Season Division Model based on the Goose Optimization Algorithm and Minimum Deviation Combined Weighting (FSDGOAMDCW). The model uses the Goose Optimization Algorithm (GOA) to solve the Minimum Deviation Combination model, integrating weights from two subjective methods (Expert Scoring and G1) and three objective methods (Entropy Weight, CV, and CRITIC). Combined with the Set Pair Analysis Method (SPAM), it realizes comprehensive flood season division. Based on daily precipitation data of the Nandujiang River (1961–2022), the study determines its flood season from 1 May to 30 October. Comparisons show that: ① GOA converges faster than the Genetic Algorithm, stabilizing at T = 5 and achieving full convergence at T = 24; and ② The model’s division results have the smallest Intra-Class Differences, avoiding indistinguishability between flood and non-flood seasons under special conditions. This research aims to support flood season division studies in tropical islands. Full article

25 pages, 654 KB  
Article
Entropy-Regularized Federated Optimization for Non-IID Data
by Koffka Khan
Algorithms 2025, 18(8), 455; https://doi.org/10.3390/a18080455 - 22 Jul 2025
Viewed by 396
Abstract
Federated learning (FL) struggles under non-IID client data when local models drift toward conflicting optima, impairing global convergence and performance. We introduce entropy-regularized federated optimization (ERFO), a lightweight client-side modification that augments each local objective with a Shannon entropy penalty on the per-parameter update distribution. ERFO requires no additional communication, adds a single-scalar hyperparameter λ, and integrates seamlessly into any FedAvg-style training loop. We derive a closed-form gradient for the entropy regularizer and provide convergence guarantees: under μ-strong convexity and L-smoothness, ERFO achieves the same O(1/T) (or linear) rates as FedAvg (with only O(λ) bias for fixed λ and exact convergence when λ_t → 0); in the non-convex case, we prove stationary-point convergence at O(1/T). Empirically, on five-client non-IID splits of the UNSW-NB15 intrusion-detection dataset, ERFO yields a +1.6 pp gain in accuracy and +0.008 in macro-F1 over FedAvg with markedly smoother dynamics. On a three-of-five split of PneumoniaMNIST, a fixed λ matches or exceeds FedAvg, FedProx, and SCAFFOLD—achieving 90.3% accuracy and 0.878 macro-F1—while preserving rapid, stable learning. ERFO’s gradient-only design is model-agnostic, making it broadly applicable across tasks. Full article
(This article belongs to the Special Issue Advances in Parallel and Distributed AI Computing)
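Our reading of the entropy penalty can be sketched as follows. Normalizing update magnitudes into a distribution and adding the penalty to the task loss are assumptions inferred from the abstract, not the paper's exact formulation.

```python
import math

def update_entropy(update):
    """Shannon entropy of the normalized magnitudes of one local update:
    high when the update is spread evenly across parameters."""
    mags = [abs(u) for u in update]
    s = sum(mags)
    if s == 0:
        return 0.0
    return -sum((m / s) * math.log(m / s) for m in mags if m > 0)

def erfo_objective(local_loss, update, lam):
    """Entropy-regularized local objective: task loss plus lambda times
    the entropy penalty on the update distribution (sign assumed)."""
    return local_loss + lam * update_entropy(update)
```

Penalizing spread-out updates nudges each client toward sparse, concentrated changes, which would plausibly reduce the drift between clients that the abstract describes.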

29 pages, 3288 KB  
Article
Non-Vertical Well Trajectory Design Based on Multi-Objective Optimization
by Xiaowei Li, Yu Li, Yang Wu, Zhaokai Hou and Haipeng Gu
Appl. Sci. 2025, 15(14), 7862; https://doi.org/10.3390/app15147862 - 14 Jul 2025
Viewed by 211
Abstract
The optimization and control of the wellbore trajectory is one of the important technologies to improve drilling efficiency, reduce drilling cost, and ensure drilling safety in the process of modern oil and gas exploration and development. In this paper, a multi-objective wellbore trajectory optimization mathematical model is established, which takes into account the five factors of wellbore trajectory length, friction, torque, trajectory complexity, and target accuracy. A DR-NSGA-III-MGA algorithm (dynamic reference NSGA-III with multi-granularity adaptation) is proposed. By introducing multi-granularity reference vector generation and an information entropy-guided search direction adaptation mechanism, the performance of the algorithm in the complex target space is improved, and the three-stage wellbore trajectory is optimized. Simulation experiments show that the DR-NSGA-III-MGA algorithm is stable in a variety of complex problems, while maintaining good convergence, and has good generalization ability and practical application value. Full article
(This article belongs to the Section Earth Sciences)

17 pages, 3854 KB  
Article
Research on Signal Processing Algorithms Based on Wearable Laser Doppler Devices
by Yonglong Zhu, Yinpeng Fang, Jinjiang Cui, Jiangen Xu, Minghang Lv, Tongqing Tang, Jinlong Ma and Chengyao Cai
Electronics 2025, 14(14), 2761; https://doi.org/10.3390/electronics14142761 - 9 Jul 2025
Viewed by 310
Abstract
Wearable laser Doppler devices are susceptible to complex noise interferences, such as Gaussian white noise, baseline drift, and motion artifacts, with motion artifacts significantly impacting clinical diagnostic accuracy. Addressing the limitations of existing denoising methods—including traditional adaptive filtering that relies on prior noise information, modal decomposition techniques that depend on empirical parameter optimization and are prone to modal aliasing, wavelet threshold functions that struggle to balance signal preservation with smoothness, and the high computational complexity of deep learning approaches—this paper proposes an ISSA-VMD-AWPTD denoising algorithm. This innovative approach integrates an improved sparrow search algorithm (ISSA), variational mode decomposition (VMD), and adaptive wavelet packet threshold denoising (AWPTD). The ISSA is enhanced through cubic chaotic mapping, butterfly optimization, and sine–cosine search strategies, targeting the minimization of the envelope entropy of modal components for adaptive optimization of VMD’s decomposition levels and penalty factors. A correlation coefficient-based selection mechanism is employed to separate target and mixed modes effectively, allowing for the efficient removal of noise components. Additionally, an exponential adaptive threshold function is introduced, combining wavelet packet node energy proportion analysis to achieve efficient signal reconstruction. By leveraging the rapid convergence property of ISSA (completing parameter optimization within five iterations), the computational load of traditional VMD is reduced while maintaining the denoising accuracy. Experimental results demonstrate that for a 200 Hz test signal, the proposed algorithm achieves a signal-to-noise ratio (SNR) of 24.47 dB, an improvement of 18.8% over the VMD method (20.63 dB), and a root-mean-square error (RMSE) of 0.0023, a reduction of 69.3% compared to the VMD method (0.0075). The processing results for measured human blood flow signals achieve an SNR of 24.11 dB, an RMSE of 0.0023, and a correlation coefficient (R) of 0.92, all outperforming other algorithms, such as VMD and WPTD. This study effectively addresses issues related to parameter sensitivity and incomplete noise separation in traditional methods, providing a high-precision and low-complexity real-time signal processing solution for wearable devices. However, the parameter optimization still needs improvement when dealing with large datasets. Full article
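Two pieces of this pipeline are simple enough to sketch: the envelope entropy that ISSA minimizes when tuning VMD, and the correlation-based mode selection. The signal values and the 0.3 threshold are invented, and the envelopes are assumed precomputed (e.g., via a Hilbert transform).

```python
import math

def envelope_entropy(envelope):
    """Envelope entropy of one decomposed mode: low for a sparse, structured
    envelope, high for a noise-like one (the ISSA fitness to minimize)."""
    s = sum(envelope)
    p = [e / s for e in envelope]
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def select_modes(modes, signal, threshold=0.3):
    """Keep modes whose correlation with the raw signal exceeds an assumed
    threshold; weakly correlated modes are discarded as noise."""
    return [m for m in modes if abs(pearson(m, signal)) >= threshold]
```

In the full algorithm, ISSA searches the (decomposition level, penalty factor) space for the VMD configuration whose modes minimize this envelope entropy, and the correlation filter then separates target modes from noise.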

18 pages, 70320 KB  
Article
RIS-UNet: A Multi-Level Hierarchical Framework for Liver Tumor Segmentation in CT Images
by Yuchai Wan, Lili Zhang and Murong Wang
Entropy 2025, 27(7), 735; https://doi.org/10.3390/e27070735 - 9 Jul 2025
Viewed by 525
Abstract
The deep learning-based analysis of liver CT images is expected to assist clinicians in the diagnostic decision-making process. However, the accuracy of existing methods still falls short of clinical requirements and needs to be further improved. Therefore, in this work, we propose a novel multi-level hierarchical framework for liver tumor segmentation. In the first level, we integrate inter-slice spatial information with a 2.5D network to resolve the accuracy–efficiency trade-off inherent in conventional 2D/3D segmentation strategies for liver tumor segmentation. Then, the second level extracts intra-slice global and local features to enhance feature representation. We propose the Res-Inception-SE Block, which combines residual connections, multi-scale Inception modules, and squeeze-and-excitation attention to capture comprehensive global and local features. Furthermore, we design a hybrid loss function combining Binary Cross Entropy (BCE) and Dice loss to address the class imbalance problem and accelerate convergence. Extensive experiments on the LiTS17 dataset demonstrate the effectiveness of our method in accuracy, efficiency, and visual results for liver tumor segmentation. Full article
(This article belongs to the Special Issue Cutting-Edge AI in Computational Bioinformatics)
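The hybrid loss can be sketched in plain Python over flattened pixel lists. The 0.5 weighting between the two terms is an assumed choice; the paper may weight them differently.

```python
import math

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy averaged over pixels (eps avoids log(0))."""
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(pred, target)) / len(pred)

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss, 1 - 2|X∩Y| / (|X| + |Y|): insensitive to the large
    background class, which is why it helps with class imbalance."""
    inter = sum(p * t for p, t in zip(pred, target))
    return 1 - (2 * inter + eps) / (sum(pred) + sum(target) + eps)

def hybrid_loss(pred, target, w=0.5):
    """Weighted BCE + Dice combination (w = 0.5 assumed)."""
    return w * bce(pred, target) + (1 - w) * dice_loss(pred, target)
```

BCE supplies smooth per-pixel gradients for fast convergence, while the Dice term keeps small tumor regions from being swamped by background pixels.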

23 pages, 1474 KB  
Article
Cumulative Prospect Theory-Driven Pigeon-Inspired Optimization for UAV Swarm Dynamic Decision-Making
by Yalan Peng and Mengzhen Huo
Drones 2025, 9(7), 478; https://doi.org/10.3390/drones9070478 - 6 Jul 2025
Viewed by 520
Abstract
To address the dynamic decision-making and control problem in unmanned aerial vehicle (UAV) swarms, this paper proposes a cumulative prospect theory-driven pigeon-inspired optimization (CPT-PIO) algorithm. Gray relational analysis and information entropy theory are integrated into cumulative prospect theory (CPT), constructing a prospect value model for Pareto solutions by setting reference points, defining value functions, and determining attribute weights. This prospect value is used to evaluate the quality of each Pareto solution and serves as the fitness function in the pigeon-inspired optimization (PIO) algorithm to guide its evolutionary process. Furthermore, incorporating individual and swarm situation assessment methods, the situation assessment model is constructed and the information entropy theory is employed to ascertain the weight of each assessment index. Finally, the reverse search mechanism and competitive learning mechanism are introduced into the standard PIO to prevent premature convergence and enhance the population’s exploration capability. Simulation results demonstrate that the proposed CPT-PIO algorithm significantly outperforms two novel multi-objective optimization algorithms in terms of search performance and solution quality, yielding higher-quality Pareto solutions for dynamic UAV swarm decision-making. Full article
(This article belongs to the Special Issue Biological UAV Swarm Control)

36 pages, 2046 KB  
Article
A Hybrid Multi-Strategy Optimization Metaheuristic Algorithm for Multi-Level Thresholding Color Image Segmentation
by Amir Seyyedabbasi
Appl. Sci. 2025, 15(13), 7255; https://doi.org/10.3390/app15137255 - 27 Jun 2025
Viewed by 393
Abstract
Hybrid metaheuristic algorithms have been widely used to solve global optimization problems, making the concept of hybridization increasingly important. This study proposes a new hybrid multi-strategy metaheuristic algorithm named COSGO, which combines the strengths of grey wolf optimization (GWO) and sand cat swarm optimization (SCSO) to effectively address global optimization tasks. Additionally, a chaotic opposition-based learning strategy is incorporated to enhance the efficiency and global search capability of the algorithm. One of the main challenges in metaheuristic algorithms is premature convergence or getting trapped in local optima. To overcome this, the proposed strategy is designed to improve exploration and help the algorithm escape local minima. As a real-world application, multi-level thresholding for color image segmentation—a well-known problem in image processing—is studied. The COSGO algorithm is applied using two objective functions, Otsu’s method and Kapur’s entropy, to determine optimal multi-level thresholds. Experiments are conducted on 10 images from the widely used BSD500 dataset. The results show that the COSGO algorithm achieves competitive performance compared to other state-of-the-art algorithms. To further evaluate its effectiveness, the CEC2017 benchmark functions are employed, and a Friedman ranking test is used to statistically analyze the results. Full article
(This article belongs to the Topic Color Image Processing: Models and Methods (CIP: MM))
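Kapur's entropy objective for a single threshold can be sketched as below; the histogram values are invented, and the exhaustive search here stands in for the COSGO metaheuristic, which the paper uses because multi-level threshold spaces are too large to enumerate.

```python
import math

def kapur_entropy(hist, t):
    """Sum of the entropies of the two classes a threshold t induces on a
    normalized histogram; Kapur's method picks the t maximizing this sum."""
    total = sum(hist)
    p = [h / total for h in hist]
    w0 = sum(p[:t]) or 1e-12  # class probabilities (guard against 0)
    w1 = sum(p[t:]) or 1e-12
    h0 = -sum((pi / w0) * math.log(pi / w0) for pi in p[:t] if pi > 0)
    h1 = -sum((pi / w1) * math.log(pi / w1) for pi in p[t:] if pi > 0)
    return h0 + h1

def best_threshold(hist):
    """Exhaustive single-threshold search (illustration only)."""
    return max(range(1, len(hist)), key=lambda t: kapur_entropy(hist, t))
```

For a bimodal histogram, the maximizing threshold falls in the gap between the two modes; COSGO searches for vectors of several such thresholds simultaneously.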
