Search Results (658)

Search Parameters:
Keywords = general natural metric

30 pages, 4444 KiB  
Article
Unveiling the Potential of Novel Ternary Chalcogenide SrHfSe₃ for Eco-Friendly, Self-Powered, Near-Infrared Photodetectors: A SCAPS-1D Simulation Study
by Salah Abdo, Ambali Alade Odebowale, Amer Abdulghani, Khalil As’ham, Sanjida Akter, Haroldo Hattori, Nicholas Kanizaj and Andrey E. Miroshnichenko
Sci 2025, 7(3), 113; https://doi.org/10.3390/sci7030113 - 6 Aug 2025
Abstract
Ternary chalcogenide-based sulfide materials with distorted morphologies such as BaZrS₃, CaZrS₃, and SrZrS₃ have recently gained much attention in optoelectronics and photovoltaics due to their high structural and thermal stability and compatibility with low-cost, earth-abundant synthesis routes. However, their relatively large bandgaps often limit their suitability for near-infrared (NIR) photodetectors. Here, we conducted a comprehensive investigation of SrHfSe₃, a ternary chalcogenide with an orthorhombic crystal structure and distinctive needle-like morphology, as a promising candidate for NIR photodetection. SrHfSe₃ exhibits a direct bandgap of 1.02 eV, placing it well within the NIR range. Its robust structure, high-temperature stability, phase stability, and natural abundance make it a compelling material for next-generation, self-powered NIR photodetectors. An in-depth analysis of the SrHfSe₃-based photodetector was performed using SCAPS-1D simulations, focusing on key performance metrics such as J–V behavior, photoresponsivity, and specific detectivity. Device optimization was achieved by systematically varying each layer's thickness, doping concentration, and defect density. Additionally, the influence of interface defects, absorber bandgap, and operating temperature was assessed to enhance the photoresponse. Under optimal conditions, the device achieved a short-circuit current density (Jsc) of 45.88 mA/cm², an open-circuit voltage (Voc) of 0.7152 V, a peak photoresponsivity of 0.85 A W⁻¹, and a detectivity of 2.26 × 10¹⁴ Jones at 1100 nm. A broad spectral response spanning 700–1200 nm confirms its efficacy in the NIR region. These results position SrHfSe₃ as a strong contender for future NIR photodetectors and provide a foundation for experimental validation in advanced optoelectronic applications. Full article
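
For orientation, the two figures of merit quoted above (photoresponsivity and specific detectivity) are tied together by standard definitions. The sketch below computes them for placeholder values under the usual shot-noise-limited assumption; it is not a reproduction of the paper's SCAPS-1D results.

```python
import numpy as np

# Standard photodetector figures of merit (shot-noise-limited case). The input
# values are placeholders chosen for illustration, not the SCAPS-1D results
# reported in the paper.
q = 1.602e-19  # elementary charge (C)

def responsivity(j_photo, p_incident):
    """Responsivity R = J_ph / P_in, in A/W."""
    return j_photo / p_incident

def detectivity(R, j_dark):
    """Specific detectivity D* = R / sqrt(2 q J_dark), in Jones (cm·Hz^0.5/W)."""
    return R / np.sqrt(2.0 * q * j_dark)

j_photo = 45.88e-3   # photocurrent density, A/cm^2 (placeholder)
p_in = 0.054         # incident optical power density, W/cm^2 (placeholder)
j_dark = 1e-12       # dark current density, A/cm^2 (placeholder)

R = responsivity(j_photo, p_in)
print(f"R  = {R:.2f} A/W")
print(f"D* = {detectivity(R, j_dark):.2e} Jones")
```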

50 pages, 10020 KiB  
Article
A Bio-Inspired Adaptive Probability IVYPSO Algorithm with Adaptive Strategy for Backpropagation Neural Network Optimization in Predicting High-Performance Concrete Strength
by Kaifan Zhang, Xiangyu Li, Songsong Zhang and Shuo Zhang
Biomimetics 2025, 10(8), 515; https://doi.org/10.3390/biomimetics10080515 - 6 Aug 2025
Abstract
Accurately predicting the compressive strength of high-performance concrete (HPC) is critical for ensuring structural integrity and promoting sustainable construction practices. However, HPC exhibits highly complex, nonlinear, and multi-factorial interactions among its constituents (such as cement, aggregates, admixtures, and curing conditions), which pose significant challenges to conventional predictive models. Traditional approaches often fail to adequately capture these intricate relationships, resulting in limited prediction accuracy and poor generalization. Moreover, the high dimensionality and noisy nature of HPC mix data increase the risk of model overfitting and convergence to local optima during optimization. To address these challenges, this study proposes a novel bio-inspired hybrid optimization model, AP-IVYPSO-BP, which is specifically designed to handle the nonlinear and complex nature of HPC strength prediction. The model integrates the ivy algorithm (IVYA) with particle swarm optimization (PSO) and incorporates an adaptive probability strategy based on fitness improvement to dynamically balance global exploration and local exploitation. This design effectively mitigates common issues such as premature convergence, slow convergence speed, and weak robustness in traditional metaheuristic algorithms when applied to complex engineering data. The AP-IVYPSO is employed to optimize the weights and biases of a backpropagation neural network (BPNN), thereby enhancing its predictive accuracy and robustness. The model was trained and validated on a dataset comprising 1030 HPC mix samples. Experimental results show that AP-IVYPSO-BP significantly outperforms traditional BPNN, PSO-BP, GA-BP, and IVY-BP models across multiple evaluation metrics. Specifically, it achieved an R² of 0.9542, MAE of 3.0404, and RMSE of 3.7991 on the test set, demonstrating its high accuracy and reliability. These results confirm the potential of the proposed bio-inspired model in the prediction and optimization of concrete strength, offering practical value in civil engineering and materials design. Full article
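
The quoted R², MAE, and RMSE are the usual regression metrics. A minimal numpy sketch of how they are computed is shown below; the arrays are placeholder predictions, not the 1030-sample HPC dataset used in the paper.

```python
import numpy as np

# The three evaluation metrics reported for the strength predictor.
# y_true / y_pred are placeholder values for illustration only.
def r2_score(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

y_true = np.array([42.1, 55.3, 38.7, 61.0, 47.5])  # compressive strength, MPa
y_pred = np.array([40.8, 57.0, 39.5, 59.2, 48.1])
print(r2_score(y_true, y_pred), mae(y_true, y_pred), rmse(y_true, y_pred))
```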

29 pages, 3266 KiB  
Article
Wavelet Multiresolution Analysis-Based Takagi–Sugeno–Kang Model, with a Projection Step and Surrogate Feature Selection for Spectral Wave Height Prediction
by Panagiotis Korkidis and Anastasios Dounis
Mathematics 2025, 13(15), 2517; https://doi.org/10.3390/math13152517 - 5 Aug 2025
Abstract
The accurate prediction of significant wave height presents a complex yet vital challenge in the fields of ocean engineering. This capability is essential for disaster prevention, fostering sustainable development and deepening our understanding of various scientific phenomena. We explore the development of a comprehensive predictive methodology for wave height prediction by integrating novel Takagi–Sugeno–Kang fuzzy models within a multiresolution analysis framework. The multiresolution analysis emerges via wavelets, since they are prominent models characterised by their inherent multiresolution nature. The maximal overlap discrete wavelet transform is utilised to generate the detail and resolution components of the time series, resulting from this multiresolution analysis. The novelty of the proposed model lies on its hybrid training approach, which combines least squares with AdaBound, a gradient-based algorithm derived from the deep learning literature. Significant wave height prediction is studied as a time series problem, hence, the appropriate inputs to the model are selected by developing a surrogate-based wrapped algorithm. The developed wrapper-based algorithm, employs Bayesian optimisation to deliver a fast and accurate method for feature selection. In addition, we introduce a projection step, to further refine the approximation capabilities of the resulting predictive system. The proposed methodology is applied to a real-world time series pertaining to spectral wave height and obtained from the Poseidon operational oceanography system at the Institute of Oceanography, part of the Hellenic Center for Marine Research. Numerical studies showcase a high degree of approximation performance. The predictive scheme with the projection step yields a coefficient of determination of 0.9991, indicating a high level of accuracy. Furthermore, it outperforms the second-best comparative model by approximately 49% in terms of root mean squared error. Comparative evaluations against powerful artificial intelligence models, using regression metrics and hypothesis test, underscore the effectiveness of the proposed methodology. Full article
(This article belongs to the Special Issue Applications of Mathematics in Neural Networks and Machine Learning)
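
The decomposition step described in the abstract can be illustrated with PyWavelets. The sketch below uses the stationary (undecimated) wavelet transform as a stand-in for the maximal overlap DWT, since PyWavelets does not expose MODWT directly, and runs on a synthetic series rather than the Poseidon wave-height data.

```python
import numpy as np
import pywt

# Shift-invariant multiresolution decomposition of a (synthetic) wave-height
# series. pywt.swt is used here as a stand-in for the maximal overlap DWT
# named in the abstract; both are redundant, shift-invariant transforms.
t = np.linspace(0, 8 * np.pi, 512)                        # length divisible by 2**level
hs = 1.5 + 0.8 * np.sin(t) + 0.2 * np.random.randn(512)   # synthetic significant wave height

level = 3
coeffs = pywt.swt(hs, wavelet="db4", level=level)          # [(cA3, cD3), ..., (cA1, cD1)]

for lvl, (approx, detail) in zip(range(level, 0, -1), coeffs):
    print(f"level {lvl}: approximation var={approx.var():.3f}, detail var={detail.var():.3f}")
# Each approximation/detail series would then feed a separate TSK fuzzy sub-model.
```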

21 pages, 6628 KiB  
Article
MCA-GAN: A Multi-Scale Contextual Attention GAN for Satellite Remote-Sensing Image Dehazing
by Sufen Zhang, Yongcheng Zhang, Zhaofeng Yu, Shaohua Yang, Huifeng Kang and Jingman Xu
Electronics 2025, 14(15), 3099; https://doi.org/10.3390/electronics14153099 - 3 Aug 2025
Abstract
With the growing demand for ecological monitoring and geological exploration, high-quality satellite remote-sensing imagery has become indispensable for accurate information extraction and automated analysis. However, haze reduces image contrast and sharpness, significantly impairing quality. Existing dehazing methods, primarily designed for natural images, struggle with remote-sensing images due to their complex imaging conditions and scale diversity. Given this, we propose a novel Multi-Scale Contextual Attention Generative Adversarial Network (MCA-GAN), specifically designed for satellite image dehazing. Our method integrates multi-scale feature extraction with global contextual guidance to enhance the network’s comprehension of complex remote-sensing scenes and its sensitivity to fine details. MCA-GAN incorporates two self-designed key modules: (1) a Multi-Scale Feature Aggregation Block, which employs multi-directional global pooling and multi-scale convolutional branches to bolster the model’s ability to capture land-cover details across varying spatial scales; (2) a Dynamic Contextual Attention Block, which uses a gated mechanism to fuse three-dimensional attention weights with contextual cues, thereby preserving global structural and chromatic consistency while retaining intricate local textures. Extensive qualitative and quantitative experiments on public benchmarks demonstrate that MCA-GAN outperforms other existing methods in both visual fidelity and objective metrics, offering a robust and practical solution for remote-sensing image dehazing. Full article
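
As a rough illustration of the multi-scale aggregation idea sketched in the abstract (parallel convolution branches at different receptive fields plus a global pooling branch), a generic PyTorch block is shown below; it illustrates the pattern, not the MCA-GAN module defined in the paper.

```python
import torch
import torch.nn as nn

# Generic multi-scale feature aggregation block: parallel convolutions with
# different receptive fields plus a global-context branch, fused by a 1x1
# convolution and a residual connection. Illustrative only.
class MultiScaleBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        self.branch7 = nn.Conv2d(channels, channels, kernel_size=7, padding=3)
        self.pool = nn.AdaptiveAvgPool2d(1)        # global context branch
        self.fuse = nn.Conv2d(channels * 4, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        ctx = self.pool(x).expand_as(x)            # broadcast global context
        y = torch.cat([self.branch3(x), self.branch5(x), self.branch7(x), ctx], dim=1)
        return self.act(self.fuse(y)) + x          # residual connection

x = torch.randn(1, 32, 64, 64)                     # dummy hazy-image feature map
print(MultiScaleBlock(32)(x).shape)                # torch.Size([1, 32, 64, 64])
```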

17 pages, 310 KiB  
Article
Statistical Entropy Based on the Generalized-Uncertainty-Principle-Induced Effective Metric
by Soon-Tae Hong, Yong-Wan Kim and Young-Jai Park
Universe 2025, 11(8), 256; https://doi.org/10.3390/universe11080256 - 2 Aug 2025
Abstract
We investigate the statistical entropy of black holes within the framework of the generalized uncertainty principle (GUP) by employing effective metrics that incorporate leading-order and all-order quantum gravitational corrections. We construct three distinct effective metrics induced by the GUP, which are derived from the GUP-corrected temperature, entropy, and all-order GUP corrections, and analyze their impact on black hole entropy using ’t Hooft’s brick wall method. Our results show that, despite the differences in the effective metrics and the corresponding ultraviolet cutoffs, the statistical entropy consistently satisfies the Bekenstein–Hawking area law when expressed in terms of an invariant (coordinate-independent) distance near the horizon. Furthermore, we demonstrate that the GUP naturally regularizes the ultraviolet divergence in the density of states, eliminating the need for artificial cutoffs and yielding finite entropy even when counting quantum states only in the vicinity of the event horizon. These findings highlight the universality and robustness of the area law under GUP modifications and provide new insights into the interplay between quantum gravity effects and black hole thermodynamics. Full article
(This article belongs to the Collection Open Questions in Black Hole Physics)
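
For reference, the leading-order GUP and the Bekenstein–Hawking area law referred to in the abstract take the standard forms below; conventions for the deformation parameter β vary between papers.

```latex
% Generic leading-order GUP and the Bekenstein--Hawking area law; conventions
% (and the value of the deformation parameter \beta) differ between papers.
\[
  \Delta x \,\Delta p \;\gtrsim\; \frac{\hbar}{2}\left(1 + \beta\,\ell_p^{2}\,\frac{(\Delta p)^{2}}{\hbar^{2}}\right),
  \qquad
  S_{\mathrm{BH}} \;=\; \frac{k_B\,A}{4\,\ell_p^{2}} .
\]
```
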
24 pages, 9086 KiB  
Article
Linking Optimization Success and Stability of Finite-Time Thermodynamics Heat Engines
by Julian Gonzalez-Ayala, David Pérez-Gallego, Alejandro Medina, José M. Mateos Roco, Antonio Calvo Hernández, Santiago Velasco and Fernando Angulo-Brown
Entropy 2025, 27(8), 822; https://doi.org/10.3390/e27080822 - 2 Aug 2025
Abstract
In celebration of 50 years of the endoreversible Carnot-like heat engine, this work aims to link the thermodynamic success of the irreversible Carnot-like heat engine with the stability dynamics of the engine. This region of success is defined by two extreme configurations in the interaction between heat reservoirs and the working fluid. The first corresponds to a fully reversible limit, and the second one is the fully dissipative limit; in between both limits, the heat exchange between reservoirs and working fluid produces irreversibilities and entropy generation. The distance between these two extremal configurations is minimized, independently of the chosen metric, in the state where the efficiency is half the Carnot efficiency. This boundary encloses the region where irreversibilities dominate or the reversible behavior dominates (region of success). A general stability dynamics is proposed based on the endoreversible nature of the model and the operation parameter in charge of defining the operation regime. For this purpose, the maximum ecological and maximum Omega regimes are considered. The results show that for single perturbations, the dynamics rapidly directs the system towards the success region, and under random perturbations producing stochastic trajectories, the system remains always in this region. The results are contrasted with the case in which no restitution dynamics exist. It is shown that stability allows the system to depart from the original steady state to other states that enhance the system’s performance, which could favor the evolution and specialization of systems in nature and in artificial devices. Full article
(This article belongs to the Special Issue The First Half Century of Finite-Time Thermodynamics)
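
For reference, the efficiencies mentioned in the abstract take the standard forms below: the Carnot limit, its half-value marking the boundary of the region of success, and the Curzon–Ahlborn efficiency typical of endoreversible engines at maximum power.

```latex
% Standard textbook expressions, not results derived in the paper.
\[
  \eta_C = 1 - \frac{T_c}{T_h}, \qquad
  \eta_{\mathrm{boundary}} = \frac{\eta_C}{2}, \qquad
  \eta_{CA} = 1 - \sqrt{\frac{T_c}{T_h}} .
\]
```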

18 pages, 7213 KiB  
Article
DFCNet: Dual-Stage Frequency-Domain Calibration Network for Low-Light Image Enhancement
by Hui Zhou, Jun Li, Yaming Mao, Lu Liu and Yiyang Lu
J. Imaging 2025, 11(8), 253; https://doi.org/10.3390/jimaging11080253 - 28 Jul 2025
Abstract
Imaging technologies are widely used in surveillance, medical diagnostics, and other critical applications. However, under low-light conditions, captured images often suffer from insufficient brightness, blurred details, and excessive noise, degrading quality and hindering downstream tasks. Conventional low-light image enhancement (LLIE) methods not only require annotated data but also often involve heavy models with high computational costs, making them unsuitable for real-time processing. To tackle these challenges, a lightweight and unsupervised LLIE method utilizing a dual-stage frequency-domain calibration network (DFCNet) is proposed. In the first stage, the input image undergoes the preliminary feature modulation (PFM) module to guide the illumination estimation (IE) module in generating a more accurate illumination map. The final enhanced image is obtained by dividing the input by the estimated illumination map. The second stage is used only during training. It applies a frequency-domain residual calibration (FRC) module to the first-stage output, generating a calibration term that is added to the original input to darken dark regions and brighten bright areas. This updated input is then fed back to the PFM and IE modules for parameter optimization. Extensive experiments on benchmark datasets demonstrate that DFCNet achieves superior performance across multiple image quality metrics while delivering visually clearer and more natural results. Full article
(This article belongs to the Section Image and Video Processing)
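
The enhancement step quoted in the abstract (dividing the input by the estimated illumination map) is a Retinex-style division. A minimal numpy sketch follows, with a flat placeholder standing in for the output of the paper's PFM/IE modules.

```python
import numpy as np

# Retinex-style enhancement: enhanced = low-light input / estimated illumination.
# The illumination map here is a flat placeholder, not the IE module's output.
def enhance(low_light: np.ndarray, illumination: np.ndarray, eps: float = 1e-4) -> np.ndarray:
    out = low_light / np.clip(illumination, eps, 1.0)   # avoid division by zero
    return np.clip(out, 0.0, 1.0)

low = np.random.rand(64, 64, 3) * 0.2      # dark dummy image in [0, 1]
illum = np.full((64, 64, 1), 0.25)         # placeholder illumination map
print(enhance(low, illum).mean())          # noticeably brighter than the input
```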

50 pages, 9419 KiB  
Review
A Survey of Loss Functions in Deep Learning
by Caiyi Li, Kaishuai Liu and Shuai Liu
Mathematics 2025, 13(15), 2417; https://doi.org/10.3390/math13152417 - 27 Jul 2025
Abstract
Deep learning (DL), as a cutting-edge technology in artificial intelligence, has significantly impacted fields such as computer vision and natural language processing. Loss function determines the convergence speed and accuracy of the DL model and has a crucial impact on algorithm quality and model performance. However, most of the existing studies focus on the improvement of specific problems of loss function, which lack a systematic summary and comparison, especially in computer vision and natural language processing tasks. Therefore, this paper reclassifies and summarizes the loss functions in DL and proposes a new category of metric loss. Furthermore, this paper conducts a fine-grained division of regression loss, classification loss, and metric loss, elaborating on the existing problems and improvements. Finally, the new trend of compound loss and generative loss is anticipated. The proposed paper provides a new perspective for loss function division and a systematic reference for researchers in the DL field. Full article
(This article belongs to the Special Issue Advances in Applied Mathematics in Computer Vision)
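
As a quick illustration of the three families the survey distinguishes, the sketch below gives one toy loss from each: MSE (regression), cross-entropy (classification), and triplet loss (metric). These are textbook forms, not definitions taken from the survey.

```python
import numpy as np

# One toy example per loss family: regression (MSE), classification
# (cross-entropy), metric learning (triplet). Plain-numpy versions.
def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

def cross_entropy(p_true, p_pred, eps=1e-12):
    return -np.sum(p_true * np.log(p_pred + eps))

def triplet(anchor, positive, negative, margin=1.0):
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

print(mse(np.array([1.0, 2.0]), np.array([1.1, 1.8])))
print(cross_entropy(np.array([0, 1, 0]), np.array([0.2, 0.7, 0.1])))
print(triplet(np.zeros(4), np.ones(4) * 0.1, np.ones(4)))
```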

18 pages, 516 KiB  
Article
A Nested Named Entity Recognition Model Robust in Few-Shot Learning Environments Using Label Description Information
by Hyunsun Hwang, Youngjun Jung, Changki Lee and Wooyoung Go
Appl. Sci. 2025, 15(15), 8255; https://doi.org/10.3390/app15158255 - 24 Jul 2025
Abstract
Nested named entity recognition (NER) is a task that identifies hierarchically structured entities, where one entity can contain other entities within its span. This study introduces a nested NER model for few-shot learning environments, addressing the difficulty of building extensive datasets for general named entities. We enhance the Biaffine nested NER model by modifying its output layer to incorporate label semantic information through a novel label description embedding (LDE) approach, improving performance with limited training data. Our method replaces the traditional biaffine classifier with a label attention mechanism that leverages comprehensive natural language descriptions of entity types, encoded using BERT to capture rich semantic relationships between labels and input spans. We conducted comprehensive experiments on four benchmark datasets: GENIA (nested NER), ACE 2004 (nested NER), ACE 2005 (nested NER), and CoNLL 2003 English (flat NER). Performance was evaluated across multiple few-shot scenarios (1-shot, 5-shot, 10-shot, and 20-shot) using F1-measure as the primary metric, with five different random seeds to ensure robust evaluation. We compared our approach against strong baselines including BERT-LSTM-CRF with nested tags, the original Biaffine model, and recent few-shot NER methods (FewNER, FIT, LPNER, SpanNER). Results demonstrate significant improvements across all few-shot scenarios. On GENIA, our LDE model achieves 45.07% F1 in five-shot learning compared to 30.74% for the baseline Biaffine model (46.4% relative improvement). On ACE 2005, we obtain 44.24% vs. 32.38% F1 in five-shot scenarios (36.6% relative improvement). The model shows consistent gains in 10-shot (57.19% vs. 49.50% on ACE 2005) and 20-shot settings (64.50% vs. 58.21% on ACE 2005). Ablation studies confirm that semantic information from label descriptions is the key factor enabling robust few-shot performance. Transfer learning experiments demonstrate the model’s ability to leverage knowledge from related domains. Our findings suggest that incorporating label semantic information can substantially enhance NER models in low-resource settings, opening new possibilities for applying NER in specialized domains or languages with limited annotated data. Full article
(This article belongs to the Special Issue Applications of Natural Language Processing to Data Science)
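
A rough sketch of the label-attention idea described in the abstract is shown below: candidate spans are scored against embeddings of natural-language label descriptions instead of through a biaffine classifier. The random arrays are placeholders standing in for BERT-encoded spans and descriptions.

```python
import numpy as np

# Score candidate spans against label-description embeddings (placeholders for
# BERT-encoded descriptions such as "a protein name") via a softmax over labels.
rng = np.random.default_rng(0)
num_spans, num_labels, dim = 6, 4, 768

span_repr = rng.normal(size=(num_spans, dim))     # candidate span representations
label_desc = rng.normal(size=(num_labels, dim))   # encoded label descriptions

scores = span_repr @ label_desc.T                 # similarity of each span to each label
scores -= scores.max(axis=1, keepdims=True)       # numerical stability
probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
print(probs.argmax(axis=1))                       # predicted label index per span
```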

21 pages, 8405 KiB  
Article
Distinct Mitochondrial DNA Deletion Profiles in Pediatric B- and T-ALL During Diagnosis, Remission, and Relapse
by Hesamedin Hakimjavadi, Elizabeth Eom, Eirini Christodoulou, Brooke E. Hjelm, Audrey A. Omidsalar, Dejerianne Ostrow, Jaclyn A. Biegel and Xiaowu Gai
Int. J. Mol. Sci. 2025, 26(15), 7117; https://doi.org/10.3390/ijms26157117 - 23 Jul 2025
Abstract
Mitochondria are critical for cellular energy, and while large deletions in their genome (mtDNA) are linked to primary mitochondrial diseases, their significance in cancer is less understood. Given cancer’s metabolic nature, investigating mtDNA deletions in tumors at various stages could provide insights into disease origins and treatment responses. In this study, we analyzed 148 bone marrow samples from 129 pediatric patients with B-cell (B-ALL) and T-cell (T-ALL) acute lymphoblastic leukemia at diagnosis, remission, and relapse using long-range PCR, next-generation sequencing, and the Splice-Break2 pipeline. Both T-ALL and B-ALL exhibited significantly more mtDNA deletions than did the controls, with T-ALL showing a ~100-fold increase and B-ALL a ~15-fold increase. The T-ALL samples also exhibited larger deletions (median size > 2000 bp) and greater heterogeneity, suggesting increased mitochondrial instability. Clustering analysis revealed distinct deletion profiles between ALL subtypes and across disease stages. Notably, large clonal deletions were detected in some B-ALL remission samples, including one affecting up to 88% of mtDNA molecules, which points toward treatment-driven selection or toxicity. A multivariate analysis confirmed that disease type, timepoint, and WHO subtype significantly influenced mtDNA deletion metrics, while age and gender did not. These findings suggest that mtDNA deletion profiling could serve as a biomarker for pediatric ALL and may indicate mitochondrial toxicity contributing to late effects in survivors. Full article
(This article belongs to the Special Issue Mitochondrial Function in Human Health and Disease: 2nd Edition)

26 pages, 663 KiB  
Article
An Information-Theoretic Framework for Retrieval-Augmented Generation Systems
by Semih Yumuşak
Electronics 2025, 14(15), 2925; https://doi.org/10.3390/electronics14152925 - 22 Jul 2025
Abstract
Retrieval-Augmented Generation (RAG) systems have emerged as a critical approach for enhancing large language models with external knowledge, yet the field lacks systematic theoretical analysis for understanding their fundamental characteristics and optimization principles. A novel information-theoretic approach for analyzing and optimizing RAG systems is introduced in this paper by modeling them as cascading information channel systems where each component (query encoding, retrieval, context integration, and generation) functions as a distinct information-theoretic channel with measurable capacity. Following established practices in information theory research, theoretical insights are evaluated through systematic experimentation on controlled synthetic datasets that enable precise manipulation of schema entropy and isolation of information flow dynamics. Through this controlled experimental approach, the following key theoretical insights are supported: (1) RAG performance is bounded by the minimum capacity across constituent channels, (2) the retrieval channel represents the primary information bottleneck, (3) errors propagate through channel-dependent mechanisms with specific interaction patterns, and (4) retrieval capacity is fundamentally limited by the minimum of embedding dimension and schema entropy. Both quantitative metrics for evaluating RAG systems and practical design principles for optimization are provided by the proposed approach. Retrieval improvements yield 58–85% performance gains and generation improvements yield 58–110% gains, substantially higher than context integration improvements (∼9%) and query encoding modifications, as shown by experimental results on controlled synthetic environments, supporting the theoretical approach. A systematic theoretical analysis for understanding RAG system dynamics is provided by this work, with real-world validation and practical implementation refinements representing natural next phases for this research. Full article
(This article belongs to the Special Issue Advanced Natural Language Processing Technology and Applications)
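
The two bounds stated in the abstract (end-to-end performance limited by the weakest channel in the cascade, and retrieval capacity limited by the minimum of embedding dimension and schema entropy) can be illustrated with a toy calculation; the capacity figures below are placeholders, not measurements from the paper.

```python
# Toy illustration of the cascade bound: end-to-end RAG capacity is limited by
# the weakest channel, and the retrieval channel is itself bounded by
# min(embedding dimension, schema entropy). All numbers are placeholders.
embedding_dim_bits = 10.0    # capacity proxy for the embedding space (placeholder)
schema_entropy_bits = 6.5    # entropy of the knowledge-base schema (placeholder)

channel_capacity = {
    "query_encoding":      8.0,
    "retrieval":           min(embedding_dim_bits, schema_entropy_bits),
    "context_integration": 9.0,
    "generation":          7.5,
}

bottleneck = min(channel_capacity, key=channel_capacity.get)
print(f"end-to-end bound: {channel_capacity[bottleneck]:.1f} bits (bottleneck: {bottleneck})")
```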

31 pages, 23687 KiB  
Article
Spatiotemporal Dynamics of Ecosystem Services and Human Well-Being in China’s Karst Regions: An Integrated Carbon Flow-Based Assessment
by Yinuo Zou, Yuefeng Lyu, Guan Li, Yanmei Ye and Cifang Wu
Land 2025, 14(8), 1506; https://doi.org/10.3390/land14081506 - 22 Jul 2025
Abstract
The relationship between ecosystem services (ESs) and human well-being (HWB) is a central issue of sustainable development. However, current research often relies on qualitative frameworks or indicator-based assessments, limiting a comprehensive understanding of the relationship between natural environment and human acquisition, which still needs to be strengthened. As an element transferred in the natural–society coupling system, carbon can assist in characterizing the dynamic interactions within coupled human–natural systems. Carbon, as a fundamental element transferred across ecological and social spheres, offers a powerful lens to characterize these linkages. This study develops and applies a novel analytical framework that integrates carbon flow as a unifying metric to quantitatively assess the spatiotemporal dynamics of the land use and land cover change (LUCC)–ESs–HWB nexus in Guizhou Province, China, from 2000 to 2020. The results show that: (1) Ecosystem services in Guizhou showed distinct trends from 2000 to 2020: supporting and regulating services declined and then recovered, and provisioning services steadily increased, while cultural services remained stable but varied across cities. (2) Human well-being generally improved over time, with health remaining stable and the HSI rising across most cities, although security levels fluctuated and remained low in some areas. (3) The contribution of ecosystem services to human well-being peaked in 2010–2015, followed by declines in central and northern regions, while southern and western areas maintained or improved their levels. (4) Supporting and regulating services were positively correlated with HWB security, while cultural services showed mixed effects, with strong synergies between culture and health in cities like Liupanshui and Qiandongnan. Overall, this study quantified the coupled dynamics between ecosystem services and human well-being through a carbon flow framework, which not only offers a unified metric for cross-dimensional analysis but also reduces subjective bias in evaluation. This integrated approach provides critical insights for crafting spatially explicit land management policies in Guizhou and offers a replicable methodology for exploring sustainable development pathways in other ecologically fragile karst regions worldwide. Compared with conventional ecosystem service frameworks, the carbon flow approach provides a process-based, dynamic mediator that quantifies biogeochemical linkages in LUCC–ESs–HWB systems, which is particularly important in fragile karst regions. However, we acknowledge that further empirical comparison with traditional ESs metrics could strengthen the framework’s generalizability. Full article
(This article belongs to the Special Issue Advances in Land Consolidation and Land Ecology (Second Edition))

23 pages, 556 KiB  
Review
Evolving Wormholes in a Cosmological Background
by Mahdi Kord Zangeneh and Francisco S. N. Lobo
Universe 2025, 11(7), 236; https://doi.org/10.3390/universe11070236 - 19 Jul 2025
Abstract
Wormholes are non-trivial topological structures that arise as exact solutions to Einstein’s field equations, theoretically connecting distinct regions of spacetime via a throat-like geometry. While static traversable wormholes necessarily require exotic matter that violates the classical energy conditions, subsequent studies have sought to minimize such violations by introducing time-dependent geometries embedded within cosmological backgrounds. This review provides a comprehensive survey of evolving wormhole solutions, emphasizing their formulation within both general relativity and alternative theories of gravity. We explore key developments in the construction of non-static wormhole spacetimes, including those conformally related to static solutions, as well as dynamically evolving geometries influenced by scalar fields. Particular attention is given to the wormholes embedded into Friedmann–Lemaître–Robertson–Walker (FLRW) universes and de Sitter backgrounds, where the interplay between the cosmic expansion and wormhole dynamics is analyzed. We also examine the role of modified gravity theories, especially in hybrid metric–Palatini gravity, which enable the realization of traversable wormholes supported by effective stress–energy tensors that do not violate the null or weak energy conditions. By systematically analyzing a wide range of time-dependent wormhole solutions, this review identifies the specific geometric and physical conditions under which wormholes can evolve consistently with null and weak energy conditions. These findings clarify how such configurations can be naturally integrated into cosmological models governed by general relativity or modified gravity, thereby contributing to a deeper theoretical understanding of localized spacetime structures in an expanding universe. Full article
(This article belongs to the Special Issue Experimental and Observational Constraints on Wormhole Models)
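
For orientation, evolving-wormhole studies of this kind typically start from a line element of the generic form below, with scale factor a(t), shape function b(r), and redshift function Φ(r); this is the standard ansatz in the literature, not a particular solution from the review.

```latex
% Generic evolving-wormhole line element embedded in an FLRW-like background.
\[
  ds^{2} \;=\; -\,e^{2\Phi(r)}\,dt^{2}
  \;+\; a^{2}(t)\!\left[\frac{dr^{2}}{1 - b(r)/r}
  \;+\; r^{2}\!\left(d\theta^{2} + \sin^{2}\!\theta\, d\varphi^{2}\right)\right].
\]
```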

23 pages, 10912 KiB  
Article
ET: A Metaheuristic Optimization Algorithm for Task Mapping in Network-on-Chip
by Ke Li, Jingbo Shao and Yan Song
Electronics 2025, 14(14), 2846; https://doi.org/10.3390/electronics14142846 - 16 Jul 2025
Abstract
In Network-on-Chip (NoC) research, the task mapping problem has attracted considerable attention as a core issue influencing system performance. As an NP-hard problem, it remains challenging, and existing algorithms exhibit limitations in both mapping quality and computational efficiency. To address this, a method named ET (Enhanced Coati Optimization Algorithm) is proposed, which leverages the nature-inspired Coati Optimization Algorithm (COA) for task mapping. An incremental hill-climbing strategy is integrated to improve local search capabilities, and a dynamic mechanism for adjusting the exploration–exploitation ratio is designed to better balance global and local searches. Additionally, an initial mapping strategy based on spectral clustering is introduced, which utilizes inter-task communication strength to cluster tasks, thereby improving the quality of the initial population. To evaluate the effectiveness of the proposed algorithm, the performance of the ET algorithm is compared and analyzed against various existing algorithms in terms of communication cost, energy consumption, and latency, using both real benchmark task maps and randomly generated task maps. Experimental results demonstrate that the ET algorithm consistently outperforms the compared algorithms across all performance metrics, thereby confirming its superiority in addressing the NoC task mapping problem. Full article
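
To make the objective concrete, the toy sketch below evaluates a mapping's communication cost (traffic volume times Manhattan hop distance on a 2D mesh) and applies a single hill-climbing swap. It illustrates the problem being optimized, not the ET/COA algorithm itself.

```python
import itertools
import random

# Toy NoC task-mapping objective on a 4x4 mesh, plus one hill-climbing swap.
MESH = 4

def hops(a, b):
    """Manhattan hop distance between tiles a and b on the mesh."""
    return abs(a % MESH - b % MESH) + abs(a // MESH - b // MESH)

def comm_cost(mapping, traffic):
    """mapping[task] = tile index; traffic[(i, j)] = data volume from task i to task j."""
    return sum(vol * hops(mapping[i], mapping[j]) for (i, j), vol in traffic.items())

random.seed(1)
traffic = {(i, j): random.randint(1, 10) for i, j in itertools.permutations(range(8), 2)}
mapping = list(range(8))                    # initial placement: task k on tile k

best = comm_cost(mapping, traffic)
i, j = random.sample(range(8), 2)           # candidate swap (one hill-climbing step)
mapping[i], mapping[j] = mapping[j], mapping[i]
new = comm_cost(mapping, traffic)
if new >= best:                             # keep the swap only if it lowers the cost
    mapping[i], mapping[j] = mapping[j], mapping[i]
print(min(best, new))
```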

27 pages, 3562 KiB  
Article
Automated Test Generation and Marking Using LLMs
by Ioannis Papachristou, Grigoris Dimitroulakos and Costas Vassilakis
Electronics 2025, 14(14), 2835; https://doi.org/10.3390/electronics14142835 - 15 Jul 2025
Cited by 1
Abstract
This paper presents an innovative exam-creation and grading system powered by advanced natural language processing and local large language models. The system automatically generates clear, grammatically accurate questions from both short passages and longer documents across different languages, supports multiple formats and difficulty levels, and ensures semantic diversity while minimizing redundancy, thus maximizing the percentage of the material that is covered in the generated exam paper. For grading, it employs a semantic-similarity model to evaluate essays and open-ended responses, awards partial credit, and mitigates bias from phrasing or syntax via named entity recognition. A major advantage of the proposed approach is its ability to run entirely on standard personal computers, without specialized artificial intelligence hardware, promoting privacy and exam security while maintaining low operational and maintenance costs. Moreover, its modular architecture allows the seamless swapping of models with minimal intervention, ensuring adaptability and the easy integration of future improvements. A requirements–compliance evaluation, combined with established performance metrics, was used to review and compare two popular multilingual LLMs and monolingual alternatives, demonstrating the system’s effectiveness and flexibility. The experimental results show that the system achieves a grading accuracy within a 17% normalized error margin compared to that of human experts, with generated questions reaching up to 89.5% semantic similarity to source content. The full exam generation and grading pipeline runs efficiently on consumer-grade hardware, with average inference times under 30 s. Full article
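
A minimal sketch of the semantic-similarity grading step described in the abstract is shown below, using sentence-transformers cosine similarity; the model name and the 0–10 scaling are assumptions for illustration, not the system's actual configuration.

```python
from sentence_transformers import SentenceTransformer, util

# Grade an open-ended answer by cosine similarity to a reference answer.
# Model choice and scoring scale are assumptions, not the paper's setup.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

reference = "Photosynthesis converts light energy into chemical energy stored in glucose."
answer = "Plants use light to make glucose, storing the energy chemically."

emb = model.encode([reference, answer], convert_to_tensor=True)
similarity = util.cos_sim(emb[0], emb[1]).item()   # in [-1, 1]
score = round(max(0.0, similarity) * 10, 1)        # map to a 0-10 partial-credit scale
print(f"similarity={similarity:.2f}, awarded score={score}/10")
```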
