Search Results (3,625)

Search Parameters:
Keywords = information flow model

21 pages, 1673 KB  
Article
Emergence of the 2nd Law in an Exactly Solvable Model of a Quantum Wire
by Marco Antonio Jimenez-Valencia and Charles Allen Stafford
Entropy 2026, 28(3), 316; https://doi.org/10.3390/e28030316 - 11 Mar 2026
Abstract
As remarked by Boltzmann, the Second Law of Thermodynamics is notable for the fact that it is readily proved using elementary statistical arguments, but becomes harder and harder to verify the more precise the microscopic description of a system. In this article, we investigate one particular realization of the 2nd Law, namely Joule heating in a wire under electrical bias. We analyze the production of entropy in an exactly solvable model of a quantum wire wherein the conserved flow of entropy under unitary quantum evolution is taken into account using an exact formula for the entropy current of a system of independent quantum particles. In this exact microscopic description of the quantum dynamics, the entropy production due to Joule heating does not arise automatically. Instead, we show that the expected entropy production is realized in the limit of a large number of local measurements by a series of floating thermoelectric probes along the length of the wire, which inject entropy into the system as a result of the information obtained via their continuous measurements of the system. The decoherence resulting from inelastic processes introduced by the local measurements is essential to the phenomenon of entropy production due to Joule heating, and would be expected to arise due to inelastic scattering in real systems of interacting particles. Full article
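For reference, the entropy production from Joule heating that the probes are shown to recover reduces, in the classical limit, to dissipated power over the bath temperature, dS/dt = P/T. A minimal numeric sketch of that classical limit (not the paper's exact quantum entropy-current formula):

```python
# Entropy production rate from Joule heating in a biased wire:
# the dissipated power P = I*V is delivered to a bath at temperature T,
# so dS/dt = P / T (illustrative classical limit only).

def joule_entropy_rate(current_A, voltage_V, temperature_K):
    """Return dS/dt in W/K for power I*V dissipated at temperature T."""
    power_W = current_A * voltage_V
    return power_W / temperature_K

# Example: 1 mA across 1 mV at 300 K dissipates 1 uW.
rate = joule_entropy_rate(1e-3, 1e-3, 300.0)
```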

26 pages, 5380 KB  
Article
Analyzing Characteristics of Public Transport Complex Networks Based on Multi-Source Big Data Fusion: A Case Study of Cangzhou, China
by Linfang Zhou, Yongsheng Chen, Dongpu Ren and Qing Lan
Future Internet 2026, 18(3), 144; https://doi.org/10.3390/fi18030144 - 11 Mar 2026
Abstract
Quantitative evaluation of public transit networks (PTNs) with complex-network models informs route optimization and operational adjustments. Prior studies emphasize large cities and pay limited attention to small-sized urban systems. This study examines the bus network of Cangzhou City, Hebei Province, China, to broaden the empirical scope and characterize PTNs in smaller cities. The dataset for this study comprises route and stop records, passenger boarding logs, and bus GPS traces. We develop a general workflow for bus data cleaning and completion. To characterize the dynamic bus network and compare it with the static network, we construct a static network and Directed Weighted Dynamic Network I (DWDN I) using the L-space method, and we construct Directed Weighted Dynamic Network II (DWDN II) using the P-space method. We calculate network metrics including degree, weighted degree, clustering coefficient, path length, network diameter, network efficiency, and small-world coefficient. The principal results show that: (1) at the macroscopic level, the dynamic PTN tracks passenger demand, as the average degree, weighted average degree, and clustering coefficient fluctuate in concert with passenger flows; (2) key stations concentrate in the urban core, and stations with high weighted degree display pronounced spatial autocorrelation; (3) the exponential form of the weighted-degree distribution indicates that the examined bus network is not scale-free, while the dynamic network’s small-world coefficient exceeds that of the static network across time periods, reflecting stronger small-world characteristics. This study integrates network and spatial attributes of the PTN to offer an exploratory case for investigating public transit networks in third-tier cities. The findings can inform comparable studies and offer practical guidance for bus operators. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)
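The L-space and P-space representations named in the abstract differ only in which stop pairs receive an edge; a minimal sketch on an invented two-route network (route and stop names are hypothetical, networkx supplies the graph type):

```python
# Toy L-space / P-space construction for a bus network.
from itertools import combinations
import networkx as nx

routes = {
    "R1": ["A", "B", "C", "D"],
    "R2": ["B", "E", "C"],
}

# L-space: nodes are stops; edges join consecutive stops on a route,
# so path length counts traversed stops.
L = nx.Graph()
for stops in routes.values():
    L.add_edges_from(zip(stops, stops[1:]))

# P-space: edges join every pair of stops served by a common route,
# so path length counts the number of rides (transfers + 1).
P = nx.Graph()
for stops in routes.values():
    P.add_edges_from(combinations(stops, 2))
```

In P-space the shortest path from A to E has length 2, directly reflecting the two rides (R1 then R2) the trip requires.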

35 pages, 7787 KB  
Article
LLM-ROM: A Novel Framework for Efficient Spatiotemporal Prediction of Urban Pollutant Dispersion
by Pin Wu, Zhiyi Qin and Yiguo Yang
AI 2026, 7(3), 104; https://doi.org/10.3390/ai7030104 - 11 Mar 2026
Abstract
Deep learning-based flow field prediction for microclimate pollutant dispersion represents an emerging and promising methodology, where effectively integrating meteorological, spatial, and temporal information remains a critical challenge. To address this, we propose a novel non-intrusive reduced-order model (ROM) that synergizes a Dilated Convolutional Autoencoder (DCAE) with pre-trained large language models (LLMs). The DCAE, leveraging nonlinear mapping, was employed for extracting low-dimensional spatiotemporal flow field features. These features were then combined with textual prototypes via text embedding to enable few-shot inference using the LLM-based flow field prediction method. To optimize the utilization of pre-trained LLMs, we designed a specialized textual description template tailored for pollutant dispersion data, which enhances the contextual input of meteorological conditions to guide model predictions. Experimental validation through three-dimensional urban canyon simulations conclusively demonstrated the efficacy of the convolutional autoencoder and LLM-based framework in predicting pollutant dispersion flow fields. The proposed method exhibits remarkable transfer learning capabilities across varying street canyon geometries and meteorological conditions while delivering a 9.85× acceleration in prediction compared to Computational Fluid Dynamics (CFD). Full article
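As a hedged stand-in for the DCAE's feature extraction, a linear proper orthogonal decomposition (POD) via truncated SVD illustrates the reduced-order idea: compress flow snapshots to a small latent code, then decode (synthetic rank-8 data; the paper's model is a nonlinear convolutional autoencoder, not POD):

```python
# Linear POD stand-in for an autoencoder-based ROM: compress snapshots
# of a field into a low-dimensional latent space via truncated SVD.
import numpy as np

rng = np.random.default_rng(0)
# 200 time steps of a 1000-point field with true rank 8.
snapshots = rng.standard_normal((200, 8)) @ rng.standard_normal((8, 1000))

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 8                                  # retained modes (latent dimension)
codes = U[:, :r] * s[:r]               # low-dimensional features
reconstruction = codes @ Vt[:r]        # decode back to the full field

error = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
```

Because the synthetic data has rank exactly 8, eight modes reconstruct it to machine precision; real flow fields would show a gradual error-versus-modes trade-off.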

13 pages, 1762 KB  
Article
A Flexible Voltage-Regulation Method for Distribution Networks Based on Pseudo-Measurement-Assisted State Estimation
by Jiannan Qu, Xianglong Meng, Bo Zhang and Zhenhao Wang
Energies 2026, 19(6), 1405; https://doi.org/10.3390/en19061405 - 11 Mar 2026
Abstract
To address the unobservability of distribution networks caused by insufficient coverage of measurement terminals as well as communication failures and missing data, and to cope with operating-state fluctuations induced by distributed generation integration and external environmental disturbances, this paper proposes an integrated state-estimation and voltage-regulation strategy that combines distribution-network-partitioning-based optimal PMU placement with pseudo-measurement construction using power transfer distribution factors (PTDFs). First, nodal reactive-power sensitivity information is derived from the power-flow Jacobian matrix, and an improved modularity function is employed to obtain the optimal partitioning of the distribution network, based on which PMUs are deployed at partition boundary buses. Second, PTDF-based power pseudo-measurements are constructed for unobservable buses and incorporated into the measurement model via a measurement transformation; a weighted least-squares method is then adopted to achieve system-wide state estimation. Finally, the estimated voltage states are fed into flexible voltage-regulation devices to enable fast and continuous voltage adjustment across buses. Case studies on the IEEE 33-bus system demonstrate that the proposed method effectively improves voltage quality. Full article
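The weighted least-squares step the abstract describes has a closed form for a linearized measurement model, x_hat = (HᵀWH)⁻¹HᵀWz; a minimal sketch in which a low-weight row stands in for a PTDF-based pseudo-measurement (toy numbers, not the paper's IEEE 33-bus case):

```python
# Weighted least-squares (WLS) state estimation for z = H x + noise.
import numpy as np

def wls_estimate(H, z, weights):
    """Solve (H^T W H) x = H^T W z with W = diag(weights)."""
    W = np.diag(weights)
    G = H.T @ W @ H                # gain matrix
    return np.linalg.solve(G, H.T @ W @ z)

# Two states seen through three measurements; the third row mimics a
# low-confidence pseudo-measurement, so it gets a small weight.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
z = np.array([1.0, 2.0, 3.5])
x_hat = wls_estimate(H, z, weights=[100.0, 100.0, 1.0])
```

The pseudo-measurement nudges the estimate only slightly away from the high-confidence readings, which is exactly the role weighting plays in the estimator.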

25 pages, 639 KB  
Article
AI-Assisted Value Investing: A Human-in-the-Loop Framework for Prompt-Guided Financial Analysis and Decision Support
by Andrea Caridi, Marco Giovannini and Lorenzo Ricciardi Celsi
Electronics 2026, 15(6), 1155; https://doi.org/10.3390/electronics15061155 - 10 Mar 2026
Abstract
Value investing remains grounded in intrinsic value estimation, margin-of-safety reasoning, and disciplined fundamental analysis, but its practical execution is increasingly constrained by the scale, heterogeneity, and velocity of modern financial information. Recent advances in artificial intelligence (AI), particularly large language models and automated information-extraction systems, create new opportunities to accelerate financial analysis; however, their outputs remain probabilistic, context-dependent, and potentially error-prone, making governance and verification essential. This article proposes an AI-assisted value investing framework that integrates automated extraction, valuation modeling, explainability, and human-in-the-loop (HITL) supervision into a unified decision-support architecture. The framework is organized into three layers: (i) a data layer for traceable extraction and normalization of structured and unstructured financial information; (ii) a modeling layer for automated key performance indicator (KPI) computation, forecasting support, and discounted cash flow (DCF) valuation; and (iii) an explainability and governance layer for traceability, verification, model-risk control, and analyst oversight. A central contribution of the paper is the operational characterization of prompt literacy as a determinant of analytical reliability, showing that structured, context-aware prompts materially affect extraction correctness, usability, and verification effort. The framework is evaluated through a case study using Rivanna AI on three large U.S. beverage firms—namely, The Coca-Cola Company, PepsiCo, and Keurig Dr Pepper—selected as a controlled, information-rich setting for comparative analysis.
The results indicate that the proposed workflow can reduce end-to-end analysis time from approximately 25–40 h in a traditional manual process to approximately 8–12 h in an AI-assisted setting, including citation/source verification, unit and period reconciliation, and review of key valuation assumptions. The reported hour savings should be interpreted as conservative estimates from the initial deployment phase; additional efficiency gains are expected as operational maturity increases, driven by learning-economy effects. Rather than eliminating analyst effort, AI shifts it from manual information processing toward verification, adjudication, and interpretation. These results redefine the role of the financial analyst from manual data processor to reasoning architect, responsible for designing, guiding, and validating AI-assisted analytical workflows. Overall, the findings position AI not as an autonomous decision-maker, but as a governed reasoning accelerator whose effectiveness depends on structured human guidance, traceability, and disciplined validation. Full article
(This article belongs to the Special Issue Feature Papers in Artificial Intelligence)
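The DCF valuation layer described above can be sketched with a Gordon-growth terminal value; all cash flows and rates below are hypothetical, not figures from the article:

```python
# Minimal discounted cash flow (DCF) sketch: present value of forecast
# free cash flows plus a Gordon-growth terminal value.

def dcf_value(cash_flows, discount_rate, terminal_growth):
    """Enterprise value for explicit cash flows plus a terminal value."""
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(cash_flows, start=1))
    # Terminal value at the end of the explicit horizon (Gordon growth).
    terminal = cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    pv_terminal = terminal / (1 + discount_rate) ** len(cash_flows)
    return pv + pv_terminal

# Hypothetical three-year forecast, 10% discount rate, 2% perpetual growth.
value = dcf_value([100.0, 110.0, 121.0], discount_rate=0.10, terminal_growth=0.02)
```

In a HITL workflow such as the one the paper describes, the analyst's review would focus on these inputs (forecasts, discount rate, growth) rather than on the mechanical computation.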

24 pages, 7190 KB  
Article
Effects of Loading Direction on Mechanical Behavior of Core–Shell Cu-Al Nanoparticles Under Uniform Compressive Loading: A Molecular Dynamics Study
by Phillip Tomich, Michael Zawadzki and Iman Salehinia
Crystals 2026, 16(3), 186; https://doi.org/10.3390/cryst16030186 - 10 Mar 2026
Abstract
The mechanical behavior of metallic core–shell nanoparticles is critical for their use as reinforcement particles and additive manufacturing feedstocks, yet their deformation mechanisms remain incompletely understood. This study employs molecular dynamics simulations to investigate the compressive response of a Cu-core/Al-shell nanoparticle and compares it with solid Cu, solid Al, and a hollow Al shell of the same size under uniaxial loading along ⟨100⟩, ⟨110⟩, ⟨111⟩, and ⟨112⟩ directions. The single-material nanoparticles show strong anisotropy: solid Cu exhibits orientation-dependent transitions from dislocation slip to deformation twinning, while introducing a void to form a hollow Al shell reduces stiffness and strength, confines plasticity to the shell wall, and suppresses extended load-bearing twins. The Cu–Al core–shell nanoparticle combines these behaviors in an orientation-dependent manner. Under ⟨110⟩ and ⟨112⟩ loading, deformation is largely shell-dominated, whereas ⟨100⟩ and ⟨111⟩ loading more strongly activates the Cu core. Mechanistically, ⟨100⟩ is characterized by Shockley partial activity and junction/lock formation in the Al shell coupled with twinning in the Cu core; ⟨110⟩ shows primarily shell partials with limited core involvement; ⟨111⟩ promotes partial-dislocation activity in both shell and core; and ⟨112⟩ produces localized, twin-dominated bands in the Al shell with shell-thickness-dependent twin extension into the Cu core. These trends are rationalized using Schmid factor considerations for {111}⟨110⟩ slip and {111}⟨112⟩ partial/twinning shear, together with the effects of faceted free surfaces and the Cu–Al interface.
The core–shell geometry enables two concurrent interface-mediated pathways, i.e., (i) stress transfer and reduced cross-interface transmission and (ii) circumferential bypass within the shell, which together yield only slight flow-stress increases over solid Al while markedly reducing stress serrations compared with both solid Cu and solid Al. Across all orientations, the core–shell structures also exhibit delayed yielding (higher yield strain) relative to solid Cu, indicating enhanced ductility. The results provide an atomistic basis for designing Cu–Al core–shell nanoparticles for robust particle-based processing and additive manufacturing feedstock, and for informing multiscale models with mechanism-resolved, orientation-dependent inputs. Full article

25 pages, 3445 KB  
Article
Declared-Unit-Based Life-Cycle Carbon-Emission Evaluation of Machine Tools: Method and Case Study Considering Milling Cutter Coated with TiAlSiN
by Zhipeng Jiang, Youheng Shi, Xianli Liu, Guohua Zheng, Yuxin Jia and Yue Meng
Coatings 2026, 16(3), 342; https://doi.org/10.3390/coatings16030342 - 10 Mar 2026
Abstract
Aiming at the problem of non-uniform and non-universal evaluation criteria for machine tools’ carbon emissions in whole life-cycle analysis, an evaluation method for life-cycle carbon-emission analysis of machine tools based on a declared unit was put forward by analyzing and summarizing the existing carbon-emission evaluation models. A universal evaluation system for machine-tool life-cycle carbon-emission analysis was first established, and an appropriate declared unit was then selected according to industry characteristics and machine-tool types. Subsequently, an information-flow-based iERWC boundary division method was proposed to support data collection and carbon-emission calculation across five life-cycle stages. To better reflect carbon emissions in the application phase, the life-cycle inventory incorporates the use of coated cutters, including the associated cutter consumption and replacement demands. Two heavy-duty floor-milling and boring machine tools produced by Qiqihar No. 2 Machine Tool (Group) Co., Ltd. were taken as examples to calculate and evaluate the life-cycle carbon emissions of machine tools, and an uncertainty analysis was carried out; the possible influencing factors were pointed out to ensure a comprehensive carbon-emission assessment of the whole life cycle. Full article
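Declared-unit accounting of this kind reduces to activity data times emission factors, summed over life-cycle stages and normalized by the declared unit. A sketch with loudly hypothetical quantities and factors (the paper's inventory data are not reproduced here):

```python
# Declared-unit carbon accounting sketch: stage emissions are
# activity_amount * emission_factor, summed and then normalized.
# All numbers below are illustrative placeholders.

stages = {
    # stage: list of (activity_amount, emission_factor in kgCO2e per unit)
    "raw material":  [(12_000, 2.10)],   # kg steel * kgCO2e/kg
    "manufacturing": [(8_000, 0.58)],    # kWh * kgCO2e/kWh
    "transport":     [(1_500, 0.12)],    # t*km * kgCO2e/(t*km)
    "use":           [(450_000, 0.58)],  # kWh * kgCO2e/kWh
    "end of life":   [(12_000, 0.05)],   # kg * kgCO2e/kg
}

stage_emissions = {s: sum(a * f for a, f in items)
                   for s, items in stages.items()}
total_kgCO2e = sum(stage_emissions.values())

declared_units = 10_000   # e.g. machining hours over the service life
intensity = total_kgCO2e / declared_units   # kgCO2e per declared unit
```

Normalizing by a declared unit is what makes results from different machine-tool types comparable, which is the uniformity problem the abstract sets out to solve.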

43 pages, 5766 KB  
Article
Enhancing Sustainable Waste-to-Energy: A Multi-Controlled Variable Prediction Model for Municipal Solid Waste Incineration Using Shared Features and an Improved Fuzzy Neural Network
by Qiumei Cong, Jiaying Lu and Jian Tang
Sustainability 2026, 18(5), 2616; https://doi.org/10.3390/su18052616 - 7 Mar 2026
Abstract
Municipal solid waste incineration (MSWI) is a critical technology for advancing urban sustainability, contributing to improved environmental quality, optimized energy structures, and the circular economy. However, the realization of these sustainability benefits is contingent upon the stable, efficient, and low-emission operation of the incineration process. This operational stability is directly governed by several key variables, such as furnace temperature, main steam flow rate, flue gas oxygen content, and burnout point temperature. The inherent complexity of controlling these interconnected variables necessitates the development of an accurate multi-variable prediction model to ensure both energy recovery efficiency and environmental compliance, which are core pillars of sustainable waste management. Existing studies have often addressed these key controlled variables in isolation, lacking a unified modeling framework. Furthermore, they have not adequately considered how dimensional differences among these variables impact the performance evaluation of predictive models, a critical oversight for ensuring holistic process sustainability. To address these gaps and support the intelligent operation of sustainable waste-to-energy systems, this study proposes a novel multi-controlled variable modeling method based on shared features and an improved fuzzy neural network. Our integrated approach begins by calculating the Pearson correlation coefficient between each manipulated variable and each controlled variable—selected based on expert knowledge—to assess the distinguishability of operating conditions within the current dataset. Subsequently, a correlation threshold, informed by expert knowledge, is applied to identify shared features that influence multiple controlled variables simultaneously. 
Finally, we enhance the fuzzy neural network by redefining its evaluation criterion to accommodate variable dimensional differences, leading to the development of a robust multi-controlled variable prediction model. This model is designed to provide a more comprehensive and accurate basis for process control, directly contributing to improved energy efficiency and reduced environmental impact. The effectiveness of our proposed model is validated using operational data from an actual MSWI plant, demonstrating its potential to support more sustainable and economically viable waste-to-energy operations. Full article
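The shared-feature step can be sketched directly: correlate each manipulated variable with every controlled variable and keep the features whose |r| clears the threshold for more than one target (synthetic data; the 0.5 threshold stands in for the paper's expert-informed value):

```python
# Pearson-correlation shared-feature selection sketch.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 4))            # 4 manipulated variables
Y = np.column_stack([                        # 2 controlled variables
    2.0 * X[:, 0] + X[:, 1] + 0.1 * rng.standard_normal(500),
    1.5 * X[:, 0] - X[:, 2] + 0.1 * rng.standard_normal(500),
])

threshold = 0.5   # stand-in for an expert-chosen correlation threshold
r = np.array([[np.corrcoef(X[:, j], Y[:, k])[0, 1]
               for k in range(Y.shape[1])] for j in range(X.shape[1])])

# "Shared" features influence more than one controlled variable.
shared = [j for j in range(X.shape[1]) if (np.abs(r[j]) > threshold).sum() > 1]
```

Here only feature 0 drives both targets strongly, so it is the lone shared feature; in the paper these shared features feed the improved fuzzy neural network.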

18 pages, 2661 KB  
Article
Impedance Sensor Based on ZnO/Graphite Composite with 3D-Printed Housing for Ionized Ammonia Detection in Continuous Water Flow
by Jorge A. Uc-Martín and Roberto G. Ramírez-Chavarría
Chemosensors 2026, 14(3), 64; https://doi.org/10.3390/chemosensors14030064 - 6 Mar 2026
Abstract
High concentrations of ionized ammonia (NH4+) have been increasingly reported in municipal drinking water systems, posing a severe public health risk as excessive ingestion can lead to life-threatening conditions. Despite its importance, there is a significant lack of sensing technologies designed for continuous-flow monitoring outside laboratory settings, particularly those providing a robust, low-cost methodology suitable for resource-limited environments. To address these challenges, in this work, we report the development of an impedance sensor featuring a 3D-printed housing (3D-IS) for monitoring aqueous ionized ammonia (NH4+). The sensing electrodes, composed of zinc oxide and graphite, allow for the detection of concentrations 10 times lower and 60 times higher than current environmental limits. Its innovative, optimized design, analogous to that of industrial pressure gauges, highlights its potential for use in continuous water flow conditions outside the laboratory, such as water treatment plants. The level of NH4+ in water is monitored by changes in impedance magnitude, with optimal performance observed at a frequency of 100 kHz. At this frequency, the impedance magnitude decreased by nearly two orders of magnitude as the NH4+ concentration increased from 0 to 1 μM. Under these optimized conditions, the sensor exhibited a sensitivity of 2 kΩ/log(μM) and a linearity exceeding 90%. Furthermore, we propose an equivalent circuit model that accurately describes the experimental data, explaining the transduction process. We also describe, from an electrical perspective, the phenomenon of adsorption on the sensor’s transducer surface, thereby ensuring the device’s selectivity. The sensor was evaluated using dilutions of a standard ammonium solution for IC in distilled water, as well as with real groundwater samples, obtaining a correlation of ∼99.7% with ion chromatography and a limit of detection of 2 μM.
Finally, our device can provide information relatively quickly, with the added advantage of stable response under continuous-flow and real conditions, making it an attractive option for integration into a field sensor node. Full article
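A sensitivity quoted in kΩ/log(μM), as above, is the slope of a log-linear calibration curve. A sketch with hypothetical readings (not the paper's measurements) showing the fit and its inversion to estimate concentration:

```python
# Log-linear calibration sketch for an impedance sensor:
# fit |Z| against log10(concentration); the slope magnitude is the
# sensitivity in kOhm per decade. Readings below are invented.
import numpy as np

conc_uM = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
z_kohm = np.array([14.1, 12.0, 9.9, 8.1, 6.0])   # hypothetical |Z| readings

slope, intercept = np.polyfit(np.log10(conc_uM), z_kohm, 1)
sensitivity_kohm_per_decade = abs(slope)

def estimate_conc(z):
    """Invert the calibration: concentration (uM) from an |Z| reading."""
    return 10 ** ((z - intercept) / slope)
```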

28 pages, 4247 KB  
Article
BiMS-Pose: Enhancing Human Pose Estimation in Orchard Spraying Scenarios via Bidirectional Multi-Scale Collaboration
by Yuhang Ren, Zichen Yang, Hanxin Chen, Zhuochao Chen and Daojin Yao
Agriculture 2026, 16(5), 606; https://doi.org/10.3390/agriculture16050606 - 6 Mar 2026
Abstract
Most 2D human pose estimation frameworks utilize static designs for multi-scale feature fusion, where information from various scales is integrated using fixed weights. A drawback of these approaches is that they often lead to localization biases in complex scenarios. This paper addresses the issues of multi-scale feature mismatch and joint localization biases in pose estimation. From the perspective of feature processing, multi-scale weights must be adapted to the size and position of joints, while joint predictions should adhere to human anatomical constraints. Existing methods lack effective dynamic adaptation, structural constraints, and bidirectional complementarity between high-level semantics and low-level details. They often experience localization biases in occluded scenarios, and the peaks of their heatmaps demonstrate insufficient consistency with the actual positions of the joints. Through theoretical analysis, we identify the causes of performance gaps and propose directions for narrowing them. We propose Bidirectional Multi-Scale Collaborative Pose Estimation (BiMS-Pose), a framework that introduces dynamic weights to adjust feature proportions, establishes bidirectional topological constraints for joint relationships, and integrates a bidirectional attention flow. The framework filters key information from three dimensions, adjusts filtering strategies in real time, and is enhanced by heatmap optimization to improve localization accuracy. Extensive experiments conducted on COCO, MPII, and our self-built Orchard Spraying Pose Dataset (OSPD) demonstrate the effectiveness of BiMS-Pose. In general scenarios, it achieves a significant 1.2 percentage-point increase in average precision (AP) on the COCO val2017 dataset compared to ViTPose while utilizing the same backbone. 
In agricultural orchard spraying scenarios, it effectively addresses interference factors such as changes in illumination, occlusion, and varying shooting distances, achieving 75.4% average precision (AP) and 90.7% percent of correct keypoints (PCKh@0.5) on the OSPD dataset. Additionally, it maintains an average frame rate of 18.3 FPS on embedded devices, effectively meeting the requirements for real-time monitoring. This highlights the model’s potential for precise, stable, and practical human pose estimation in both general and agricultural application scenarios. Full article
(This article belongs to the Special Issue Application of Smart Technologies in Orchard Management)
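The PCKh@0.5 metric quoted above counts a predicted keypoint as correct when it lands within half the head-segment length of the ground truth; a minimal sketch of the computation:

```python
# PCKh sketch: fraction of keypoints within alpha * head_size of ground truth.
import numpy as np

def pckh(pred, gt, head_sizes, alpha=0.5):
    """pred, gt: (N, K, 2) keypoint arrays; head_sizes: (N,) per-person scale."""
    dists = np.linalg.norm(pred - gt, axis=-1)          # (N, K)
    correct = dists < alpha * head_sizes[:, None]
    return correct.mean()

# One person, three keypoints at distances 0, 0.4, and 2.0 from truth.
gt = np.zeros((1, 3, 2))
pred = np.array([[[0.0, 0.0], [0.4, 0.0], [2.0, 0.0]]])
score = pckh(pred, gt, head_sizes=np.array([1.0]))
```

With a head size of 1.0 the threshold is 0.5, so two of the three keypoints count as correct.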

24 pages, 4693 KB  
Article
A Short-Term Photovoltaic Power Prediction Based on Multidimensional Feature Fusion of Satellite Cloud Images
by Lingling Xie, Chunhui Li, Yanjing Luo and Long Li
Processes 2026, 14(5), 846; https://doi.org/10.3390/pr14050846 - 5 Mar 2026
Abstract
Clouds are a key factor affecting solar radiation, and their dynamic variations directly cause uncertainty and fluctuations in photovoltaic (PV) power output. To improve PV power prediction accuracy, this paper proposes an enhanced short-term photovoltaic power forecasting approach based on a hybrid neural network architecture using features extracted from satellite cloud images. First, a dual-layer image fusion method is developed for satellite cloud images from different wavelengths and spectral bands, effectively improving fusion accuracy. Second, texture descriptors derived from the Gray-Level Co-occurrence Matrix and multiscale information obtained via the wavelet transform are employed for feature extraction from fused images. Combined with a residual network (ResNet), an optical flow method, as well as an LSTM-based temporal modeling module, multidimensional features of the predicted cloud images are obtained. An improved Bayesian optimization (IBO) algorithm is then employed to derive the optimal fused features, thereby improving the matching between cloud image features and PV power. Third, an enhanced hybrid architecture integrating a convolutional neural network and long short-term memory units with a multi-head self-attention mechanism is developed. Numerical weather prediction (NWP) meteorological features are incorporated, and a tilted irradiance model is introduced to calculate the solar irradiance received by PV modules for use in near-term photovoltaic power forecasting. Finally, measurements collected at a photovoltaic power plant located in Hebei Province are used to validate the proposed method. The results show that, relative to the SA-CNN-MSA-LSTM and BO-CNN-LSTM models, the developed approach lowers the RMSE by 22.56% and 4.32%, while decreasing the MAE by 24.84% and 5.91%, respectively.
Overall, the proposed model accurately captures the characteristics of predicted cloud images and effectively improves PV power prediction accuracy. Full article
(This article belongs to the Special Issue Process Safety and Control Strategies for Urban Clean Energy Systems)
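The Gray-Level Co-occurrence Matrix stage described in the abstract has a compact definition; below is a minimal, illustrative NumPy sketch of a GLCM and its contrast descriptor. The pixel offset, quantisation depth, and toy "cloud image" are assumptions for demonstration, not the paper's configuration:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-Level Co-occurrence Matrix for one pixel offset (dx, dy)."""
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()  # normalise to joint probabilities

def contrast(p):
    """GLCM contrast: sum over (i, j) of (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

# toy 4x4 "cloud image" quantised to 8 grey levels
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
p = glcm(img)
print(round(contrast(p), 3))  # → 0.333
```

Other standard GLCM statistics (energy, homogeneity, correlation) follow the same pattern of a weighted sum over the normalised co-occurrence probabilities.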

17 pages, 3070 KB  
Article
Assessing the Impact of Forests on Wind Flow Dynamics and Wind Turbine Energy Production
by Svetlana Orlova, Nikita Dmitrijevs, Marija Mironova, Edmunds Kamolins and Vitalijs Komasilovs
Wind 2026, 6(1), 10; https://doi.org/10.3390/wind6010010 - 5 Mar 2026
Abstract
Forests play a vital role in influencing wind flow by modifying turbulence intensity and vertical wind shear. Because wind turbines are susceptible to these conditions, accurately characterising wind flow in forested environments is essential to ensuring structural reliability and realistic energy-yield assessments. In Latvia, where approximately 51.3% of the territory is covered by forests, the likelihood of wind turbine deployment in such areas is considerable. However, wind behaviour within and above forests is complex and strongly influenced by canopy effects, which in turn affect wake dynamics, structural fatigue, and power production. Advancing research in this field is therefore crucial for improving the accuracy of wind resource assessment and supporting evidence-based engineering solutions that enable the sustainable development of wind energy. Wind conditions were evaluated using NORA3 reanalysis data. Wake effects were simulated with the Jensen wake model to estimate annual energy production (AEP), which then informed levelised cost of energy (LCOE) calculations at various hub heights. The results indicate clear seasonal variability and show that increasing hub height leads to higher AEP and lower LCOE, owing to higher wind speeds and reduced turbulence. For forest heights of 0–25 m, the AEP loss increases from 7.8% (hub height = 199 m) to 22.9% (hub height = 114 m). Higher hub heights are also less sensitive to canopy-induced variability, reducing the impact of forest-related turbulence on energy production. Full article
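The Jensen wake model mentioned in the abstract has a simple closed form for the velocity deficit directly downstream of a rotor. The sketch below is a minimal single-wake illustration; the thrust coefficient and wake decay constant are illustrative assumptions (over rough forest canopy, the decay constant is typically taken larger than the open-terrain default), not values from the study:

```python
import math

def jensen_deficit(u0, x, r0, ct=0.8, k=0.075):
    """Jensen (Park) wake: wind speed at distance x directly downstream
    of a turbine with rotor radius r0, thrust coefficient ct, and wake
    decay constant k (~0.075 onshore; larger over rough forest canopy)."""
    a = (1 - math.sqrt(1 - ct)) / 2            # axial induction factor
    return u0 * (1 - 2 * a / (1 + k * x / r0) ** 2)

# wake recovery with downstream distance: 10 m/s free stream, 50 m rotor radius
for x in (100, 500, 1000):
    print(x, round(jensen_deficit(10.0, x, 50.0), 2))
```

A full AEP estimate would sum such deficits over all upstream turbines and weight the resulting power curve by the site's wind speed and direction distribution.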

25 pages, 1948 KB  
Article
VDTAR-Net: A Cooperative Dual-Path Convolutional Neural Network–Transformer Network for Robust Highlight Reflection Segmentation
by Qianlong Zhang and Yue Zeng
Computers 2026, 15(3), 168; https://doi.org/10.3390/computers15030168 - 4 Mar 2026
Abstract
In medical endoscopic imaging, specular reflection (SR) frequently leads to local overexposure, obscuring essential tissue information and complicating computer-aided diagnosis (CAD). Traditional convolutional neural networks (CNNs) face difficulties in modeling global illumination phenomena due to their biased local receptive fields and the inherent “object assumption.” Conversely, pure transformer models often lose high-frequency boundary details and incur substantial computational costs. To tackle these challenges, this paper introduces VDTAR-Net, a specialized framework adapted to address the unique optical characteristics of specular reflections. Building upon hybrid architectures, our contribution focuses on two core mechanisms: (1) a Cross-architecture Fusion Module (CFM) that enables deep, bidirectional information flow, allowing the Transformer’s global illumination modeling to continuously correct the CNN’s local texture biases; and (2) a Reflective-Aware Module (RAM), which explicitly integrates the physical prior of high-intensity saturation into the attention mechanism. This task-specific design significantly enhances sensitivity to boundary details in overexposed regions. We also created the first large-scale, expert-labeled cervical white light segmentation dataset, Cervix-WL-900. High-quality ground truth labels were generated through rigorous double-blind annotation and arbitration by senior experts. Experimental results show that VDTAR-Net achieves a Dice score of 92.56% and a mean Intersection over Union (mIoU) score of 87.31% on Cervix-WL-900, demonstrating superior performance compared to methods like U-Net, DeepLabv3+, SegFormer, and PSPNet. Ablation studies further confirm the substantial contributions of dual-path collaboration, CFM deep fusion, and RAM task-specific priors. 
VDTAR-Net provides a robust baseline for precise highlight segmentation, laying a foundation for subsequent image quality assessment, restoration, and feature decoupling in diagnostic models. Full article
(This article belongs to the Special Issue AI in Bioinformatics)
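The Dice and mIoU scores reported above are standard overlap metrics on segmentation masks. A minimal NumPy sketch (toy masks, single class; mIoU in the paper averages IoU over classes):

```python
import numpy as np

def dice(pred, gt):
    """Dice = 2|P ∩ G| / (|P| + |G|) on boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    """IoU = |P ∩ G| / |P ∪ G| on boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

pred = np.array([[1, 1, 0], [0, 1, 0]], bool)  # predicted highlight mask
gt   = np.array([[1, 0, 0], [0, 1, 1]], bool)  # expert-annotated mask
print(dice(pred, gt), iou(pred, gt))  # 0.667 and 0.5
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), so reporting both mainly changes how heavily boundary errors are penalised.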

19 pages, 5093 KB  
Article
Extreme Hydrological Events and Land Cover Impacts on Water Resources in Haiti: Remote Sensing and Modeling Tools Can Improve Adaptation Planning
by Jeldane Joseph, Suranjana Chatterjee, Joseph J. Molnar and Frances O’Donnell
Hydrology 2026, 13(3), 79; https://doi.org/10.3390/hydrology13030079 - 3 Mar 2026
Abstract
Populations in areas with limited hydrological data face ongoing challenges related to water supply and management, with climate change increasing the risks of floods and droughts. New remote sensing and modeling tools can improve land and water management in these regions, especially when combined with limited ground measurements and local knowledge of extreme events. This study examined hydrological extremes and land cover change impacts in the Grande Rivière du Nord watershed, Haiti, using satellite and model-based data. Precipitation extremes were obtained from the Global Precipitation Measurement Integrated Multi-satellite Retrievals for GPM (GPM IMERG; 2000–2025), and streamflow data were sourced from the Group on Earth Observation Global Water Sustainability (GEOGLOWS) system and bias-corrected with a small historical hydrologic database. Annual maximum series were created and fitted with Gumbel, Lognormal, and Generalized Extreme Value (GEV) distributions using the L-moment method. Goodness-of-fit tests identified the best models, and precipitation amounts for return periods of 2–100 years were estimated. The precipitation maxima aligned with locally reported extreme events, and the GEV distribution provided the best overall fit. Using the bias-corrected streamflow, a hydrologic model was calibrated and validated and then applied to land cover change scenarios. Simulations suggest that moderate land-use change can increase peak flows beyond channel capacity, raising flood risk, and can inform adaptation planning in data-scarce northern Haiti. Full article
(This article belongs to the Special Issue The Influence of Landscape Disturbance on Catchment Processes)
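The L-moment fitting and return-period estimation described above has a closed-form solution in the Gumbel case (the study also fits Lognormal and GEV, which are omitted here). A minimal sketch; the sample annual maxima are invented for illustration:

```python
import numpy as np

EULER = 0.5772156649  # Euler–Mascheroni constant

def gumbel_lmom(x):
    """Fit a Gumbel distribution to annual maxima via sample L-moments."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    b0 = x.mean()                                  # first prob.-weighted moment
    b1 = np.sum(np.arange(n) * x) / (n * (n - 1))  # second prob.-weighted moment
    l2 = 2 * b1 - b0                               # second L-moment
    beta = l2 / np.log(2)                          # scale
    xi = b0 - EULER * beta                         # location
    return xi, beta

def return_level(xi, beta, T):
    """Depth exceeded on average once every T years (Gumbel quantile)."""
    return xi - beta * np.log(-np.log(1 - 1 / T))

# illustrative annual-maximum daily precipitation (mm), not study data
xi, beta = gumbel_lmom([62, 85, 71, 120, 93, 78, 105, 88, 97, 66])
print(round(return_level(xi, beta, 100), 1))
```

The same probability-weighted moments b0 and b1 (plus b2) feed the three-parameter GEV fit; goodness-of-fit tests then select among the candidate distributions, as the study does.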

23 pages, 3274 KB  
Article
Question-Aware Reasoning Framework via Two-Level Cross-Attention
by Junhui Bai, Jun Wu, Mingyu Li, Shichao Yu, Ziming Jiang and Yinghui Wang
Mathematics 2026, 14(5), 857; https://doi.org/10.3390/math14050857 - 3 Mar 2026
Abstract
Multimodal chain-of-thought (CoT) reasoning has emerged as a pivotal research direction in artificial intelligence. However, current approaches predominantly adopt a linear CoT structure with a single reasoning module and complex multi-level gated multi-hop cross-attention mechanisms for modality fusion, which exhibit notable limitations. Specifically, the inability of linear CoT structures to dynamically select appropriate reasoning modules based on problem characteristics often leads to hallucinations during intermediate reasoning. Moreover, tightly coupled gating and cross-attention mechanisms can inadvertently suppress critical information flow during inter-modal interactions, resulting in erroneous predictions. To address these challenges, we propose a novel multimodal reasoning framework, M-TCM, that integrates a two-level cross-attention fusion mechanism with a single-level gating strategy. This design not only reduces the complexity of modality fusion but also effectively preserves information crucial for intermediate reasoning. Furthermore, M-TCM incorporates a novel module selection strategy. We first construct a new dataset, SQ-GPT4, to complement the existing ScienceQA dataset and facilitate the training of two distinct reasoning modules. Subsequently, the model dynamically selects the most appropriate reasoning module for prediction based on the specific skill requirements of each problem. Experimental results on the ScienceQA benchmark demonstrate the superiority of our proposed model, achieving a prediction accuracy of 88.23%. Full article
(This article belongs to the Topic Intelligent Image Processing Technology)
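The cross-attention primitive underlying the fusion mechanism described above can be sketched compactly. This is a single-head scaled dot-product sketch in NumPy with random toy weights and token counts; it illustrates the operation, not M-TCM's actual two-level architecture or gating:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_tokens, kv_tokens, Wq, Wk, Wv):
    """Single-head cross-attention: queries from one modality attend
    over keys/values projected from the other modality."""
    Q, K, V = q_tokens @ Wq, kv_tokens @ Wk, kv_tokens @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # scaled dot-product
    return softmax(scores) @ V               # attention-weighted mix of values

rng = np.random.default_rng(0)
d = 8
text  = rng.normal(size=(5, d))   # 5 language tokens (toy)
image = rng.normal(size=(7, d))   # 7 visual tokens (toy)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
fused = cross_attention(text, image, Wq, Wk, Wv)
print(fused.shape)  # → (5, 8)
```

In a two-level arrangement, the output of one such pass would itself serve as the query (or key/value) stream for a second pass, with a single gate deciding how much of the fused signal to admit.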
