Search Results (3,600)

Search Parameters:
Keywords = precise hybridization

36 pages, 1952 KB  
Review
Comparative Review of Reactive Power Estimation Techniques for Voltage Restoration
by Natanael Faleiro, Raul Monteiro, André Fonseca, Lina Negrete, Rogério Lima and Jakson Bonaldo
Energies 2026, 19(3), 826; https://doi.org/10.3390/en19030826 - 4 Feb 2026
Abstract
Motivated by the growing concern over voltage instability and its associated risk of blackouts, this study addresses the importance of Volt/VAR control (VVC) in maintaining voltage stability, optimizing power factor, and reducing losses. It presents a review of the methodologies used to estimate the amount of reactive power required to restore voltage in power grids. Although reviews exist on classical methods, optimization, and machine learning, a study unifying these approaches is lacking. This gap hinders an integrated comparison of methodologies and is the main motivation for this study. The absence of a consolidated and up-to-date review limits both academic progress and practical decision-making in modern power systems, especially as DER penetration accelerates. The research was conducted using the Scopus database through the selection of articles that address reactive power estimation methods. The results indicate that traditional numerical and optimization methods, although accurate, incur high computational costs that hinder real-time application. In contrast, techniques such as Deep Reinforcement Learning (DRL) and hybrid models show greater potential for dealing with uncertainties and dynamic topologies. The conclusion is that the solution for reactive power management lies in hybrid approaches that combine machine learning with numerical methods, supported by an intelligent and robust data infrastructure. The comparative analysis shows that numerical methods offer high precision but are computationally expensive for real-time use; optimization techniques provide good robustness but depend on detailed models that are sensitive to system conditions; and machine learning-based approaches offer greater adaptability under uncertainty, although they require large datasets and careful training. Given these complementary limitations, hybrid approaches emerge as the most promising alternative, combining the reliability of classical methods with the flexibility of intelligent models, especially in smart grids with dynamic topologies and high penetration of Distributed Energy Resources (DERs). Full article
(This article belongs to the Section A1: Smart Grids and Microgrids)

22 pages, 1982 KB  
Article
Enhanced 3D DenseNet with CDC for Multimodal Brain Tumor Segmentation
by Bekir Berkcan and Temel Kayıkçıoğlu
Appl. Sci. 2026, 16(3), 1572; https://doi.org/10.3390/app16031572 - 4 Feb 2026
Abstract
Precise tumor segmentation in multimodal MRI is crucial for glioma diagnosis and treatment planning; yet, deep learning models still struggle with irregular boundaries and severe class imbalance under computational constraints. An Enhanced 3D DenseNet with CDC architecture was proposed, integrating Central Difference Convolution, attention gates, and Atrous Spatial Pyramid Pooling for brain tumor segmentation on the BraTS 2023-GLI dataset. CDC layers enhance boundary sensitivity by combining intensity-level semantics and gradient-level features. Attention gates selectively emphasize relevant encoder features during skip connections, whereas ASPP captures multi-scale context through multiple dilation rates. A hybrid loss function spanning three levels was introduced, consisting of a region-based Dice loss for volumetric overlap, a GPU-native 3D Sobel boundary loss for edge precision, and a class-weighted focal loss for handling class imbalance. The proposed model achieved a mean Dice score of 91.30% (ET: 87.84%, TC: 92.73%, WT: 93.34%) on the test set. Notably, these results were achieved with approximately 3.7 million parameters, representing a 17–76x reduction compared to the 50–200 million parameters required by transformer-based approaches. The Enhanced 3D DenseNet with CDC architecture demonstrates that the integration of gradient-sensitive convolutions, attention mechanisms, multi-scale feature extraction, and multi-level loss optimization achieves competitive segmentation performance with significantly reduced computational requirements. Full article
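The three-term hybrid loss described in the abstract can be illustrated compactly. The following is a minimal, hypothetical PyTorch sketch of such a loss (region-based Dice + boundary + class-weighted focal); it is not the authors' implementation, the boundary term uses plain finite-difference gradients as a stand-in for the paper's GPU-native 3D Sobel loss, and the term weights are arbitrary illustrative choices.

```python
# Hypothetical sketch of a three-term hybrid segmentation loss (not the paper's code).
import torch
import torch.nn.functional as F

def soft_dice_loss(probs, target, eps=1e-6):
    # probs, target: (B, C, D, H, W), target one-hot encoded
    dims = (0, 2, 3, 4)
    inter = (probs * target).sum(dims)
    union = probs.sum(dims) + target.sum(dims)
    return 1.0 - ((2 * inter + eps) / (union + eps)).mean()

def boundary_loss(probs, target):
    # Finite-difference gradient magnitudes stand in for a 3D Sobel operator here.
    def grad_mag(x):
        gz = (x[:, :, 1:, :, :] - x[:, :, :-1, :, :]).abs().mean()
        gy = (x[:, :, :, 1:, :] - x[:, :, :, :-1, :]).abs().mean()
        gx = (x[:, :, :, :, 1:] - x[:, :, :, :, :-1]).abs().mean()
        return gz + gy + gx
    return (grad_mag(probs) - grad_mag(target)).abs()

def focal_loss(logits, labels, class_weights, gamma=2.0):
    # labels: (B, D, H, W) integer class ids; class_weights: (C,) tensor
    ce = F.nll_loss(F.log_softmax(logits, dim=1), labels,
                    weight=class_weights, reduction="none")
    pt = torch.exp(-ce)
    return ((1 - pt) ** gamma * ce).mean()

def hybrid_loss(logits, labels, class_weights, w=(1.0, 0.5, 1.0)):
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(labels, num_classes=logits.shape[1]).permute(0, 4, 1, 2, 3).float()
    return (w[0] * soft_dice_loss(probs, onehot)
            + w[1] * boundary_loss(probs, onehot)
            + w[2] * focal_loss(logits, labels, class_weights))
```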

18 pages, 913 KB  
Review
Advances in Anti-Wrinkle Finishing Agent for Natural Fabrics
by Haoqian Luo, Haifeng Sun, Man Zhang, Jiating Wen, Mengmeng Chen, Jian Fang and Zhe Sun
Polymers 2026, 18(3), 407; https://doi.org/10.3390/polym18030407 - 4 Feb 2026
Abstract
Natural fabrics such as cotton and silk have been widely used due to their excellent properties, but their tendency to wrinkle limits their value. Traditional anti-wrinkle finishing agents suffer from issues like formaldehyde release and performance imbalance. This paper reviews the advances in anti-wrinkle finishing of cotton and silk fabrics, analyzing them from the perspectives of environmentally friendly finishing agents, physical property balancing, sustainable anti-wrinkle finishing, and synchronized multi-functionality. Current research has developed various environmentally friendly formaldehyde-free finishing agents, such as carboxylated polyaldehyde sucrose and α-lipoic acid, through strategies including natural product modification and organic–inorganic hybridization. The application of these agents can enable fabrics to achieve a balance between wrinkle resistance, mechanical properties, hydrophilicity, and resistance to yellowing. They also overcome the limitations of traditional processes, endowing fabrics with wrinkle resistance integrated with functions such as dyeing, flame retardancy, and antibacterial properties. Moreover, optimization methods such as response surface methodology (RSM) have facilitated the precise regulation of process parameters. Future research should continue to focus on greenization, high performance, and multi-functional coordination, deepen molecular design and process optimization, and provide support for the sustainable development of the textile industry. Full article
(This article belongs to the Section Polymer Applications)

24 pages, 2787 KB  
Article
Accuracy Assessment of Exhaust Valve Geometry Reconstruction: A Comparative Study of Contact and Optical Metrology in Reverse Engineering
by Paweł Turek, Jarosław Tymczyszyn, Paweł Habrat and Jacek Misiura
Designs 2026, 10(1), 15; https://doi.org/10.3390/designs10010015 - 4 Feb 2026
Abstract
Reverse engineering (RE) is essential in the automotive and aerospace industries for reconstructing high-precision components, such as exhaust valves, when design documentation is unavailable. However, different measurement methods introduce varied errors that can affect engine performance and safety. This study presents a comparative analysis of contact and optical measurement systems—specifically the CMM Accura II (ZEISS Group, Oberkochen, Germany), Mahr MarSurf XC 20 (Esslingen am Neckar, Germany), GOM Scan 1 (ZEISS/GOM, Braunschweig/Oberkochen, Germany) and MCA-II with an MMD×100 laser head (Nikon Metrology, Leuven, Belgium)—to assess their accuracy in reconstructing exhaust valve geometry. The research procedure involved measuring global surface deviations and critical functional parameters, including stem diameter, straightness, and seat angle. The results indicate that tactile methods (CMM and Mahr) provide significantly higher accuracy and lower dispersion than optical methods. The Mahr system was the most effective for stem precision, while the CMM was the only system to pass the seat angle tolerance requirement unambiguously. In contrast, the MCA-II laser system failed to meet the required precision–mechanical tolerances. The findings suggest that an optimal industrial strategy should adopt a hybrid methodology: utilizing rapid optical scanning (GOM) for general geometry and high-precision tactile systems (CMM, Mahr) for critical functional features. This approach can reduce total inspection time by 30–40% while ensuring technical safety and preventing catastrophic engine failures. Full article

20 pages, 2137 KB  
Article
A Partitioned Finite Difference Method for Heat Transfer with Moving Line and Plane Heat Sources
by Jun Li and Yingjun Jiang
Entropy 2026, 28(2), 179; https://doi.org/10.3390/e28020179 - 4 Feb 2026
Abstract
This study proposes an efficient numerical scheme for simulating heat transfer governed by the diffusion equation with moving singular sources. The work addresses two-dimensional problems with line sources and three-dimensional problems with plane sources, which are prevalent in irreversible thermodynamic processes. Developed within a finite difference framework, the method employs a partitioned discretization strategy to accurately resolve the solution singularity near the heat source—a region critical for precise local entropy production analysis. In the immediate vicinity of the source, we analytically derive and incorporate the solution’s “jump” conditions to construct specialized finite difference approximations. Away from the source, standard second-order-accurate schemes are applied. This hybrid approach yields a globally second-order convergent spatial discretization. The resulting sparse system is efficient for large-scale simulation of dissipative systems. The accuracy and efficacy of the proposed method are demonstrated through numerical examples, providing a reliable tool for the detailed study of energy distribution in non-equilibrium thermal processes. Full article
(This article belongs to the Section Thermodynamics)
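For orientation, the singular-source treatment described above can be summarized by a model problem. Assuming a constant diffusivity α and a line source of strength q(t) moving along a curve Γ(t) (the 2D case; the 3D plane-source case is analogous), the governing equation and the interface ("jump") conditions that such specialized stencils incorporate take the generic form

$$u_t = \alpha\,\Delta u + q(t)\,\delta_{\Gamma(t)}(\mathbf{x}), \qquad [\,u\,]_{\Gamma(t)} = 0, \qquad \alpha\left[\frac{\partial u}{\partial n}\right]_{\Gamma(t)} = -\,q(t),$$

where $[\,\cdot\,]$ denotes the jump across $\Gamma(t)$ in the direction of the unit normal $\mathbf{n}$. This is a generic illustration of the form such conditions take, not the paper's exact derivation; away from $\Gamma(t)$, standard second-order central differences apply, as stated in the abstract.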

28 pages, 4721 KB  
Article
MAF-RecNet: A Lightweight Wheat and Corn Recognition Model Integrating Multiple Attention Mechanisms
by Hao Yao, Ji Zhu, Yancang Li, Haiming Yan, Wenzhao Feng, Luwang Niu and Ziqi Wu
Remote Sens. 2026, 18(3), 497; https://doi.org/10.3390/rs18030497 - 3 Feb 2026
Abstract
This study is grounded in the macro-context of smart agriculture and global food security. Due to population growth and climate change, precise and efficient monitoring of crop distribution and growth is vital for stable production and optimal resource use. Remote sensing combined with deep learning enables multi-scale agricultural monitoring from field identification to disease diagnosis. However, current models face three deployment bottlenecks: high complexity hinders operation on edge devices; scarce labeled data causes overfitting in small-sample cases; and there is insufficient generalization across regions, crops, and imaging conditions. These issues limit the large-scale adoption of intelligent agricultural technologies. To tackle them, this paper proposes a lightweight crop recognition model, MAF-RecNet. It aims to achieve high accuracy, efficiency, and strong generalization with limited data through structural optimization and attention mechanism fusion, offering a viable path for deployable intelligent monitoring systems. Built on a U-Net with a pre-trained ResNet18 backbone, MAF-RecNet integrates multiple attention mechanisms (Coordinate, External, Pyramid Split, and Efficient Channel Attention) into a hybrid attention module, improving multi-scale feature discrimination. On the Southern Hebei Farmland dataset, it achieves 87.57% mIoU and 95.42% mAP, outperforming models like SegNeXt and FastSAM, while maintaining a balance of efficiency (15.25 M parameters, 21.81 GFLOPs). The model also shows strong cross-task generalization, with mIoU scores of 80.56% (Wheat Health Status Dataset in Southern Hebei), 90.20% (Global Wheat Health Dataset), and 84.07% (Corn Health Status Dataset). Ablation studies confirm the contribution of the attention-enhanced skip connections and decoder. This study not only provides an efficient and lightweight solution for few-shot agricultural image recognition but also offers valuable insights into the design of generalizable models for complex farmland environments. It contributes to promoting the scalable and practical application of artificial intelligence technologies in precision agriculture. Full article
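As a point of reference for one of the attention components listed above, here is a minimal, hypothetical PyTorch sketch of an Efficient Channel Attention (ECA)-style block; it is not the MAF-RecNet code, and the 1D kernel size k is an illustrative choice.

```python
# Hypothetical ECA-style channel attention block (not the MAF-RecNet implementation).
import torch
import torch.nn as nn

class ECABlock(nn.Module):
    def __init__(self, k: int = 3):
        super().__init__()
        # A 1D convolution over the channel axis models local cross-channel interaction
        # without the dimensionality reduction used by squeeze-and-excitation blocks.
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))              # global average pooling -> (B, C)
        y = self.conv(y.unsqueeze(1))       # treat channels as a 1D sequence -> (B, 1, C)
        w = self.sigmoid(y).squeeze(1)      # per-channel weights in (0, 1)
        return x * w[:, :, None, None]      # reweight feature maps channel-wise

# Example: out = ECABlock()(torch.randn(2, 64, 32, 32))
```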

21 pages, 1315 KB  
Article
Ensemble Deep Learning Models for Multi-Class DNA Sequence Classification: A Comparative Study of CNN, BiLSTM, and GRU Architectures
by Elias Tabane, Ernest Mnkandla and Zenghui Wang
Appl. Sci. 2026, 16(3), 1545; https://doi.org/10.3390/app16031545 - 3 Feb 2026
Abstract
DNA sequence classification is a fundamental problem in bioinformatics, playing an indispensable role in gene annotation and disease prediction. While deep learning models such as CNNs, BiLSTM networks, and GRUs have each proven effective individually, each method excels at modeling a specific aspect of sequence data: local motifs, long-range dependencies, and efficient temporal dynamics, respectively. Here, we present and evaluate an ensemble model that integrates CNN, BiLSTM, and GRU architectures via a majority voting scheme so that their complementary strengths can be harnessed. We trained and evaluated each standalone model and the integrated model on a DNA dataset comprising 4380 sequences falling under five functional categories. The ensemble model achieved a classification accuracy of 90.6% with precision, recall, and F1 score equal to 0.91, thereby outperforming the state-of-the-art techniques by large margins. Although previous studies have analyzed each deep learning method individually for DNA classification tasks, none have attempted a systematic combination of CNN, BiLSTM, and GRU that exploits their complementary feature-extraction abilities. The current research presents a novel method that combines these architectures through a majority voting strategy and demonstrates that the combination extracts local patterns and long-range dependency information better than any individual model. In particular, the proposed ensemble model balanced the high recall of BiLSTM with the high precision of CNN, leading to more robust and reliable classification. The experiments involved a publicly available DNA sequence dataset of 4380 sequences distributed over 5 classes. Our results highlight the potential of hybrid ensemble deep learning as a strong approach for complex genomic data analysis, opening the way toward more accurate and interpretable bioinformatics research. Full article
(This article belongs to the Special Issue Advances in Deep Learning and Intelligent Computing)
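The majority voting scheme described above is straightforward to illustrate. Below is a minimal, hypothetical sketch, assuming three already-trained classifiers that expose a scikit-learn-style predict() method; it is not the authors' code.

```python
# Hypothetical majority-vote ensembling of three sequence classifiers (not the authors' code).
import numpy as np

def majority_vote(models, X):
    # Stack per-model class predictions into shape (n_models, n_samples).
    preds = np.stack([m.predict(X) for m in models]).astype(int)
    # Per-sample majority class; ties resolve to the smallest class id.
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, preds)

# Example usage with hypothetical trained models:
# y_hat = majority_vote([cnn_model, bilstm_model, gru_model], X_test)
```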

22 pages, 2078 KB  
Article
A Multi-Strategy Enhanced Whale Optimization Algorithm for Long Short-Term Memory—Application to Short-Term Power Load Forecasting for Microgrid Buildings
by Lili Qu, Qingfang Teng, Hao Mai and Jing Chen
Sensors 2026, 26(3), 1003; https://doi.org/10.3390/s26031003 - 3 Feb 2026
Abstract
High-accuracy short-term electric load forecasting is essential for ensuring the security of power systems and enhancing energy efficiency. Power load sequences are characterized by strong randomness, non-stationarity, and nonlinearity over time. To improve the precision and efficiency of short-term load forecasting in microgrids, a hybrid predictive model combining Complementary Ensemble Empirical Mode Decomposition (CEEMD) and a multi-strategy enhanced Whale Optimization Algorithm (WOA) with Long Short-Term Memory (LSTM) neural networks has been proposed. Initially, this study employs CEEMD to decompose the short-term electric load time series. Subsequently, a multi-strategy enhanced WOA with chaotic initialization and reverse learning is introduced to enhance the search capability of model parameters and avoid entrapment in local optima. Finally, considering the distinct characteristics of each component, the multi-strategy improved WOA is utilized to optimize the LSTM model, establishing individual predictive models for each component, and the predictions are then aggregated. The proposed method’s forecasting accuracy has been validated through multiple case studies using the UC San Diego microgrid data, demonstrating its reliability and providing a solid foundation for microgrid system planning and stable operation. Full article
(This article belongs to the Section Intelligent Sensors)
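For context, the two initialization ingredients named above (chaotic initialization and reverse/opposition learning) are sketched below in a minimal, hypothetical form; the logistic map, the number of map iterations, and the selection rule are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical chaotic + opposition-based initialization for a metaheuristic population
# (not the authors' code).
import numpy as np

def chaotic_opposition_init(pop_size, dim, lb, ub, fitness, seed=None):
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)

    # Logistic map x_{k+1} = 4 x_k (1 - x_k) spreads points chaotically over (0, 1).
    chaos = rng.uniform(0.1, 0.9, size=(pop_size, dim))
    for _ in range(50):
        chaos = 4.0 * chaos * (1.0 - chaos)
    pop = lb + chaos * (ub - lb)

    # "Reverse" (opposition-based) candidates mirror each point about the box centre.
    opposite = lb + ub - pop

    # Keep the best pop_size individuals from the combined set (minimization assumed).
    combined = np.vstack([pop, opposite])
    scores = np.apply_along_axis(fitness, 1, combined)
    return combined[np.argsort(scores)[:pop_size]]

# Example: init = chaotic_opposition_init(30, 10, -5.0, 5.0, lambda x: np.sum(x**2))
```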
50 pages, 7686 KB  
Article
A Multi-Strategy Augmented Newton–Raphson-Based Optimizer for Global Optimization Problems and Robot Path Planning
by Xiuyuan Yi and Chengpeng Li
Symmetry 2026, 18(2), 280; https://doi.org/10.3390/sym18020280 - 3 Feb 2026
Abstract
Newton–Raphson-Based Optimizer (NRBO) is a recently proposed metaheuristic that combines mathematical search rules with population-based optimization; however, it still suffers from an insufficient balance between global exploration and local exploitation, limited local refinement accuracy, and weak adaptability in complex optimization scenarios. To address these limitations, this paper proposes an Improved Newton–Raphson-Based Optimizer (INRBO), which enhances the original framework through a multi-strategy augmentation mechanism. Specifically, INRBO integrates three complementary strategies: (1) an adaptive differential operator with a linearly decaying scaling factor to dynamically regulate exploration and exploitation throughout the search process; (2) a quadratic interpolation strategy that exploits high-quality individuals to improve local search directionality and precision; and (3) an elitist population genetic strategy that preserves superior solution characteristics while maintaining population diversity and preventing premature convergence. The performance of INRBO is systematically evaluated on the CEC2017 benchmark suite under multiple dimensions and compared with several state-of-the-art metaheuristic algorithms. Experimental results demonstrate that INRBO achieves superior optimization accuracy, convergence efficiency, and robustness across unimodal, multimodal, hybrid, and composite functions, which is further confirmed by statistical significance tests. In addition, INRBO is applied to mobile robot path planning in grid-based environments of different scales, where it consistently generates shorter, smoother, and safer paths than competing algorithms. Overall, the proposed INRBO provides an effective and robust optimization framework for global continuous optimization problems and real-world engineering applications, demonstrating both strong theoretical value and practical applicability. Full article
(This article belongs to the Special Issue Symmetry in Numerical Analysis and Applied Mathematics)
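Of the three INRBO strategies listed above, the quadratic interpolation step has a compact closed form that is easy to sketch. The snippet below is a minimal, hypothetical illustration (a per-dimension parabola fit through three candidates), not the paper's source code.

```python
# Hypothetical quadratic-interpolation refinement step (not the INRBO source code).
import numpy as np

def quadratic_interpolation(x1, x2, x3, f1, f2, f3, eps=1e-12):
    # Fit a parabola per dimension through (x1, f1), (x2, f2), (x3, f3)
    # and return its vertex as the refined trial solution.
    num = (x2**2 - x3**2) * f1 + (x3**2 - x1**2) * f2 + (x1**2 - x2**2) * f3
    den = (x2 - x3) * f1 + (x3 - x1) * f2 + (x1 - x2) * f3
    den = np.where(np.abs(den) < eps, eps, den)   # guard against degenerate configurations
    return 0.5 * num / den

# Example with the current best, a random peer, and the current individual:
# trial = quadratic_interpolation(best, peer, x_i, f_best, f_peer, f_i)
```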
39 pages, 2492 KB  
Systematic Review
Cloud, Edge, and Digital Twin Architectures for Condition Monitoring of Computer Numerical Control Machine Tools: A Systematic Review
by Mukhtar Fatihu Hamza
Information 2026, 17(2), 153; https://doi.org/10.3390/info17020153 - 3 Feb 2026
Abstract
Condition monitoring has come to the forefront of intelligent manufacturing and is particularly important in Computer Numerical Control (CNC) machining processes, where reliability, precision, and productivity are crucial. Traditional monitoring methods, which mostly rely on single sensors, localized data capture, and offline interpretation, are proving inadequate for current machining processes. Limited in scale and computational power, and lacking real-time responsiveness, they fit poorly in dynamic, data-intensive production environments. Recent progress in the Industrial Internet of Things (IIoT), cloud computing, and edge intelligence has driven a shift toward distributed monitoring architectures capable of acquiring, processing, and interpreting large amounts of heterogeneous machining data. These innovations have enabled more adaptive decision-making, supporting predictive maintenance, machining stability, longer tool lifespan, and data-driven optimization in manufacturing businesses. A structured literature search was conducted across major scientific databases, and eligible studies were synthesized qualitatively. This systematic review synthesizes over 180 peer-reviewed studies identified in major scientific databases, using specific inclusion criteria and a PRISMA-guided screening process. It provides a comprehensive look at sensor technologies, data acquisition systems, cloud–edge–IoT frameworks, and digital twin implementations from an architectural perspective. At the same time, it identifies ongoing challenges related to industrial scalability, standardization, and deployment maturity. The combination of cloud platforms and edge intelligence is of particular interest, with emphasis placed on how the two balance computational load and latency and improve system reliability. The review synthesizes the major advances in sensor technologies, data collection approaches, machine operations, machine learning and deep learning methods, and digital twins. The paper concludes with a comparative analysis of the current state of knowledge and reported industrial case applications, outlining what can and cannot yet be achieved in practice. Key issues, such as data inconsistency, lack of standardization, cyber threats, and legacy system integration, are critically analyzed. Lastly, new research directions are outlined, including hybrid cloud–edge intelligence, advanced AI models, and adaptive multisensory fusion, oriented toward autonomous and self-evolving CNC monitoring systems in line with the Industry 4.0 and Industry 5.0 paradigms. The PRISMA-guided approach to qualitative synthesis and literature screening makes the review process transparent and repeatable. Full article

25 pages, 8031 KB  
Article
A Dual-Optimized Hybrid Deep Learning Framework with RIME-VMD and TCN-BiGRU-SA for Short-Term Wind Power Prediction
by Zhong Wang, Kefei Zhang, Xun Ai, Sheng Liu and Tianbao Zhang
Appl. Sci. 2026, 16(3), 1531; https://doi.org/10.3390/app16031531 - 3 Feb 2026
Abstract
Precise short-term forecasting of wind power generation is indispensable for ensuring the security and economic efficiency of power grid operations. Nevertheless, the inherent non-stationarity and stochastic nature of wind power series present significant challenges for prediction accuracy. To address these issues, this paper proposes a dual-optimized hybrid deep learning framework combining Spearman correlation analysis, RIME-VMD, and TCN-BiGRU-SA. First, Spearman correlation analysis is employed to screen meteorological factors, eliminating redundant features and reducing model complexity. Second, an adaptive Variational Mode Decomposition (VMD) strategy, optimized by the RIME algorithm based on Minimum Envelope Entropy, decomposes the non-stationary wind power series into stable intrinsic mode functions (IMFs). Third, a hybrid predictor integrating Temporal Convolutional Network (TCN), Bidirectional Gated Recurrent Unit (BiGRU), and Self-Attention (SA) mechanisms is constructed to capture both local trends and long-term temporal dependencies. Furthermore, the RIME algorithm is utilized again to optimize the hyperparameters of the deep learning predictor to avoid local optima. The proposed framework is validated using full-year datasets from two distinct wind farms in Xinjiang and Gansu, China. Experimental results demonstrate that the proposed model achieves a Root Mean Square Error (RMSE) of 7.5340 MW on the primary dataset, significantly outperforming mainstream baseline models. The multi-dataset verification confirms the model’s superior prediction accuracy, robustness against seasonal variations, and strong generalization capability. Full article
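For context, the Minimum Envelope Entropy criterion mentioned above, typically used as the fitness that the RIME algorithm minimizes when tuning the VMD mode number K and penalty factor, can be sketched as follows; this is a generic illustration (the VMD step itself is omitted), not the authors' code.

```python
# Hypothetical envelope-entropy fitness for decomposition-parameter tuning (not the paper's code).
import numpy as np
from scipy.signal import hilbert

def envelope_entropy(mode, eps=1e-12):
    envelope = np.abs(hilbert(mode))        # amplitude envelope via the analytic signal
    p = envelope / (envelope.sum() + eps)   # normalize to a probability-like distribution
    return float(-np.sum(p * np.log(p + eps)))

def min_envelope_entropy(modes):
    # Fitness value to minimize over the VMD parameters: the smallest envelope
    # entropy among the decomposed intrinsic mode functions.
    return min(envelope_entropy(m) for m in modes)
```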

22 pages, 1821 KB  
Review
Boron Neutron Capture Therapy: A Technology-Driven Renaissance
by Dandan Zheng, Guang Han, Olga Dona Maria Lemus, Alexander Podgorsak, Matthew Webster, Fiona Li, Yuwei Zhou, Hyunuk Jung and Jihyung Yoon
Cancers 2026, 18(3), 498; https://doi.org/10.3390/cancers18030498 - 3 Feb 2026
Abstract
Boron neutron capture therapy (BNCT) is experiencing a global resurgence driven by advances in boron pharmacology, accelerator-based neutron sources, and molecular imaging-guided theranostics. BNCT produces high linear energy transfer particles with micrometer-range energy deposition, enabling cell-selective irradiation confined to boron-enriched tumor cells in a geometrically targeted region by the neutron beam. This mechanism offers the potential for exceptionally high therapeutic ratios, provided two core requirements are met: sufficient differential tumor uptake of 10B and a neutron beam with appropriate energy and penetration. After early clinical attempts in the mid-20th century were hindered by inadequate boron agents and reactor-based neutron beams, recent technological breakthroughs have made BNCT clinically viable. The development of hospital-compatible accelerator neutron sources, next-generation boron delivery systems (such as receptor-targeted compounds and nanoparticles), advanced theranostic approaches (such as 18F-BPA positron emission tomography and boron-sensitive magnetic resonance imaging), and AI-driven biodistribution modeling now support personalized treatment planning and patient selection. These innovations have catalyzed modern clinical implementation, exemplified by Japan’s regulatory approval of BNCT for recurrent head and neck cancer and the rapid expansion of clinical programs across Asia, Europe, and South America. Building on these foundations, BNCT has transitioned from a predominantly academic experimental modality into an increasingly commercialized and industrially supported therapeutic platform. The emergence of dedicated BNCT companies, international collaborations between accelerator manufacturers and hospitals, and pharmaceutical development pipelines for next-generation boron carriers has accelerated clinical translation. Moreover, BNCT now occupies a unique position among radiation modalities due to its hybrid nature, namely combining the biological targeting of radiopharmaceutical therapy with the external-beam controllability of radiotherapy, thereby offering new therapeutic opportunities where competitive approaches fall short. Emerging evidence suggests therapeutic promise in glioblastoma, recurrent head and neck cancers, melanoma, meningioma, lung cancer, sarcomas, and other difficult-to-treat malignancies. Looking ahead, continued innovation in compact neutron source engineering, boron nanocarriers, multimodal theranostics, microdosimetry-guided treatment planning, and combination strategies with systemic therapies such as immunotherapy will be essential for optimizing outcomes. Together, these converging developments position BNCT as a biologically targeted and potentially transformative modality in the era of precision oncology. Full article
(This article belongs to the Special Issue New Approaches in Radiotherapy for Cancer)

19 pages, 1142 KB  
Article
Risk Assessment of Dibutyl Phthalate (DBP) and Bis(2-Ethylhexyl) Phthalate (DEHP) in Hot Pot Bases with a Hybrid Modeling Approach
by Xiangyu Bian, Siyu Huang, Dongya Chen, Depeng Jiang, Daoyuan Yang, Yingzi Zhao, Zhujun Liu, Shiqi Chen, Yan Song, Haixia Sui and Jinfang Sun
Toxics 2026, 14(2), 150; https://doi.org/10.3390/toxics14020150 - 2 Feb 2026
Abstract
(1) Background: Hot pot bases are susceptible to phthalate (PAE) contamination due to their high lipid content. Standard risk models often fail to capture extreme values, leading to biased exposure estimates. This study characterized dibutyl phthalate (DBP) and bis(2-ethylhexyl) phthalate (DEHP) contamination using a hybrid modeling framework to ensure precise risk profiling. (2) Methods: A total of 91 samples were analyzed via GC-MS. Concentration data were fitted using traditional parametric, extreme value mixture (EVMM), and finite mixture models. Probabilistic dietary risks were assessed for Chinese demographic groups using 10,000-iteration Monte Carlo simulations. (3) Results: DEHP (detection rate: 55%) and DBP (32%) were best modeled by a two-component Gamma mixture and a Lognormal–Generalized Pareto distribution, respectively. These advanced models significantly outperformed conventional distributions in capturing upper-tail extremes. Crucially, all hazard quotients (HQs) remained below the safety threshold of 1, indicating acceptable risk, although children aged 7–13 exhibited the highest calculated risk (Max DEHP HQ = 0.68). (4) Conclusions: Although current exposure levels are within safe limits, the heavy-tailed distributions identify potential sporadic high-exposure events that traditional models overlook, specifically highlighting the relative vulnerability of children aged 7–13. This study validates that hybrid statistical approaches offer superior precision for analyzing skewed contamination data. Consequently, these findings provide a critical scientific basis for refining regulatory monitoring and implementing targeted source-tracking measures to mitigate long-tail food safety risks. Full article
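The probabilistic exposure step described above follows a standard pattern: estimated daily intake EDI = concentration × intake / body weight, and hazard quotient HQ = EDI / TDI. The sketch below is a minimal, hypothetical Monte Carlo illustration of that pattern; the distributions and the TDI value are placeholder assumptions, not values from the study.

```python
# Hypothetical Monte Carlo hazard-quotient sketch (not the study's model or parameters).
import numpy as np

rng = np.random.default_rng(42)
N = 10_000                                          # iterations, as in the study design

conc = rng.lognormal(mean=-1.0, sigma=1.2, size=N)  # contaminant concentration, mg/kg (placeholder)
intake = rng.normal(0.05, 0.01, size=N).clip(0)     # daily intake of hot pot base, kg/day (placeholder)
bw = rng.normal(35.0, 8.0, size=N).clip(10)         # body weight, kg (placeholder)
tdi = 0.05                                          # tolerable daily intake, mg/kg bw/day (placeholder)

edi = conc * intake / bw                            # estimated daily intake, mg/kg bw/day
hq = edi / tdi                                      # hazard quotient; HQ < 1 indicates acceptable risk

print(f"P95 HQ = {np.percentile(hq, 95):.3f}, max HQ = {hq.max():.3f}")
```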

11 pages, 2265 KB  
Proceeding Paper
Retrieving Canopy Chlorophyll Content from Sentinel-2 Imagery Using Google Earth Engine
by Tarun Teja Kondraju, Rabi N. Sahoo, Rajan G. Rejith, Amrita Bhandari, Rajeev Ranjan, Devanakonda V. S. C. Reddy and Selvaprakash Ramalingam
Biol. Life Sci. Forum 2025, 54(1), 13; https://doi.org/10.3390/blsf2025054013 - 2 Feb 2026
Abstract
Google Earth Engine (GEE) has revolutionised remote sensing. The GEE cloud platform lets users quickly analyse large satellite imagery datasets with custom programmes, enhancing global-scale analysis. Crop condition monitoring using GEE would greatly help in decision-making and precision agriculture. Estimating canopy chlorophyll content (CCC) is an effective way to monitor crops using remote sensing because leaf chlorophyll is a key indicator. A hybrid model that combines radiative transfer models (RTMs), such as PROSAIL, with Gaussian Process Regression (GPR) can effectively estimate crop biophysical parameters using remote sensing images. GPR has proven to be one of the best methods for this purpose. This study aimed to develop a hybrid model to estimate CCC from Sentinel-2 (S2) imagery and transfer it to the GEE platform for efficient data processing. In this work, the CCC (g/cm2) data from the S2 biophysical processor toolbox for the S2 imagery of the ICAR-Indian Agricultural Research Institute (IARI) on 23 February 2023 were used as observation data to train the hybrid algorithm. The hybrid model was successfully validated against the 155 input samples, with an R2 of 0.94, RMSE of 10.02, and NRMSE of 5.04%. The model was integrated into GEE to successfully generate a CCC-estimated map of IARI using S2 imagery from 23 February 2023. An R2 value of 0.96 was observed when GEE-estimated CCC values were compared against CCC values estimated locally. This establishes that the GEE-based CCC estimation with the PROSAIL + GPR hybrid model is an effective and accurate method for monitoring vegetation and crop conditions over large areas and extended periods. Full article
(This article belongs to the Proceedings of The 3rd International Online Conference on Agriculture)
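The hybrid RTM + GPR idea described above can be sketched with scikit-learn. The snippet below is a minimal, hypothetical illustration: the simulated reflectance/CCC arrays are random placeholders standing in for a PROSAIL lookup table and Sentinel-2 band reflectances, and the kernel choice is an assumption, not the paper's configuration.

```python
# Hypothetical hybrid RTM + GPR retrieval sketch (not the paper's pipeline or data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
simulated_refl = rng.random((500, 8))        # placeholder: 500 PROSAIL runs x 8 S2 bands
simulated_ccc = rng.random(500) * 3.0        # placeholder canopy chlorophyll content values

kernel = 1.0 * RBF(length_scale=np.ones(8)) + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(simulated_refl, simulated_ccc)       # train on RTM simulations

observed_refl = rng.random((100, 8))         # placeholder Sentinel-2 pixel spectra
ccc_mean, ccc_std = gpr.predict(observed_refl, return_std=True)  # retrieval with uncertainty
```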

19 pages, 4660 KB  
Article
Analysis of Grounding Schemes and Machine Learning-Based Fault Detection in Hybrid AC/DC Distribution System
by Zeeshan Haider, Shehzad Alamgir, Muhammad Ali, S. Jarjees Ul Hassan and Arif Mehdi
Electricity 2026, 7(1), 11; https://doi.org/10.3390/electricity7010011 - 2 Feb 2026
Abstract
The increasing integration of hybrid AC/DC networks in modern power systems introduces new challenges in fault detection and grounding scheme design, necessitating advanced techniques for stable and reliable operation. This paper investigates fault detection and grounding schemes in hybrid AC/DC networks using a machine learning (ML) approach to enhance accuracy, speed, and adaptability. Traditional methods often struggle with the dynamic and complex nature of hybrid systems, leading to delayed or incorrect fault identification. To address this, we propose a data-driven ML framework that leverages features such as voltage, current, and frequency characteristics for real-time detection and classification of faults. Additionally, the effectiveness of various grounding schemes is analyzed under different fault conditions to ensure system stability and safety. Simulation results on a hybrid AC/DC test network demonstrate the superior performance of the proposed ML-based fault detection method compared to conventional techniques, achieving high precision, recall, and robustness against noise and varying operating conditions. The findings highlight the potential of ML in improving fault management and grounding strategy optimization for future hybrid power grids. Full article
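As a generic illustration of the data-driven framework described above, the sketch below trains a tree-ensemble classifier on voltage/current/frequency features; the feature matrix, fault labels, and the choice of a random forest are placeholder assumptions, not the paper's model.

```python
# Hypothetical fault-classification sketch for a hybrid AC/DC network (not the paper's model).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))      # placeholder features, e.g. V_rms, I_rms, frequency, dV/dt, dI/dt, THD
y = rng.integers(0, 4, size=2000)   # placeholder classes, e.g. no fault, AC fault, DC pole-ground, DC pole-pole

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))  # precision/recall per fault class
```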
