Search Results (4,275)

Search Parameters:
Keywords = field gradient

27 pages, 4342 KB  
Article
Energy–Latency–Accuracy Trade-off in UAV-Assisted VECNs: A Robust Optimization Approach Under Channel Uncertainty
by Tiannuo Liu, Menghan Wu, Hanjun Yu, Yixin He, Dawei Wang, Li Li and Hongbo Zhao
Drones 2026, 10(2), 86; https://doi.org/10.3390/drones10020086 (registering DOI) - 26 Jan 2026
Abstract
Federated learning (FL)-based vehicular edge computing networks (VECNs) are emerging as a key enabler of intelligent transportation systems, as their privacy-preserving and distributed architecture can safeguard vehicle data while reducing latency and energy consumption. However, conventional roadside units face processing bottlenecks in dense traffic and at the network edge, motivating the adoption of unmanned aerial vehicle (UAV)-assisted VECNs. To address this challenge, this paper proposes a UAV-assisted VECN framework with FL, aiming to improve model accuracy while minimizing latency and energy consumption during computation and transmission. Specifically, a reputation-based client selection mechanism is introduced to enhance the accuracy and reliability of federated aggregation. Furthermore, to address the channel dynamics induced by high vehicle mobility, we design a robust reinforcement learning-based resource allocation scheme. In particular, an asynchronous parallel deep deterministic policy gradient (APDDPG) algorithm is developed to adaptively allocate computation and communication resources in response to real-time channel states and task demands. To ensure consistency with real vehicular communication environments, field experiments were conducted and the obtained measurements were used as simulation parameters to analyze the proposed algorithm. Compared with state-of-the-art algorithms, the developed APDDPG algorithm achieves 20% faster convergence, 9% lower energy consumption, an FL accuracy of 95.8%, and the smallest standard deviation under varying channel conditions, indicating the most robust performance. Full article
(This article belongs to the Special Issue Low-Latency Communication for Real-Time UAV Applications)
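The abstract builds its resource-allocation scheme on the deep deterministic policy gradient family. Below is a minimal single-learner DDPG update sketch; the asynchronous parallel workers and target networks of APDDPG are omitted, and the state/action dimensions, network widths, and hyperparameters are illustrative assumptions rather than the paper's VECN formulation.

```python
# Minimal DDPG-style update sketch (single learner, no target networks).
# State/action sizes and network widths are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 3   # assumed: channel state + task demand -> resource shares

actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, ACTION_DIM), nn.Sigmoid())   # resource fractions in [0, 1]
critic = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(batch, gamma=0.99):
    s, a, r, s_next = batch                       # tensors sampled from a replay buffer
    with torch.no_grad():
        q_target = r + gamma * critic(torch.cat([s_next, actor(s_next)], dim=1))
    q = critic(torch.cat([s, a], dim=1))
    critic_loss = nn.functional.mse_loss(q, q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Deterministic policy gradient: push actions toward higher critic value.
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

# Example: one update on a random mini-batch of 32 transitions.
batch = (torch.randn(32, STATE_DIM), torch.rand(32, ACTION_DIM),
         torch.randn(32, 1), torch.randn(32, STATE_DIM))
ddpg_update(batch)
```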
20 pages, 981 KB  
Article
Wrapped Cauchy Robust Approach to the Circular-Circular Regression Model
by Adnan Karaibrahimoglu, Mutlu Altuntas and Hani Hamdan
Mathematics 2026, 14(3), 426; https://doi.org/10.3390/math14030426 - 26 Jan 2026
Abstract
Circular–circular regression models are widely used to investigate relationships between angular variables in various applied fields, including biostatistics. The classical von Mises (vM) circular–circular regression model, however, is known to be sensitive to outliers due to its light-tailed error structure. In this study, we investigate the wrapped Cauchy (WC) circular–circular regression model as a robust alternative to the vM-based approach for analyzing circular data contaminated by outliers. Parameter estimation is performed via maximum likelihood (ML) using a modern constrained gradient-based optimization algorithm, namely the limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm with box constraints (L-BFGS-B), allowing for stable estimation under natural parameter bounds. Extensive simulation studies demonstrate that, under contaminated settings, the WC model provides substantially more stable parameter estimates than the vM model, yielding markedly lower mean squared error and variability, particularly for high concentration regimes and directional outliers. The robustness advantage of the WC model is further illustrated through a real biostatistical application involving the circular relationship between the months of diagnosis and surgical intervention in gastric cancer patients. Overall, the results highlight the practical benefits of WC-based circular–circular regression for robust inference in the presence of outliers. Full article
(This article belongs to the Special Issue New Trends in Big Data Analysis, Optimization, and Algorithms)
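The estimation step described here, maximum likelihood under box constraints with L-BFGS-B, can be sketched as follows. The wrapped Cauchy density is standard, but the linear-in-angle link for the mean direction and the synthetic data are illustrative assumptions; the paper's circular-circular link function may differ.

```python
# Sketch of wrapped Cauchy (WC) regression fitted by maximum likelihood with
# L-BFGS-B box constraints. The link mu_i = beta0 + beta1 * x_i is an assumed
# simplification for illustration only.
import numpy as np
from scipy.optimize import minimize

def wc_neg_loglik(params, x, y):
    beta0, beta1, rho = params
    mu = beta0 + beta1 * x                                   # assumed linear-in-angle link
    dens = (1 - rho**2) / (2 * np.pi * (1 + rho**2 - 2 * rho * np.cos(y - mu)))
    return -np.sum(np.log(dens))

rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, 200)
y = 0.5 + 0.8 * x + 0.05 * rng.standard_cauchy(200)          # heavy-tailed angular noise
y = np.mod(y + np.pi, 2 * np.pi) - np.pi                      # wrap responses into (-pi, pi]

res = minimize(wc_neg_loglik, x0=[0.0, 0.5, 0.5], args=(x, y),
               method="L-BFGS-B",
               bounds=[(-np.pi, np.pi), (-5.0, 5.0), (1e-6, 1 - 1e-6)])  # box constraint on rho
print("beta0, beta1, rho:", res.x)
```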
34 pages, 7482 KB  
Article
Investigating Unsafe Pedestrian Behavior at Urban Road Midblock Crossings Using Machine Learning: Lessons from Alexandria, Egypt
by Ahmed Mahmoud Darwish, Sherif Shokry, Maged Zagow, Marwa Elbany, Ali Qabur, Talal Obaid Alshammari, Ahmed Elkafoury and Mohamed Shaaban Alfiqi
Buildings 2026, 16(3), 505; https://doi.org/10.3390/buildings16030505 - 26 Jan 2026
Abstract
Examining pedestrian crossing violations at high-risk road midblock crossings has become essential, particularly in high-speed corridors, given the fatalities resulting from accidents at such crossings. Hence, this article investigates such behavior in Alexandria, Egypt, as a credible case study in a developing country. According to our research methodology, a comprehensive dataset of over 2400 field-observed video recordings was used for real-life data collection. Machine learning (ML) models, such as CatBoost and gradient boosting (GB), were employed to predict crossing decisions. The models showed that risky behavior is strongly influenced by waiting time, crossing time, and the number of crossing attempts. The highest predictive performance was achieved by CatBoost and gradient boosting, indicating strong interpersonal influence within small groups engaging in unsafe road-crossing behavior. In the same context, the Shapley additive explanation (SHAP) values for these variables were 3, 2, and 0.60, respectively. Subsequently, based on SHAP sensitivity analysis, the results show that pedestrian crossing time (s) had the highest tendency to push the model towards class 1 (i.e., crossing illegally), while total time (s) and age group (40–60 Y) had a significant negative influence on model prediction converging to class 0 (i.e., crossing legally). The results also showed that shorter exposure times increase the likelihood of crossing illegally. This research work is among the few studies that employ a behavior-based approach to understanding pedestrian behavior at midblock crossings. This study offers actionable insights and valuable information for urban designers and transportation planners when considering the design of midblock crossings. Full article
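A minimal sketch of the gradient boosting plus SHAP workflow the abstract describes follows; the feature names and synthetic data are illustrative assumptions, not the study's field-observed dataset.

```python
# Gradient boosting classifier with SHAP feature attributions (sketch).
# Feature names, labels, and data are hypothetical stand-ins.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
X = pd.DataFrame({
    "waiting_time_s": rng.exponential(20, 500),
    "crossing_time_s": rng.normal(8, 2, 500),
    "crossing_attempts": rng.integers(1, 5, 500),
})
# Hypothetical label: 1 = crossed illegally, 0 = crossed legally.
y = (X["crossing_time_s"] + 0.1 * X["waiting_time_s"] + rng.normal(0, 2, 500) > 11).astype(int)

model = GradientBoostingClassifier().fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)       # per-feature contributions
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```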
29 pages, 17585 KB  
Article
An Adaptive Difference Policy Gradient Method for Cooperative Multi-USV Pursuit in Multi-Agent Reinforcement Learning
by Zhen Du, Shenhua Yang and Weijun Wang
J. Mar. Sci. Eng. 2026, 14(3), 252; https://doi.org/10.3390/jmse14030252 - 25 Jan 2026
Abstract
In constrained waters, multi-USV cooperative encirclement of highly maneuverable targets is strongly affected by partial observability as well as obstacle and boundary constraints, posing substantial challenges to stable cooperative control. Existing deep reinforcement learning methods often suffer from low exploration efficiency, pronounced policy oscillations, and difficulties in maintaining the desired encirclement geometry in complex environments. To address these challenges, this paper proposes an adaptive difference-based multi-agent policy gradient method (MAADPG) under the centralized training and decentralized execution (CTDE) paradigm. MAADPG deeply integrates potential-field-inspired geometric guidance with a multi-agent deterministic policy gradient framework. Specifically, a guidance module generates geometrically interpretable candidate actions for each pursuer. Moreover, a difference-driven adaptive action adoption mechanism is introduced at the behavior policy execution level, where guided actions and policy actions are locally compared and the guided action is adopted only when it yields a significantly positive return difference. This design enables MAADPG to select higher-quality interaction actions, improve exploration efficiency, and enhance policy stability. Experimental results demonstrate that MAADPG consistently achieves fast convergence, stable coordination, and reliable encirclement formation across representative pursuit–encirclement scenarios, including obstacle-free, sparsely obstructed, and densely obstructed environments, thereby validating its applicability and stability for multi-USV encirclement tasks in constrained waters. Full article
(This article belongs to the Section Ocean Engineering)
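The difference-driven adaptive action adoption mechanism described above can be illustrated with a short sketch: the critic's return estimate for the policy action is compared with that of the geometry-guided action, and the guided action is adopted only when its advantage clears a threshold. The critic callable and the threshold value below are illustrative assumptions.

```python
# Sketch of difference-driven action adoption: adopt the guided action only when
# its estimated return exceeds the policy action's by a margin. The value
# function and threshold are hypothetical.
import numpy as np

def select_action(state, policy_action, guided_action, q_value, threshold=0.05):
    """q_value(state, action) -> estimated return."""
    diff = q_value(state, guided_action) - q_value(state, policy_action)
    return guided_action if diff > threshold else policy_action

# Toy usage with a hand-written quadratic value function.
q = lambda s, a: -np.sum((np.asarray(a) - 0.3) ** 2)
print(select_action(None, policy_action=[0.0, 0.0], guided_action=[0.25, 0.3], q_value=q))
```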
28 pages, 16157 KB  
Article
A Robust Skeletonization Method for High-Density Fringe Patterns in Holographic Interferometry Based on Parametric Modeling and Strip Integration
by Sergey Lychev and Alexander Digilov
J. Imaging 2026, 12(2), 54; https://doi.org/10.3390/jimaging12020054 - 24 Jan 2026
Viewed by 38
Abstract
Accurate displacement field measurement by holographic interferometry requires robust analysis of high-density fringe patterns, which is hindered by speckle noise inherent in any interferogram, no matter how perfect. Conventional skeletonization methods, such as edge detection algorithms and active contour models, often fail under these conditions, producing fragmented and unreliable fringe contours. This paper presents a novel skeletonization procedure that simultaneously addresses three fundamental challenges: (1) topology preservation—by representing the fringe family within a physics-informed, finite-dimensional parametric subspace (e.g., Fourier-based contours), ensuring global smoothness, connectivity, and correct nesting of each fringe; (2) extreme noise robustness—through a robust strip integration functional that replaces noisy point sampling with Gaussian-weighted intensity averaging across a narrow strip, effectively suppressing speckle while yielding a smooth objective function suitable for gradient-based optimization; and (3) sub-pixel accuracy without phase extraction—leveraging continuous bicubic interpolation within a recursive quasi-optimization framework that exploits fringe similarity for precise and stable contour localization. The method’s performance is quantitatively validated on synthetic interferograms with controlled noise, demonstrating significantly lower error compared to baseline techniques. Practical utility is confirmed by successful processing of a real interferogram of a bent plate containing over 100 fringes, enabling precise displacement field reconstruction that closely matches independent theoretical modeling. The proposed procedure provides a reliable tool for processing challenging interferograms where traditional methods fail to deliver satisfactory results. Full article
(This article belongs to the Special Issue Image Segmentation: Trends and Challenges)
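A rough sketch of the strip integration idea from the abstract: intensities are averaged with Gaussian weights across a narrow strip normal to a candidate fringe contour rather than sampled at single pixels, yielding a smooth objective. The strip width, weighting, parametric contour, and use of spline interpolation (as a stand-in for the paper's bicubic scheme) are illustrative assumptions.

```python
# Gaussian-weighted strip integration over a candidate contour (sketch).
import numpy as np
from scipy.ndimage import map_coordinates

def strip_integral(image, contour_xy, normals_xy, half_width=3.0, n_samples=7):
    offsets = np.linspace(-half_width, half_width, n_samples)
    weights = np.exp(-(offsets / (half_width / 2)) ** 2)       # Gaussian taper across the strip
    weights /= weights.sum()
    total = 0.0
    for off, w in zip(offsets, weights):
        pts = contour_xy + off * normals_xy                     # shift contour along its normals
        vals = map_coordinates(image, [pts[:, 1], pts[:, 0]], order=3, mode="nearest")
        total += w * vals.mean()
    return total                                                # smooth objective for gradient-based optimization

# Toy usage: a synthetic fringe image and a circular candidate contour.
yy, xx = np.mgrid[0:256, 0:256]
image = 0.5 + 0.5 * np.cos(0.15 * np.hypot(xx - 128, yy - 128))
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
contour = np.c_[128 + 40 * np.cos(t), 128 + 40 * np.sin(t)]
normals = np.c_[np.cos(t), np.sin(t)]
print("strip objective:", strip_integral(image, contour, normals))
```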
16 pages, 3623 KB  
Article
Dairy Farm Streptococcus agalactiae in a Region of Northeast Brazil: Genetic Diversity, Resistome, and Virulome
by Vinicius Pietta Perez, Fernanda Zani Manieri, Luciana Roberta Torini, Carlos Gabriel Andrade Barbosa, Fabio Campioni, Fabiana Caroline Zempulski Volpato, Eloíza Helena Campana, Artur Cezar de Carvalho Fernandes, Afonso Luís Barth, Eduardo Sergio Soares Sousa, Celso Jose Bruno de Oliveira and Ilana Lopes Baratella da Cunha Camargo
Pathogens 2026, 15(2), 128; https://doi.org/10.3390/pathogens15020128 - 24 Jan 2026
Viewed by 99
Abstract
Streptococcus agalactiae is a major cause of bovine mastitis, which affects the quality and yield of milk. The main strategy for controlling this pathogen on dairy farms is the use of antibiotics. This study investigated the clonality, serotype distribution, antimicrobial susceptibility, and presence of resistance and virulence genes in 46 S. agalactiae isolates obtained from raw bovine milk in northeastern Brazil. Capsular types were determined using multiplex PCR and antibiotic susceptibility profiles were determined using disc diffusion or the gradient strip method. Clonal diversity was evaluated via pulsed-field gel electrophoresis. Eight isolates were sequenced using short- and long-read methods. There was high overall genetic diversity, whereas the resistance and virulence profiles were largely homogeneous within herds. Tetracycline and macrolide resistance was frequent and mediated by tetO and ermB and less frequently by tetM. Genome analysis demonstrated that resistance genes are present in mobile genetic elements that are also present in human isolates, and phylogenomic analyses identified ST-103 as the predominant and multi-host-adapted lineage, whereas ST-91 clustered with the bovine-adapted lineage. These findings expand the molecular epidemiology of S. agalactiae in dairy farms of a region in northeast Brazil and highlight the importance of surveillance strategies for guiding mastitis control and mitigating the spread of antimicrobial resistance. Full article
19 pages, 1658 KB  
Article
Unraveling the Underlying Factors of Cognitive Failures in Construction Workers: A Safety-Centric Exploration
by Muhammad Arsalan Khan, Muhammad Asghar, Shiraz Ahmed, Muhammad Abu Bakar Tariq, Mohammad Noman Aziz and Rafiq M. Choudhry
Buildings 2026, 16(3), 476; https://doi.org/10.3390/buildings16030476 - 23 Jan 2026
Viewed by 52
Abstract
Unsafe behaviors at construction sites often originate from cognitive failures such as lapses in memory and attention. This study proposes a novel, hybrid framework to systematically identify and predict the key contributors of cognitive failures among construction workers. First, a detailed literature review was conducted to identify 30 candidate factors related to cognitive failures and unsafe behaviors at construction sites. Thereafter, 10 construction safety experts ranked these factors to prioritize the most influential variables. A questionnaire was then developed and field surveys were conducted across various construction sites. A total of 500 valid responses were collected from construction workers involved in residential, highway, and dam projects in Pakistan. The collected data was first analyzed using conventional statistical analysis techniques like correlation analysis followed by multiple linear and binary logistic regression to estimate factor effects on cognitive failure outcomes. Thereafter, machine-learning models (including support vector machine, random forest, and gradient boosting) were implemented to enable a more robust prediction of cognitive failures. The findings consistently identified fatigue and stress as the strongest predictors of cognitive failures. These results extend unsafe behavior frameworks by highlighting the significant factors influencing cognitive failures. Moreover, the findings also imply the importance of targeted interventions, including fatigue management, structured training, and evidence-based stress reduction, to improve safety conditions at construction sites. Full article
(This article belongs to the Special Issue Occupational Safety and Health in Building Construction Project)
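The binary logistic regression step mentioned in the abstract can be sketched as follows; the factor names and synthetic survey responses are illustrative assumptions, not the study's questionnaire data.

```python
# Binary logistic regression estimating factor effects on a cognitive-failure
# outcome (sketch). Factors and labels are hypothetical stand-ins.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = pd.DataFrame({
    "fatigue": rng.integers(1, 6, 500),        # assumed 5-point Likert scores
    "stress": rng.integers(1, 6, 500),
    "training_hours": rng.normal(20, 5, 500),
})
y = ((0.8 * X["fatigue"] + 0.7 * X["stress"] - 0.05 * X["training_hours"]
      + rng.normal(0, 1.5, 500)) > 6).astype(int)   # hypothetical cognitive-failure label

model = LogisticRegression(max_iter=1000).fit(X, y)
odds_ratios = np.exp(model.coef_[0])                 # effect size per unit increase in each factor
print(dict(zip(X.columns, odds_ratios.round(2))))
```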
21 pages, 9353 KB  
Article
YOLOv10n-Based Peanut Leaf Spot Detection Model via Multi-Dimensional Feature Enhancement and Geometry-Aware Loss
by Yongpeng Liang, Lei Zhao, Wenxin Zhao, Shuo Xu, Haowei Zheng and Zhaona Wang
Appl. Sci. 2026, 16(3), 1162; https://doi.org/10.3390/app16031162 - 23 Jan 2026
Viewed by 79
Abstract
Precise identification of early peanut leaf spot is strategically significant for safeguarding oilseed supplies and reducing pesticide reliance. However, general-purpose detectors face severe domain adaptation bottlenecks in unstructured field environments due to small feature dissipation, physical occlusion, and class imbalance. To address this, this study constructs a dataset spanning two phenological cycles and proposes POD-YOLO, a physics-aware and dynamics-optimized lightweight framework. Anchored on the YOLOv10n architecture and adhering to a “data-centric” philosophy, the framework optimizes the parameter convergence path via a synergistic “Augmentation-Loss-Optimization” mechanism: (1) Input Stage: A Physical Domain Reconstruction (PDR) module is introduced to simulate physical occlusion, blocking shortcut learning and constructing a robust feature space; (2) Loss Stage: A Loss Manifold Reshaping (LMR) mechanism is established utilizing dual-branch constraints to suppress background gradients and enhance small target localization; and (3) Optimization Stage: A Decoupled Dynamic Scheduling (DDS) strategy is implemented, integrating AdamW with cosine annealing to ensure smooth convergence on small-sample data. Experimental results demonstrate that POD-YOLO achieves a 9.7% precision gain over the baseline and 83.08% recall, all while maintaining a low computational cost of 8.4 GFLOPs. This study validates the feasibility of exploiting the potential of lightweight architectures through optimization dynamics, offering an efficient paradigm for edge-based intelligent plant protection. Full article
(This article belongs to the Section Optics and Lasers)
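The optimization-stage pairing the abstract mentions, AdamW with cosine annealing, looks roughly like the sketch below in PyTorch. The toy model, learning rate, weight decay, and epoch count are illustrative assumptions, not the POD-YOLO training configuration.

```python
# AdamW + cosine-annealing learning-rate schedule (sketch).
import torch
import torch.nn as nn

model = nn.Conv2d(3, 16, kernel_size=3, padding=1)            # stand-in for the detector backbone
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.05)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100, eta_min=1e-5)

for epoch in range(100):
    x = torch.randn(4, 3, 64, 64)
    loss = model(x).pow(2).mean()                              # placeholder for the detection loss
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    scheduler.step()                                           # learning rate decays along a cosine curve
```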
31 pages, 27773 KB  
Article
Machine Learning Techniques for Modelling the Water Quality of Coastal Lagoons
by Juan Marcos Lorente-González, José Palma, Fernando Jiménez, Concepción Marcos and Angel Pérez-Ruzafa
Water 2026, 18(3), 297; https://doi.org/10.3390/w18030297 - 23 Jan 2026
Viewed by 168
Abstract
This study evaluates the performance of several machine learning models in predicting dissolved oxygen concentration in the surface layer of the Mar Menor coastal lagoon. In recent years, this ecosystem has suffered a continuous process of eutrophication and episodes of hypoxia, mainly due to continuous influx of nutrients from agricultural activities, causing severe water quality deterioration and mortality of local flora and fauna. In this context, monitoring the ecological status of the Mar Menor and its watershed is essential to understand the environmental dynamics that trigger these dystrophic crises. Using field data, this study evaluates the performance of eight predictive modelling approaches, encompassing regularised linear regression methods (Ridge, Lasso, and Elastic Net), instance-based learning (k-nearest neighbours, KNN), kernel-based regression (support vector regression with a radial basis function kernel, SVR-RBF), and tree-based ensemble techniques (Random Forest, Regularised Random Forest, and XGBoost), under multiple experimental settings involving spatial variability and varying time lags applied to physicochemical and meteorological predictors. The results showed that incorporating time lags of approximately two weeks in physicochemical variables markedly improves the models’ ability to generalise to new data. Tree-based regression models achieved the best overall performance, with eXtreme Gradient Boosting providing the highest evaluation metrics. Finally, analysing predictions by sampling point reveals spatial patterns, underscoring the influence of local conditions on prediction quality and the need to consider both spatial structure and temporal inertia when modelling complex coastal lagoon systems. Full article
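The lagged-predictor setup that the abstract highlights, shifting physicochemical variables by roughly two weeks before fitting a boosted-tree regressor, can be sketched as below. Column names, the lag length in days, and the synthetic series are illustrative assumptions.

```python
# Lagged features + XGBoost regressor for dissolved oxygen (sketch).
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "chlorophyll": rng.normal(5, 1, n),
    "temperature": rng.normal(20, 3, n),
    "dissolved_oxygen": rng.normal(7, 1, n),
})
LAG = 14                                                    # ~two weeks at daily sampling (assumed)
for col in ["chlorophyll", "temperature"]:
    df[f"{col}_lag{LAG}"] = df[col].shift(LAG)
df = df.dropna()

X = df[[f"chlorophyll_lag{LAG}", f"temperature_lag{LAG}"]]
y = df["dissolved_oxygen"]
model = XGBRegressor(n_estimators=200, max_depth=4).fit(X, y)
print("R^2 on training data:", round(model.score(X, y), 3))
```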
27 pages, 3203 KB  
Article
Machine Learning and Physics-Informed Neural Networks for Thermal Behavior Prediction in Porous TPMS Metals
by Mohammed Yahya and Mohamad Ziad Saghir
Fluids 2026, 11(2), 29; https://doi.org/10.3390/fluids11020029 - 23 Jan 2026
Viewed by 80
Abstract
Triply periodic minimal surface (TPMS) structures provide high surface area to volume ratios and tunable conduction pathways, but predicting their thermal behavior across different metallic materials remains challenging because multi-material experimentation is costly and full-scale simulations require extremely fine meshes to resolve the complex geometry. This study develops a physics-informed neural network (PINN) that reconstructs steady-state temperature fields in TPMS Gyroid structures using only two experimentally measured materials, Aluminum and Silver, which were tested under identical heat flux and flow conditions. The model incorporates conductivity ratio physics, Fourier-based thermal scaling, and complete spatial temperature profiles directly into the learning process to maintain physical consistency. Validation using the complete Aluminum and Silver datasets confirms excellent agreement for Aluminum and strong accuracy for Silver despite its larger temperature gradients. Once trained, the PINN can generalize the learned behavior to nine additional metals using only their conductivity ratios, without requiring new experiments or numerical simulations. A detailed heat transfer analysis is also performed for Magnesium, a lightweight material that is increasingly considered for thermal management applications. Since no published TPMS measurements for Magnesium currently exist, the predicted Nusselt numbers obtained from the PINN-generated temperature fields represent the first model-based evaluation of its convective performance. The results demonstrate that the proposed PINN provides an efficient, accurate, and scalable surrogate model for predicting thermal behavior across multiple metallic TPMS structures and supports the design and selection of materials for advanced porous heat technologies. Full article
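A minimal physics-informed neural network sketch in the spirit of the abstract: a network maps spatial coordinates plus a conductivity ratio to temperature, and training combines a data-fit term with a steady-state conduction residual obtained by automatic differentiation. The geometry, boundary data, loss weighting, and conductivity-ratio encoding are illustrative assumptions, not the paper's TPMS model.

```python
# PINN sketch: data loss + steady-state conduction (Laplacian) residual.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def pde_residual(xy, k_ratio):
    xy = xy.requires_grad_(True)
    T = net(torch.cat([xy, k_ratio], dim=1))
    grads = torch.autograd.grad(T.sum(), xy, create_graph=True)[0]
    lap = 0.0
    for i in range(2):                                        # Laplacian via second derivatives
        lap = lap + torch.autograd.grad(grads[:, i].sum(), xy, create_graph=True)[0][:, i]
    return lap                                                # steady state: Laplacian(T) = 0

# One training step on random collocation points and placeholder measurements.
xy = torch.rand(128, 2)
k = torch.full((128, 1), 0.55)                                # assumed conductivity-ratio feature
xy_data, T_data = torch.rand(16, 2), torch.rand(16, 1)        # placeholder measured temperatures
loss = pde_residual(xy, k).pow(2).mean() \
     + nn.functional.mse_loss(net(torch.cat([xy_data, torch.full((16, 1), 0.55)], dim=1)), T_data)
opt.zero_grad(); loss.backward(); opt.step()
```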
23 pages, 3790 KB  
Article
AI-Powered Thermal Fingerprinting: Predicting PLA Tensile Strength Through Schlieren Imaging
by Mason Corey, Kyle Weber and Babak Eslami
Polymers 2026, 18(3), 307; https://doi.org/10.3390/polym18030307 - 23 Jan 2026
Viewed by 154
Abstract
Fused deposition modeling (FDM) suffers from unpredictable mechanical properties in nominally identical prints. Current quality assurance relies on destructive testing or expensive post-process inspection, while existing machine learning approaches focus primarily on printing parameters rather than real-time thermal environments. The objective of this proof-of-concept study is to develop a low-cost, non-destructive framework for predicting tensile strength during FDM printing by directly measuring convective thermal gradients surrounding the print. To accomplish this, we introduce thermal fingerprinting: a novel non-destructive technique that combines Background-Oriented Schlieren (BOS) imaging with machine learning to predict tensile strength during printing. We captured thermal gradient fields surrounding PLA specimens (n = 30) under six controlled cooling conditions using consumer-grade equipment (Nikon D750 camera, household hairdryers) to demonstrate low-cost implementation feasibility. BOS imaging was performed at nine critical layers during printing, generating thermal gradient data that was processed into features for analysis. Our initial dual-model ensemble system successfully classified cooling conditions (100%) and showed promising correlations with tensile strength (initial 80/20 train–test validation: R2 = 0.808, MAE = 0.279 MPa). However, more rigorous cross-validation revealed the need for larger datasets to achieve robust generalization (five-fold cross-validation R2 = 0.301, MAE = 0.509 MPa), highlighting typical challenges in small-sample machine learning applications. This work represents the first successful application of Schlieren imaging to polymer additive manufacturing and establishes a methodological framework for real-time quality prediction. The demonstrated framework is directly applicable to real-time, non-contact quality assurance in FDM systems, enabling on-the-fly identification of mechanically unreliable prints in laboratory, industrial, and distributed manufacturing environments without interrupting production. Full article
(This article belongs to the Special Issue 3D/4D Printing of Polymers: Recent Advances and Applications)
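The validation contrast the abstract reports, a single 80/20 hold-out split versus five-fold cross-validation on a small sample, is easy to reproduce in outline; the synthetic features and target below are illustrative assumptions (only n = 30 specimens, as in the study).

```python
# Hold-out split vs. 5-fold cross-validation for an ensemble regressor (sketch).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(30, 6))                        # placeholder Schlieren-derived features
y = 50 + 3 * X[:, 0] + rng.normal(0, 1, 30)         # placeholder tensile strength (MPa)

model = GradientBoostingRegressor(random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
print("hold-out R^2:", round(model.fit(X_tr, y_tr).score(X_te, y_te), 3))
print("5-fold CV R^2:", cross_val_score(model, X, y, cv=5, scoring="r2").mean().round(3))
```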
24 pages, 5280 KB  
Article
MA-DeepLabV3+: A Lightweight Semantic Segmentation Model for Jixin Fruit Maturity Recognition
by Leilei Deng, Jiyu Xu, Di Fang and Qi Hou
AgriEngineering 2026, 8(2), 40; https://doi.org/10.3390/agriengineering8020040 - 23 Jan 2026
Viewed by 94
Abstract
Jixin fruit (Malus domestica ‘Jixin’) is a high-value specialty fruit of significant economic importance in northeastern and northwestern China. Automatic recognition of fruit maturity is a critical prerequisite for intelligent harvesting. However, challenges inherent to field environments—including heterogeneous ripeness levels among fruits on the same plant, gradual color transitions during maturation that result in ambiguous boundaries, and occlusion by branches and foliage—render traditional image recognition methods inadequate for simultaneously achieving high recognition accuracy and computational efficiency. Although existing deep learning models can improve recognition accuracy, their substantial computational demands and high hardware requirements preclude deployment on resource-constrained embedded devices such as harvesting robots. To achieve the rapid and accurate identification of Jixin fruit maturity, this study proposes Multi-Attention DeepLabV3+ (MA-DeepLabV3+), a streamlined semantic segmentation framework derived from an enhanced DeepLabV3+ model. First, a lightweight backbone network is adopted to replace the original complex structure, substantially reducing computational burden. Second, a Multi-Scale Self-Attention Module (MSAM) is proposed to replace the traditional Atrous Spatial Pyramid Pooling (ASPP) structure, reducing network computational cost while enhancing the model’s perception capability for fruits of different scales. Finally, an Attention and Convolution Fusion Module (ACFM) is introduced in the decoding stage to significantly improve boundary segmentation accuracy and small target recognition ability. Experimental results on a self-constructed Jixin fruit dataset demonstrated that the proposed MA-DeepLabV3+ model achieves an mIoU of 86.13%, mPA of 91.29%, and F1 score of 90.05%, while reducing the number of parameters by 89.8% and computational cost by 55.3% compared to the original model. The inference speed increased from 41 frames per second (FPS) to 81 FPS, representing an approximately two-fold improvement. The model memory footprint is only 21 MB, demonstrating potential for deployment on embedded devices such as harvesting robots. Experimental results demonstrate that the proposed model achieves significant reductions in computational complexity while maintaining high segmentation accuracy, exhibiting robust performance particularly in complex scenarios involving color gradients, ambiguous boundaries, and occlusion. This study provides technical support for the development of intelligent Jixin fruit harvesting equipment and offers a valuable reference for the application of lightweight deep learning models in smart agriculture. Full article
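The reported segmentation metrics (mIoU and mPA) are conventionally computed from a per-pixel confusion matrix, as in the short sketch below; the two-class toy masks are illustrative assumptions, not the Jixin fruit dataset.

```python
# mIoU and mPA from a per-pixel confusion matrix (sketch).
import numpy as np

def segmentation_metrics(pred, target, num_classes):
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(target.ravel(), pred.ravel()):
        cm[t, p] += 1
    tp = np.diag(cm).astype(float)
    iou = tp / (cm.sum(axis=0) + cm.sum(axis=1) - tp)          # per-class intersection over union
    pa = tp / cm.sum(axis=1)                                    # per-class pixel accuracy
    return iou.mean(), pa.mean()

target = np.array([[0, 0, 1, 1], [0, 1, 1, 1]])
pred   = np.array([[0, 0, 1, 0], [0, 1, 1, 1]])
miou, mpa = segmentation_metrics(pred, target, num_classes=2)
print(f"mIoU = {miou:.3f}, mPA = {mpa:.3f}")
```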
15 pages, 2027 KB  
Article
Weight Standardization Fractional Binary Neural Network for Image Recognition in Edge Computing
by Chih-Lung Lin, Zi-Qing Liang, Jui-Han Lin, Chun-Chieh Lee and Kuo-Chin Fan
Electronics 2026, 15(2), 481; https://doi.org/10.3390/electronics15020481 - 22 Jan 2026
Viewed by 28
Abstract
In order to achieve better accuracy, modern models have become increasingly large, leading to an exponential increase in computational load, making it challenging to apply them to edge computing. Binary neural networks (BNNs) are models that quantize the filter weights and activations to 1-bit. These models are highly suitable for small chips like advanced RISC machines (ARMs), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), system-on-chips (SoCs) and other edge computing devices. To design a model that is more friendly to edge computing devices, it is crucial to reduce the floating-point operations (FLOPs). Batch normalization (BN) is an essential tool for binary neural networks; however, when convolution layers are quantized to 1-bit, the floating-point computation cost of BN layers becomes significantly high. This paper aims to reduce the floating-point operations by removing the BN layers from the model and introducing the scaled weight standardization convolution (WS-Conv) method to avoid the significant accuracy drop caused by the absence of BN layers, and to enhance the model performance through a series of optimizations, adaptive gradient clipping (AGC) and knowledge distillation (KD). Specifically, our model maintains a competitive computational cost and accuracy, even without BN layers. Furthermore, by incorporating a series of training methods, the model’s accuracy on CIFAR-100 is 0.6% higher than the baseline model, fractional activation BNN (FracBNN), while the total computational load is only 46% of the baseline model. With unchanged binary operations (BOPs), the FLOPs are reduced to nearly zero, making it more suitable for embedded platforms like FPGAs or other edge computers. Full article
(This article belongs to the Special Issue Advances in Algorithm Optimization and Computational Intelligence)
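A scaled weight standardization convolution of the kind the abstract uses in place of batch normalization standardizes each filter per output channel and rescales it before the forward pass. The sketch below follows that general recipe; the layer sizes and the gain parameterization are illustrative assumptions rather than the paper's exact WS-Conv.

```python
# Scaled weight standardization convolution (WS-Conv) sketch, BN-free.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.gain = nn.Parameter(torch.ones(self.out_channels, 1, 1, 1))

    def forward(self, x):
        w = self.weight
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        var = w.var(dim=(1, 2, 3), keepdim=True, unbiased=False)
        fan_in = w[0].numel()
        w = self.gain * (w - mean) / torch.sqrt(var * fan_in + 1e-4)   # scaled standardization
        return F.conv2d(x, w, self.bias, self.stride, self.padding, self.dilation, self.groups)

# Toy usage: a BN-free conv layer on a random batch.
layer = WSConv2d(3, 16, kernel_size=3, padding=1)
print(layer(torch.randn(2, 3, 32, 32)).shape)
```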
48 pages, 17559 KB  
Article
The Use of GIS Techniques for Land Use in a South Carpathian River Basin—Case Study: Pesceana River Basin, Romania
by Daniela Mihaela Măceșeanu, Remus Crețan, Ionuț-Adrian Drăguleasa, Amalia Niță and Marius Făgăraș
Sustainability 2026, 18(2), 1134; https://doi.org/10.3390/su18021134 - 22 Jan 2026
Viewed by 77
Abstract
This study is essential for medium- and long-term land-use management, as land-use patterns directly influence local economic and social development. Geographic Information System (GIS) techniques are fundamental tools for analyzing a wide range of geomorphological processes, including relief fragmentation density, relief energy, soil texture, slope gradient, and slope orientation. The present research focuses on the Pesceana river basin in the Southern Carpathians, Romania. It addresses three main objectives: (1) to analyze land-use dynamics derived from CORINE Land Cover (CLC) data between 1990 and 2018, along with the long-term distribution of the Normalized Difference Vegetation Index (NDVI) for the period 2000–2025; (2) to evaluate the basin’s natural potential by integrating topographic data (contour lines and profiles) with relief fragmentation density, relief energy, vegetation cover, soil texture, slope gradient, aspect, the Stream Power Index (SPI), and the Topographic Wetness Index (TWI); and (3) to assess the spatial distribution of habitat types, characteristic plant associations, and soil properties obtained through field investigations. For the first two research objectives, ArcGIS v. 10.7.2 served as the main tool for geospatial processing. For the third, field data were essential for geolocating soil samples and defining vegetation types across the entire 247 km² area. The spatiotemporal analysis from 1990 to 2018 reveals a landscape in which deciduous forests clearly dominate; they expanded from an initial area of 80 km² in 1990 to over 90 km² in 2012–2018. This increase, together with agricultural expansion, is reflected in the NDVI values after 2000, which show a sharp increase in vegetation density. Interestingly, other categories—such as water bodies, natural grasslands, and industrial areas—barely changed, each consistently representing less than 1 km² throughout the study period. These findings emphasize the importance of land-use/land-cover (LULC) data within the applied GIS model, which enhances the spatial characterization of geomorphological processes—such as vegetation distribution, soil texture, slope morphology, and relief fragmentation density. This integration allows a realistic assessment of the physical–geographic, landscape, and pedological conditions of the river basin. Full article
(This article belongs to the Special Issue Agro-Ecosystem Approaches to Sustainable Land Use and Food Security)
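The NDVI underlying the vegetation analysis is a simple per-pixel band ratio, NDVI = (NIR − Red) / (NIR + Red). The sketch below computes it on placeholder reflectance arrays standing in for the red and near-infrared bands of a satellite scene; the density threshold is an assumed common choice, not the study's.

```python
# Per-pixel NDVI computation (sketch) on placeholder band arrays.
import numpy as np

rng = np.random.default_rng(5)
red = rng.uniform(0.02, 0.3, size=(100, 100))     # placeholder red-band reflectance
nir = rng.uniform(0.2, 0.6, size=(100, 100))      # placeholder near-infrared reflectance

ndvi = (nir - red) / (nir + red + 1e-12)          # small epsilon avoids division by zero
dense_vegetation = (ndvi > 0.6).mean()            # fraction of pixels above an assumed density threshold
print(f"mean NDVI = {ndvi.mean():.3f}, dense-vegetation share = {dense_vegetation:.2%}")
```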
27 pages, 5594 KB  
Article
Conditional Tabular Generative Adversarial Network Based Clinical Data Augmentation for Enhanced Predictive Modeling in Chronic Kidney Disease Diagnosis
by Princy Randhawa, Veerendra Nath Jasthi, Kumar Piyush, Gireesh Kumar Kaushik, Malathy Batamulay, S. N. Prasad, Manish Rawat, Kiran Veernapu and Nithesh Naik
BioMedInformatics 2026, 6(1), 6; https://doi.org/10.3390/biomedinformatics6010006 - 22 Jan 2026
Viewed by 106
Abstract
The lack of clinical data for chronic kidney disease (CKD) prediction frequently results in model overfitting and inadequate generalization to novel samples. This research mitigates this constraint by utilizing a Conditional Tabular Generative Adversarial Network (CTGAN) to enhance a constrained CKD dataset sourced from the University of California, Irvine (UCI) Machine Learning Repository. The CTGAN model was trained to produce realistic synthetic samples that preserve the statistical and feature distributions of the original dataset. Multiple machine learning models, such as AdaBoost, Random Forest, Gradient Boosting, and K-Nearest Neighbors (KNN), were assessed on both the original and enhanced datasets with incrementally increasing degrees of synthetic data dilution. AdaBoost attained 100% accuracy on the original dataset, signifying considerable overfitting; however, the model exhibited enhanced generalization and stability with the CTGAN-augmented data. The occurrence of 100% test accuracy in several models should not be interpreted as realistic clinical performance. Instead, it reflects the limited size, clean structure, and highly separable feature distributions of the UCI CKD dataset. Similar behavior has been reported in multiple previous studies using this dataset. Such perfect accuracy is a strong indication of overfitting and limited generalizability, rather than feature or label leakage. This observation directly motivates the need for controlled data augmentation to introduce variability and improve model robustness. The dataset with the greatest dilution, comprising 2000 synthetic cases, attained a test accuracy of 95.27% utilizing a stochastic gradient boosting approach. Ensemble learning techniques, particularly gradient boosting and random forest, regularly surpassed conventional models like KNN in terms of predictive accuracy and resilience. The results demonstrate that CTGAN-based data augmentation introduces critical variability, diminishes model bias, and serves as an effective regularization technique. This method provides a viable alternative for reducing overfitting and improving predictive modeling accuracy in data-deficient medical fields, such as chronic kidney disease diagnosis. Full article
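The augmentation step described above can be sketched with the open-source ctgan package; the column names, the synthetic toy frame, and the epoch and batch-size settings below are illustrative assumptions, not the study's UCI CKD preprocessing.

```python
# CTGAN-based tabular augmentation sketch using the open-source `ctgan` package.
import numpy as np
import pandas as pd
from ctgan import CTGAN

rng = np.random.default_rng(6)
n = 200
real = pd.DataFrame({
    "serum_creatinine": rng.normal(2.0, 1.0, n).clip(0.4, 8.0),   # placeholder lab values
    "hemoglobin": rng.normal(12.5, 2.0, n).clip(6, 18),
    "ckd": rng.choice(["yes", "no"], size=n),                      # hypothetical class column
})

model = CTGAN(epochs=10, batch_size=100)
model.fit(real, discrete_columns=["ckd"])          # treat the class label as categorical
synthetic = model.sample(2000)                      # dilute the training pool with synthetic cases
augmented = pd.concat([real, synthetic], ignore_index=True)
print(augmented["ckd"].value_counts())
```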