
Search Results (510)

Search Parameters:
Keywords = OpenCV

24 pages, 1161 KB  
Article
Design of an Intelligent Inspection System for Power Equipment Based on Multi-Technology Integration
by Jie Luo, Jiangtao Guo, Guangxu Zhao, Yan Shao, Ziyi Yin and Gang Li
Electronics 2026, 15(4), 827; https://doi.org/10.3390/electronics15040827 (registering DOI) - 14 Feb 2026
Abstract
With the continuous advancement of the “dual-carbon” strategy, the penetration of renewable energy sources such as wind and photovoltaic (PV) power has steadily increased, imposing more stringent requirements on the safe and stable operation of modern power systems. As the core components of these systems, critical electrical devices operate under harsh conditions characterized by high voltage, strong electromagnetic interference (EMI), and confined high-temperature environments. Their operating status directly affects the reliability of the power supply, and any fault may trigger cascading failures, resulting in significant economic losses. To address the issues of low inspection efficiency, limited fault-identification accuracy, and unstable data transmission in strong-EMI environments, this study proposes an intelligent inspection system for power equipment based on multi-technology integration. The system incorporates a redundant dual-mode wireless transmission architecture combining Wireless Fidelity (Wi-Fi) and Fourth Generation (4G) cellular communication, ensuring reliable data transfer through adaptive link switching and anti-interference optimization. A You Only Look Once version 8 (YOLOv8) object-detection algorithm integrated with Open Source Computer Vision (OpenCV) techniques enables precise visual fault identification. Furthermore, a multi-source data-fusion strategy enhances diagnostic accuracy, while a dedicated monitoring scheme is developed for the water-cooling subsystem to simultaneously assess cooling performance and fault conditions. Experimental validation demonstrates that the proposed system achieves a fault-diagnosis accuracy exceeding 95.5%, effectively meeting the requirements of intelligent inspection in modern power systems and providing robust technical support for the operation and maintenance of critical electrical equipment. Full article

14 pages, 1768 KB  
Article
A Projection-Based, Ground-Level Reactive Agility Test for Soccer: Development and Validation
by Sabri Birlik, Mehmet Yıldız and Uğur Fidan
Appl. Sci. 2026, 16(4), 1798; https://doi.org/10.3390/app16041798 - 11 Feb 2026
Abstract
Most existing reactive agility assessments rely on screen-based or light-based stimuli that are spatially separated from the movement execution plane, thereby limiting ecological validity. The purpose of this study was to develop and validate a novel projection-based, ground-level reactive agility test (RAT) designed to better reflect the perceptual-motor demands of soccer. A total of 57 male soccer players (24 professional and 33 amateur) participated in the study. The system projects sport-specific visual stimuli onto the ground and uses a three-dimensional depth camera to track foot–stimulus interactions in real time. Two reactive agility protocols—a randomized simple reaction test and a randomized selective reaction test—were implemented. Construct validity was examined by comparing reactive agility and planned change-of-direction (PCOD) performance between professional and amateur players, as well as by analyzing relationships between PCOD and RAT outcomes. Professional players demonstrated significantly faster performance than amateurs across all tests (p < 0.01), with larger between-group differences observed in reactive agility compared with PCOD measures. Correlations between PCOD and reactive agility outcomes were low to moderate (r = 0.34–0.61), indicating that reactive agility captures performance components beyond planned movement ability. The reactive agility protocols showed excellent test–retest reliability (ICC = 0.92–0.99) with low measurement error (CV = 0.96–3.47%). In conclusion, the proposed projection-based, ground-level RAT provides a valid and reliable assessment of reactive agility in soccer. By integrating sport-specific stimuli and movement execution within the same spatial plane, the system enhances ecological validity and offers a scalable framework for both performance assessment and perceptual-cognitive training in open-skill sports. Full article
(This article belongs to the Special Issue Advanced Studies in Ball Sports Performance)

28 pages, 4875 KB  
Article
Development and Research of a Trough-Profile Seed Guide for Uniform Seed Distribution Across the Working Width of a Sweep Opener
by Nurbol Kakabayev, Kahim Mambetalin, Talgat Zhunusov, Maxat Amantayev, Adilet Sugirbay, Vladimir Odintsov, Saule Uzbergenova and Olzhas Mambetalin
AgriEngineering 2026, 8(2), 57; https://doi.org/10.3390/agriengineering8020057 - 4 Feb 2026
Abstract
Sowing represents one of the most critical technological processes in grain production, where seed distribution uniformity directly impacts crop yield by determining plant nutrition area efficiency. Conventional sowing methods with varying row spacings often fail to ensure optimal area utilization. This study enhances subsoil-broadcast sowing quality through a novel trough-profile seed guide that ensures uniform seed distribution across the sweep opener’s working width. The research employed a combined methodology of theoretical analysis, DEM simulation, and experimental studies. Theoretical analysis demonstrated that sowing parameters depend mainly on seeder forward speed and the rotational speed of the seed-metering device’s rollers. DEM simulations visually confirmed the mechanism of ordered seed flow formation within the guide. Experiments simulated drill seeder operation, evaluating forward speed (1.2–2.4 m/s) and fluted roller rotational speed (20–25 rpm) effects on distribution uniformity and sowing instability. The results at 20 rpm with 2.0–3.0 grains per cell showed a standard deviation reaching 0.2–0.5 pcs. (CV: 13.0–24.2%). At 25 rpm, the deviation increased to 0.5–1.0 pcs. (CV: 18.2–39.4%). For total sowing instability at 20 rpm with 10.0–15.0 grains per opener, the standard deviation measured 0.3–3.3 pcs. (CV: 2.8–22.4%), while at 25 rpm, with 15.0–19.0 grains, values reached 0.5–3.9 pcs. (CV: 3.5–19.8%). All parameters conform to agrotechnical requirements, confirming solution effectiveness and addressing the literature gap in uniform seed distribution across the sweep opener’s working width. Full article
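The uniformity metrics quoted above (standard deviation in pcs. and coefficient of variation in %) reduce to a short computation. The per-cell counts below are illustrative numbers, not the paper's raw data.

```python
import statistics

# Hypothetical seed counts per cell from one bench run (illustrative only).
counts = [2.0, 3.0, 2.5, 2.0, 3.0, 2.5]

mean = statistics.mean(counts)
sd = statistics.stdev(counts)      # sample standard deviation, pcs.
cv_percent = 100 * sd / mean       # coefficient of variation, %

print(f"mean={mean:.2f} pcs, SD={sd:.2f} pcs, CV={cv_percent:.1f}%")
```

A CV in this range would fall within the 13.0-24.2% band the authors report for the 20 rpm setting.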

20 pages, 1202 KB  
Article
Adaptive ORB Accelerator on FPGA: High Throughput, Power Consumption, and More Efficient Vision for UAVs
by Hussam Rostum and József Vásárhelyi
Signals 2026, 7(1), 13; https://doi.org/10.3390/signals7010013 - 2 Feb 2026
Abstract
Feature extraction and description are fundamental components of visual perception systems used in applications such as visual odometry, Simultaneous Localization and Mapping (SLAM), and autonomous navigation. In resource-constrained platforms, such as Unmanned Aerial Vehicles (UAVs), achieving real-time hardware acceleration on Field-Programmable Gate Arrays (FPGAs) is challenging. This work demonstrates an FPGA-based implementation of an adaptive ORB (Oriented FAST and Rotated BRIEF) feature extraction pipeline designed for high-throughput and energy-efficient embedded vision. The proposed architecture is a completely new design for the main algorithmic blocks of ORB, including the FAST (Features from Accelerated Segment Test) feature detector, Gaussian image filtering, moment computation, and descriptor generation. Adaptive mechanisms are introduced to dynamically adjust thresholds and filtering behavior, improving robustness under varying illumination conditions. The design is developed using a High-Level Synthesis (HLS) approach, where all processing modules are implemented as reusable hardware IP cores and integrated at the system level. The architecture is deployed and evaluated on two FPGA platforms, PYNQ-Z2 and KRIA KR260, and its performance is compared against CPU and GPU implementations using a dedicated C++ testbench based on OpenCV. Experimental results demonstrate significant improvements in throughput and energy efficiency while maintaining stable and scalable performance, making the proposed solution suitable for real-time embedded vision applications on UAVs and similar platforms. Notably, the FPGA implementation increases DSP utilization from 11% to 29% compared to the previous designs implemented by other researchers, effectively offloading computational tasks from general purpose logic (LUTs and FFs), reducing LUT usage by 6% and FF usage by 13%, while maintaining overall design stability, scalability, and acceptable thermal margins at 2.387 W. 
This work establishes a robust foundation for integrating the optimized ORB pipeline into larger drone systems and opens the door for future system-level enhancements. Full article

42 pages, 7342 KB  
Review
A Comprehensive Survey on VANET–IoT Integration Toward the Internet of Vehicles: Architectures, Communications, and System Challenges
by Khalid Kandali, Said Nouh, Lamyae Bennis and Hamid Bennis
Future Transp. 2026, 6(1), 32; https://doi.org/10.3390/futuretransp6010032 - 31 Jan 2026
Abstract
The convergence of Vehicular Ad Hoc Networks (VANETs) and the Internet of Things (IoT) is giving rise to the Internet of Vehicles (IoV), a key enabler of next-generation intelligent transportation systems. This survey provides a comprehensive analysis of the architectural, communication, and computing foundations that support VANET–IoT integration. We examine the roles of cloud, edge, and in-vehicle computing, and compare major V2X and IoT communication technologies, including DSRC, C-V2X, MQTT, and CoAP. The survey highlights how sensing, communication, and distributed intelligence interact to support applications such as collision avoidance, cooperative perception, and smart traffic management. We identify four central challenges—security, scalability, interoperability, and energy constraints—and discuss how these issues shape system design across the network stack. In addition, we review emerging directions including 6G-enabled joint communication and sensing, reconfigurable surfaces, digital twins, and quantum-assisted optimization. The survey concludes by outlining open research questions and providing guidance for the development of reliable, efficient, and secure VANET–IoT systems capable of supporting future transportation networks. Full article

5 pages, 173 KB  
Proceeding Paper
From Camera to Algorithm: OpenCV and AI Workshop for the Cybersecurity of the Future
by Pablo Natera-Muñoz, Fernando Broncano-Morgado and Pablo Garcia-Rodriguez
Eng. Proc. 2026, 123(1), 4; https://doi.org/10.3390/engproc2026123004 - 30 Jan 2026
Abstract
Artificial vision and artificial intelligence (AI) are increasingly interconnected in cybersecurity. This work presents an overview of OpenCV-based visual computing as a core tool for intelligent security systems that analyze real-time visual data. It includes practical exercises on face, edge, motion, and color detection, forming the basis for advanced object recognition using YOLOv10. Real applications, such as document processing and camera-based anomaly detection, are implemented in a microservice architecture with OpenCV, and deep learning frameworks. Integrating computer vision and AI is shown to be essential for developing resilient and autonomous cybersecurity infrastructures. Full article
(This article belongs to the Proceedings of First Summer School on Artificial Intelligence in Cybersecurity)
25 pages, 7647 KB  
Article
Urban Morphology, Deep Learning, and Artificial Intelligence-Based Characterization of Urban Heritage with the Recognition of Urban Patterns
by Elif Sarihan and Éva Lovra
Land 2026, 15(2), 230; https://doi.org/10.3390/land15020230 - 29 Jan 2026
Abstract
The tangible patterns of urban heritage sites are composed of complex components, and their interaction is involved in the process of formation and transformation. The past of cities also partially survives in the structure of the settlement, even if many buildings are demolished or significantly transformed. In this study, we introduce a model based on the integration of urban morphology, deep learning, and artificial intelligence methods for exploring the tangible patterns of urban heritage areas at different levels of scale. The proposed model is able to define and recognize the characteristics of the basic elements of urban forms at different resolution levels and reveal the patterns of the heritage. The basic principle of the model is the analysis of urban heritage sites located in different parts of the historical city center of Istanbul. We first define the relationship patterns and complexity levels, and form the characteristics of the urban form by using geographic information systems (GIS), based on the cartographic and contemporary maps. We then employ deep-learning-based convolutional neural networks (CNNs) for automatic segmentation, using OpenCV and NumPy in Python to extract streets and blocks on both historical and contemporary map sources. Based on the results integrated with human intelligence and the CNNs model, we finally generate several prompts for AI for better reasoning in the process of pattern recognition. Our results reveal that this integration increases GPT-4o’s assumptions in the pattern recognition process and, thus, it is able to reveal similar results to those obtained from the form features with different levels of specificity that are interdependent and complementary to human assessments. Full article
(This article belongs to the Special Issue Urban Morphology: A Perspective from Space (3rd Edition))

30 pages, 2354 KB  
Article
Augmented Reality vs. 2D in Basic Dental Education: Learning Outcomes, Visual Fatigue, and Technology Acceptance—A Mixed Methods Study
by Gloria Pérez-López-de-Echazarreta, María Consuelo Sáiz-Manzanares, María Camino Escolar-Llamazares and Lisa Alves-Gomes
Appl. Sci. 2026, 16(3), 1269; https://doi.org/10.3390/app16031269 - 27 Jan 2026
Abstract
In health sciences, the population-level burden of dental caries makes oral health education and the integration of theory and practice a priority. This quasi-experimental study examined whether augmented reality (AR) using the Merge Object Viewer improves basic dental knowledge, is associated with visual symptoms, and is acceptable compared with two-dimensional (2D) materials. A total of 321 students enrolled in health-related programmes participated and were assigned to three AR/2D sequences across three blocks (healthy dentition, cariogenesis, and pain management). Outcomes included knowledge (15-item test, pre and post intervention), computer vision syndrome (CVS-Q), acceptance (TAM-AR), and open-ended comments. Knowledge improved in all groups: 2D materials were superior for dentition, AR for cariogenesis, and both were comparable for pain. Two-thirds met criteria for symptoms on the CVS-Q, with a lower prevalence in the AR–2D–AR sequence. Acceptance was high, and comments highlighted usefulness, ease of use, and enjoyment, but also noted language issues and technical overload. Overall, AR appears to be a complementary tool to 2D materials in basic dental education. Full article

34 pages, 7495 KB  
Article
Advanced Consumer Behaviour Analysis: Integrating Eye Tracking, Machine Learning, and Facial Recognition
by José Augusto Rodrigues, António Vieira de Castro and Martín Llamas-Nistal
J. Eye Mov. Res. 2026, 19(1), 9; https://doi.org/10.3390/jemr19010009 - 19 Jan 2026
Abstract
This study presents DeepVisionAnalytics, an integrated framework that combines eye tracking, OpenCV-based computer vision (CV), and machine learning (ML) to support objective analysis of consumer behaviour in visually driven tasks. Unlike conventional self-reported surveys, which are prone to cognitive bias, recall errors, and social desirability effects, the proposed approach relies on direct behavioural measurements of visual attention. The system captures gaze distribution and fixation dynamics during interaction with products or interfaces. It uses AOI-level eye tracking metrics as the sole behavioural signal to infer candidate choice under constrained experimental conditions. In parallel, OpenCV and ML perform facial analysis to estimate demographic attributes (age, gender, and ethnicity). These attributes are collected independently and linked post hoc to gaze-derived outcomes. Demographics are not used as predictive features for choice inference. Instead, they are used as contextual metadata to support stratified, segment-level interpretation. Empirical results show that gaze-based inference closely reproduces observed choice distributions in short-horizon, visually driven tasks. Demographic estimates enable meaningful post hoc segmentation without affecting the decision mechanism. Together, these results show that multimodal integration can move beyond descriptive heatmaps. The platform produces reproducible decision-support artefacts, including AOI rankings, heatmaps, and segment-level summaries, grounded in objective behavioural data. By separating the decision signal (gaze) from contextual descriptors (demographics), this work contributes a reusable end-to-end platform for marketing and UX research. It supports choice inference under constrained conditions and segment-level interpretation without demographic priors in the decision mechanism. Full article
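The AOI-level choice inference described above amounts to aggregating dwell time per area of interest and ranking. A minimal sketch, with a hypothetical gaze log (labels, timestamps, and sampling interval are all illustrative, not the study's recordings):

```python
from collections import Counter

# Hypothetical gaze log: (timestamp_ms, AOI label) at a fixed sampling rate.
samples = [(0, "A"), (20, "A"), (40, "B"), (60, "B"), (80, "B"), (100, "B")]
SAMPLE_MS = 20  # assumed sampling interval

# Accumulate dwell time per AOI.
dwell_ms = Counter()
for _, aoi in samples:
    dwell_ms[aoi] += SAMPLE_MS

# Rank AOIs by total dwell; the top rank is the inferred candidate choice.
ranking = [aoi for aoi, _ in dwell_ms.most_common()]
print(ranking, dict(dwell_ms))
```

Demographic attributes estimated separately would only be joined to such rankings afterwards, as the abstract stresses, never used inside the inference itself.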

17 pages, 1176 KB  
Article
Portable Raspberry Pi Platform for Automated Interpretation of Lateral Flow Strip Tests
by Natalia Nakou, Panagiotis K. Tsikas and Despina P. Kalogianni
Sensors 2026, 26(2), 598; https://doi.org/10.3390/s26020598 - 15 Jan 2026
Abstract
Paper-based rapid tests are widely used in point-of-care diagnostics due to their simplicity and low cost. However, their application in quantitative analysis remains limited. In this work, a nucleic acid lateral flow assay (NALFA) was integrated with an automated image acquisition system built on a Raspberry Pi platform for the quantitative detection of SARS-CoV-2 virus, increasing the accuracy of the test compared to subjective visual interpretation. The assay employed blue polystyrene microspheres as reporters, while automated image capturing, image processing and quantification were performed using custom Python software (version 3.12). Signal quantification was achieved by comparing the grayscale intensity of the test line with that of a simultaneously captured negative control strip, allowing correction for illumination and background variability. Calibration curves were used for the training of the algorithm. The system was applied for the analysis of a series of samples with varying DNA concentrations, yielding recoveries between 84 and 108%. The proposed approach integrates a simple biosensor with an accessible computational platform to achieve full low-cost automation. This work introduces the first Raspberry Pi-driven image processing approach for accurate quantification of NALFAs and establishes a foundation for future low-cost, portable diagnostic systems targeting diverse nucleic acids, proteins, and biomarkers. Full article
(This article belongs to the Special Issue Development and Application of Optical Chemical Sensing)
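The control-referenced quantification step described above can be sketched in a few lines of NumPy. The uniform intensity crops below are illustrative stand-ins for real camera crops of the test-line regions, and the normalisation formula is a plausible reading of the abstract, not the authors' exact code.

```python
import numpy as np

# Hypothetical 8-bit grayscale crops of the test-line region on the sample
# strip and on the simultaneously captured negative-control strip.
test_line = np.full((10, 40), 120, dtype=np.uint8)
control_line = np.full((10, 40), 200, dtype=np.uint8)

# A darker line means more captured reporter microspheres, so the signal is
# the control-minus-test intensity difference, normalised by the control to
# correct for illumination and background variability.
signal = (control_line.mean() - test_line.mean()) / control_line.mean()
print(round(float(signal), 3))
```

Mapping such signals to concentration would then go through the calibration curves the authors use to train the algorithm.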

26 pages, 9336 KB  
Article
Simulation of Pedestrian Grouping and Avoidance Behavior Using an Enhanced Social Force Model
by Xiaoping Zhao, Wenjie Li, Zhenlong Mo, Yunqiang Xue and Huan Wu
Sustainability 2026, 18(2), 746; https://doi.org/10.3390/su18020746 - 12 Jan 2026
Abstract
To address the limitations of conventional social force models in simulating high-density pedestrian crowds, this study proposes an enhanced model that incorporates visual perception constraints, group-type labeling, and collective avoidance mechanisms. Pedestrian trajectories were extracted from a bidirectional commercial street scenario using OpenCV, with YOLOv8 and DeepSORT employed for multiple object tracking. Analysis of pedestrian grouping patterns revealed that 52% of pedestrians walked in pairs, with distinct avoidance behaviors observed. The improved model integrates three key mechanisms: a restricted 120° forward visual field, group-type classification based on social relationships, and an exponentially formulated inter-group repulsive force. Simulation results in MATLAB R2023b demonstrate that the proposed model outperforms conventional approaches in multiple aspects: speed distribution (error < 8%); spatial density overlap (>85%); trajectory similarity (reduction of 32% in Dynamic Time Warping distance); and avoidance behavior accuracy (82% simulated vs. 85% measured). This model serves as a quantitative simulation tool and decision-making basis for the planning of pedestrian spaces, crowd organization management, and the optimization of emergency evacuation schemes in high-density pedestrian areas such as commercial streets and subway stations. Consequently, it contributes to enhancing pedestrian mobility efficiency and public safety, thereby supporting the development of a sustainable urban slow transportation system. Full article
(This article belongs to the Collection Advances in Transportation Planning and Management)
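The exponentially formulated inter-group repulsive force mentioned above has the general social-force shape F = A * exp((r - d) / B) * n_hat. A sketch of one force evaluation follows; A, B, r, and the group positions are illustrative values, not the calibrated parameters from the study.

```python
import numpy as np

# Illustrative constants: strength, decay length, interaction radius (m).
A, B, r = 2.0, 0.3, 0.5
p_self = np.array([0.0, 0.0])    # centroid of one pedestrian group
p_other = np.array([1.0, 0.0])   # centroid of an oncoming group

d = np.linalg.norm(p_self - p_other)  # centroid distance
n_hat = (p_self - p_other) / d        # unit vector away from the other group
force = A * np.exp((r - d) / B) * n_hat

print(force)  # repulsion decays rapidly once d exceeds r
```

In a full simulation this term would be summed with the driving force, boundary forces, and the 120-degree visual-field constraint per time step.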

25 pages, 92335 KB  
Article
A Lightweight Dynamic Counting Algorithm for the Maize Seedling Population in Agricultural Fields for Embedded Applications
by Dongbin Liu, Jiandong Fang and Yudong Zhao
Agronomy 2026, 16(2), 176; https://doi.org/10.3390/agronomy16020176 - 10 Jan 2026
Abstract
In the field management of maize, phenomena such as missed sowing and empty seedlings directly affect the final yield. By implementing seedling replenishment activities and promptly evaluating seedling growth, maize output can be increased by improving seedling survival rates. To address the challenges posed by complex field environments (including varying light conditions, weeds, and foreign objects), as well as the performance limitations of model deployment on resource-constrained devices, this study proposes a Lightweight Real-Time You Only Look Once (LRT-YOLO) model. This model builds upon the You Only Look Once version 11n (YOLOv11n) framework by designing a lightweight, optimized feature architecture (OF) that enables the model to focus on the characteristics of small to medium-sized maize seedlings. The feature fusion network incorporates two key modules: the Feature Complementary Mapping Module (FCM) and the Multi-Kernel Perception Module (MKP). The FCM captures global features of maize seedlings through multi-scale interactive learning, while the MKP enhances the network’s ability to learn multi-scale features by combining different convolution kernels with pointwise convolution. In the detection head component, the introduction of an NMS-free design philosophy has significantly enhanced the model’s detection performance while simultaneously reducing its inference time. The experiments show that the mAP50 and mAP50:95 of the LRT-YOLO model reached 95.9% and 63.6%, respectively. The model has only 0.86M parameters and a size of just 2.35 M, representing reductions of 66.67% and 54.89% in the number of parameters and model size compared to YOLOv11n. To enable mobile deployment in field environments, this study integrates the LRT-YOLO model with the ByteTrack multi-object tracking algorithm and deploys it on the NVIDIA Jetson AGX Orin platform, utilizing OpenCV tools to achieve real-time visualization of maize seedling tracking and counting. 
Experiments demonstrate that the frame rate (FPS) achieved with TensorRT acceleration reached 23.49, while the inference time decreased by 38.93%. Regarding counting performance, when tested using static image data, the coefficient of determination (R2) and root mean square error (RMSE) were 0.988 and 5.874, respectively. The cross-line counting method was applied to test the video data, resulting in an R2 of 0.971 and an RMSE of 16.912, respectively. Experimental results show that the proposed method demonstrates efficient performance on edge devices, providing robust technical support for the rapid, non-destructive counting of maize seedlings in field environments. Full article
(This article belongs to the Section Precision and Digital Agriculture)
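The cross-line counting method mentioned above reduces to a simple rule: count a track once when its position crosses a virtual line between consecutive frames. A minimal sketch, where the track IDs and y-coordinates are illustrative stand-ins for ByteTrack output:

```python
# Count each track at most once, when it crosses the virtual line.
LINE_Y = 240  # illustrative line position in pixels

def update(prev_y, curr_y, track_id, counted, total):
    if track_id not in counted and prev_y < LINE_Y <= curr_y:
        counted.add(track_id)
        total += 1
    return total

counted, total = set(), 0
# (track_id, y_previous, y_current) per frame transition
for tid, y0, y1 in [(1, 230, 245), (1, 245, 250), (2, 200, 210), (2, 235, 241)]:
    total = update(y0, y1, tid, counted, total)

print(total)  # each seedling track is counted exactly once
```

Tying counting to stable track IDs rather than per-frame detections is what keeps the video-based count robust to momentary missed detections.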

20 pages, 733 KB  
Article
Explaining Logistics Performance, Economic Growth, and Carbon Emissions Through Machine Learning and SHAP Interpretability
by Maide Betül Baydar and Mustafa Mete
Sustainability 2026, 18(2), 585; https://doi.org/10.3390/su18020585 - 7 Jan 2026
Abstract
This study provides a multi-faceted and detailed perspective on the relationships between logistics performance, environmental degradation, and economic growth in 38 OECD countries, using each as an individual target variable. In the Analysis section, the relationship between logistics and environment is examined within a broader context, taking economic indicators into account. This examination utilizes the machine learning algorithms Random Forest (RF), Extreme Gradient Boosting (XGBoost), and Light Gradient Boosting Machine (LightGBM). For each algorithm, the dataset is split into training and testing sets using three different ratios: 90:10, 80:20, and 70:30. A comprehensive performance evaluation is conducted on each of these splits by applying 5-fold and 10-fold cross-validation (CV). Considering economic indicators, the analysis section examines how the logistics-environment relationship is shaped in a broader context using the machine learning algorithms RF, XGBoost, and LightGBM. MSE, MAE, RMSE, MAPE, and R2 metrics are utilized to evaluate model performance, while MDA and SHAP are employed to assess feature importance. Furthermore, a bee swarm plot is leveraged for visualizing the results. The XGBoost algorithm can successfully predict carbon dioxide (CO2) emissions from transport and economic growth with high accuracy. However, the logistics performance model achieves high performance only with the LightGBM algorithm using a 90% train, 10% test split, and 5-fold CV setup. Based on the variable importance levels of the best-performing algorithm for each of the three target variables separately, the prediction of logistics performance is largely dependent on the economic growth predictor, and secondly, on the trade openness predictor. In predicting CO2 emissions from transport, economic growth is identified as the most effective predictor, while logistics performance and trade openness contribute the least to the prediction. 
The findings also reveal that transport-related emissions and environmental indicators are prominent in the prediction of economic growth, whereas logistics performance and trade openness play a supportive, yet secondary role. Full article
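The split-and-CV protocol described in this abstract can be sketched in a few lines. The snippet below is an illustrative reconstruction using scikit-learn's Random Forest on synthetic data; the features, hyperparameters, and data are placeholders, not the study's OECD panel or tuned models.

```python
# Sketch of the evaluation protocol: three train/test split ratios
# (90:10, 80:20, 70:30), k-fold CV on the training portion, and
# held-out regression metrics. Synthetic data stands in for the panel.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split, KFold, cross_val_score
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))  # placeholder predictors (e.g. growth, trade openness, ...)
y = X @ np.array([2.0, 1.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=200)

for test_size in (0.10, 0.20, 0.30):          # the three split ratios
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    # 5-fold CV R^2 on the training portion (the paper also uses 10-fold)
    cv_r2 = cross_val_score(model, X_tr, y_tr,
                            cv=KFold(5, shuffle=True, random_state=0),
                            scoring="r2")
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    mae = mean_absolute_error(y_te, pred)
    print(f"{int((1 - test_size) * 100)}:{int(test_size * 100)} split  "
          f"CV R2={cv_r2.mean():.3f}  test RMSE={rmse:.3f}  MAE={mae:.3f}")
```

Swapping `RandomForestRegressor` for `xgboost.XGBRegressor` or `lightgbm.LGBMRegressor` reuses the same loop, which is what makes the three-algorithm comparison straightforward.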

13 pages, 518 KB  
Article
Test–Retest Reliability of Balance Parameters Obtained with a Force Platform in Individuals with Chronic Obstructive Pulmonary Disease
by Igor Lopes de Brito, Walter Sepúlveda-Loyola, Larissa Araújo de Castro, Leidy Tatiana Ordoñez-Mora, Ademilson Julio da Silva Junior and Vanessa S. Probst
J. Funct. Morphol. Kinesiol. 2026, 11(1), 24; https://doi.org/10.3390/jfmk11010024 - 1 Jan 2026
Viewed by 407
Abstract
Background: Impaired postural balance is a common feature in individuals with chronic obstructive pulmonary disease (COPD), increasing their risk of falls. This study aimed to evaluate the test–retest reliability of force platform parameters used to assess postural balance in individuals with COPD. Methods: A test–retest reliability study was conducted with participants diagnosed with moderate to severe COPD. Each participant completed two standardized balance assessments on a force platform, separated by a seven-day interval. Center of pressure (COP) parameters—including sway area, mean velocity, and path length—were analyzed under eyes-open and eyes-closed conditions. Reliability was determined using intraclass correlation coefficients (ICC), standard error of measurement (SEM), and coefficient of variation (CV). Correlations were performed between force platform parameters, the Timed Up and Go (TUG) test, and the Downton Fall Risk Scale. Results: Twenty individuals with COPD (mean age: 67.8 ± 6.1 years; forced expiratory volume in the first second: 54 ± 12% predicted) were evaluated. The COP parameters demonstrated good to excellent test–retest reliability (ICC = 0.82–0.95) across all conditions, with low measurement error (SEM < 10%). Moderate correlations were found between force platform parameters and both TUG performance (r = 0.52–0.67) and Downton scores (r = 0.48–0.61). Conclusions: Force platform measurements show high reliability for assessing postural balance in individuals with COPD. These findings support the use of objective balance assessment tools in pulmonary rehabilitation and for monitoring fall risk in this population. Full article
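The ICC and SEM statistics reported in this abstract can be computed directly from a subjects × sessions matrix. The sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single measure) from its ANOVA definition on synthetic test–retest data; both the data and the choice of ICC form are assumptions, since the abstract does not state which ICC model was used.

```python
# Sketch: ICC(2,1) from a two-way ANOVA decomposition, plus the
# standard error of measurement SEM = SD * sqrt(1 - ICC).
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measure."""
    n, k = data.shape                                 # subjects x sessions
    grand = data.mean()
    row_means = data.mean(axis=1)                     # per-subject means
    col_means = data.mean(axis=0)                     # per-session means
    ss_total = ((data - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                           # between-subjects MS
    msc = ss_cols / (k - 1)                           # between-sessions MS
    mse = ss_err / ((n - 1) * (k - 1))                # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two sessions of a hypothetical COP-like measure for 20 subjects,
# mimicking the study's sample size (values are simulated, not the data).
rng = np.random.default_rng(1)
true_score = rng.normal(10.0, 2.0, size=20)
sessions = np.column_stack([true_score + rng.normal(0, 0.5, 20),
                            true_score + rng.normal(0, 0.5, 20)])
icc = icc_2_1(sessions)
sem = sessions.std(ddof=1) * np.sqrt(1 - icc)         # measurement error
```

Libraries such as `pingouin` (`intraclass_corr`) provide the same statistic with confidence intervals; the manual version here just makes the ANOVA terms explicit.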

19 pages, 1559 KB  
Article
FPGA Modular Scalability Framework for Real-Time Noise Reduction in Images
by Ng Boon Khai, Norfadila Mahrom, Rafikha Aliana A. Raof, Teo Sje Yin and Phaklen Ehkan
Computers 2026, 15(1), 13; https://doi.org/10.3390/computers15010013 - 1 Jan 2026
Viewed by 415
Abstract
Image noise degrades image quality in applications such as medical imaging, surveillance, and remote sensing, where real-time processing and high accuracy are critical. Software-based denoising can be flexible but often suffers from latency and low throughput when deployed on embedded or edge systems. A Field Programmable Gate Array (FPGA)-based system offers parallelism and lower latency, but existing work typically focuses on fixed architectures rather than a scalable framework supporting multiple filter models. This paper presents a high-performance, resource-efficient FPGA-based framework for real-time noise reduction. The modular, pipelined architecture integrates median and adaptive filters, managed by a state-machine-based control unit to enhance processing efficiency. Implemented on a Cyclone V FPGA using Quartus Prime 22.1std, the system provides scalability through adjustable Random Access Memory (RAM) and supports multiple denoising algorithms. Tested on Lena images with salt-and-pepper noise, it processes 10% noise in 1.724 ms in a simulated environment running at 800 MHz; this was compared with Python 3.11.2 and the OpenCV library version 4.8.0.76 on a general-purpose Central Processing Unit (CPU) (0.0201 ms). The proposed solution demonstrates low latency and high throughput, making it well suited for embedded and edge computing applications. Full article
