Search Results (55)

Search Parameters:
Keywords = manual dual-task

16 pages, 7045 KB  
Article
Convolutional Neural Networks for Hole Inspection in Aerospace Systems
by Garrett Madison, Grayson Michael Griser, Gage Truelson, Cole Farris, Christopher Lee Colaw and Yildirim Hurmuzlu
Sensors 2025, 25(18), 5921; https://doi.org/10.3390/s25185921 - 22 Sep 2025
Viewed by 198
Abstract
Foreign object debris (FOD) in rivet holes, machined holes, and fastener sites poses a critical risk to aerospace manufacturing, where current inspections rely on manual visual checks with flashlights and mirrors. These methods are slow, fatiguing, and prone to error. This work introduces HANNDI, a compact handheld inspection device that integrates controlled optics, illumination, and onboard deep learning for rapid and reliable inspection directly on the factory floor. The system performs focal sweeps, aligns and fuses the images into an all-in-focus representation, and applies a dual CNN pipeline based on the YOLO architecture: one network detects and localizes holes, while the other classifies debris. All training images were collected with the prototype, ensuring consistent geometry and lighting. On a withheld test set from a proprietary ≈3700-image dataset of aerospace assets, HANNDI achieved per-class precision and recall near 95%. An end-to-end demonstration on representative aircraft parts yielded an effective task time of 13.6 s per hole. To our knowledge, this is the first handheld automated optical inspection system that combines mechanical enforcement of imaging geometry, controlled illumination, and embedded CNN inference, providing a practical path toward robust factory floor deployment. Full article
(This article belongs to the Section Sensing and Imaging)
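As a rough illustration of the detect-then-classify pipeline described in the abstract, the sketch below chains two YOLO models with the ultralytics API: one localizes holes, the other classifies debris in each crop. The weight files, class names, and the fused all-in-focus input are hypothetical placeholders, and the real system's focal sweep and image fusion steps are not shown.

```python
# Minimal sketch of a two-stage detect-then-classify pipeline in the spirit of
# HANNDI's dual CNN design. Weight paths and the input image are hypothetical.
from ultralytics import YOLO
import cv2

hole_detector = YOLO("hole_detector.pt")          # detection model (hypothetical weights)
debris_classifier = YOLO("debris_classifier.pt")  # classification model (hypothetical weights)

def inspect(image_path: str):
    image = cv2.imread(image_path)
    findings = []
    det = hole_detector(image)[0]                 # localize candidate holes
    for box in det.boxes.xyxy.cpu().numpy().astype(int):
        x1, y1, x2, y2 = box
        crop = image[y1:y2, x1:x2]                # crop each hole region
        probs = debris_classifier(crop)[0].probs  # classify debris vs. clean
        findings.append({
            "bbox": (x1, y1, x2, y2),
            "label": probs.top1,
            "confidence": float(probs.top1conf),
        })
    return findings

if __name__ == "__main__":
    for f in inspect("all_in_focus.png"):         # hypothetical fused image
        print(f)
```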

10 pages, 4186 KB  
Proceeding Paper
Indirect Crop Line Detection in Precision Mechanical Weeding Using AI: A Comparative Analysis of Different Approaches
by Ioannis Glykos, Gerassimos G. Peteinatos and Konstantinos G. Arvanitis
Eng. Proc. 2025, 104(1), 32; https://doi.org/10.3390/engproc2025104032 - 25 Aug 2025
Viewed by 314
Abstract
Growing interest in organic food, along with European regulations limiting chemical usage, and the declining effectiveness of herbicides due to weed resistance, are all contributing to the growing trend towards mechanical weeding. For mechanical weeding to be effective, tools must pass near the crops in both the inter- and intra-row areas. The use of AI-based computer vision can assist in detecting crop lines and accurately guiding weeding tools. Additionally, AI-driven image analysis can be used for selective intra-row weeding with mechanized blades, distinguishing crops from weeds. However, until now, there have been two separate systems for these tasks. To enable simultaneous in-row weeding and row alignment, YOLOv8n and YOLO11n were trained and compared in a lettuce field (Lactuca sativa L.). The models were evaluated based on different metrics and inference time for three different image sizes. Crop lines were generated through linear regression on the bounding box centers of detected plants and compared against manually drawn ground truth lines, generated during the annotation process, using different deviation metrics. As more than one line appeared per image, the proposed methodology for classifying points in their corresponding crop line was tested for three different approaches with different empirical factor values. The best-performing approach achieved a mean horizontal error of 45 pixels, demonstrating the feasibility of a dual-functioning system using a single vision model. Full article
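The crop-line step described above, fitting a line through the bounding-box centers of detected plants and scoring it against a manually drawn ground-truth line by horizontal pixel error, can be sketched as follows. The detections and ground-truth coefficients are synthetic placeholders, not data from the study.

```python
# Minimal sketch: least-squares crop line through bounding-box centers and the
# mean horizontal error against a ground-truth line, all in pixels.
import numpy as np

def box_centers(boxes_xyxy: np.ndarray) -> np.ndarray:
    """Return (x, y) centers of boxes given as [x1, y1, x2, y2] rows."""
    x = (boxes_xyxy[:, 0] + boxes_xyxy[:, 2]) / 2
    y = (boxes_xyxy[:, 1] + boxes_xyxy[:, 3]) / 2
    return np.stack([x, y], axis=1)

def fit_crop_line(centers: np.ndarray) -> np.ndarray:
    """Fit x = a*y + b (regressing x on y is stable for near-vertical rows)."""
    return np.polyfit(centers[:, 1], centers[:, 0], deg=1)

def mean_horizontal_error(coeffs, gt_coeffs, y_samples) -> float:
    """Average |x_pred - x_gt| over sampled image rows."""
    pred = np.polyval(coeffs, y_samples)
    gt = np.polyval(gt_coeffs, y_samples)
    return float(np.mean(np.abs(pred - gt)))

# Synthetic example: detections scattered around a slightly slanted row.
rng = np.random.default_rng(0)
ys = np.linspace(0, 720, 12)
xs = 300 + 0.05 * ys + rng.normal(0, 8, ys.size)
boxes = np.stack([xs - 20, ys - 20, xs + 20, ys + 20], axis=1)

coeffs = fit_crop_line(box_centers(boxes))
print(mean_horizontal_error(coeffs, np.array([0.05, 300.0]), np.linspace(0, 720, 50)))
```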

33 pages, 5056 KB  
Article
Interpretable Deep Learning Models for Arrhythmia Classification Based on ECG Signals Using PTB-XL Dataset
by Ahmed E. Mansour Atwa, El-Sayed Atlam, Ali Ahmed, Mohamed Ahmed Atwa, Elsaid Md. Abdelrahim and Ali I. Siam
Diagnostics 2025, 15(15), 1950; https://doi.org/10.3390/diagnostics15151950 - 4 Aug 2025
Viewed by 1648
Abstract
Background/Objectives: Automatic classification of ECG signal arrhythmias plays a vital role in early cardiovascular diagnostics by enabling prompt detection of life-threatening conditions. Manual ECG interpretation is labor-intensive and susceptible to errors, highlighting the demand for automated, scalable approaches. Deep learning (DL) methods are effective in ECG analysis due to their ability to learn complex patterns from raw signals. Methods: This study introduces two models: a custom convolutional neural network (CNN) with a dual-branch architecture for processing ECG signals and demographic data (e.g., age, gender), and a modified VGG16 model adapted for multi-branch input. Using the PTB-XL dataset, a widely adopted large-scale ECG database with over 20,000 recordings, the models were evaluated on binary, multiclass, and subclass classification tasks across 2, 5, 10, and 15 disease categories. Advanced preprocessing techniques, combined with demographic features, significantly enhanced performance. Results: The CNN model achieved up to 97.78% accuracy in binary classification and 79.7% in multiclass tasks, outperforming the VGG16 model (97.38% and 76.53%, respectively) and state-of-the-art benchmarks like CNN-LSTM and CNN entropy features. This study also emphasizes interpretability, providing lead-specific insights into ECG contributions to promote clinical transparency. Conclusions: These results confirm the models’ potential for accurate, explainable arrhythmia detection and their applicability in real-world healthcare diagnostics. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
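A minimal PyTorch sketch of the dual-branch idea described above: a 1-D convolutional branch for the multi-lead ECG signal and a small dense branch for demographic features, fused before the classifier. The layer sizes and the 12-lead, 1000-sample input are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class DualBranchECG(nn.Module):
    def __init__(self, n_leads=12, n_demo=2, n_classes=5):
        super().__init__()
        self.signal_branch = nn.Sequential(
            nn.Conv1d(n_leads, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),          # -> (batch, 64)
        )
        self.demo_branch = nn.Sequential(
            nn.Linear(n_demo, 16), nn.ReLU(),               # -> (batch, 16)
        )
        self.classifier = nn.Linear(64 + 16, n_classes)

    def forward(self, ecg, demo):
        return self.classifier(torch.cat([self.signal_branch(ecg),
                                          self.demo_branch(demo)], dim=1))

# Shape check with random tensors standing in for PTB-XL records.
model = DualBranchECG()
logits = model(torch.randn(8, 12, 1000), torch.randn(8, 2))
print(logits.shape)  # torch.Size([8, 5])
```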

19 pages, 2359 KB  
Article
Research on Concrete Crack Damage Assessment Method Based on Pseudo-Label Semi-Supervised Learning
by Ming Xie, Zhangdong Wang and Li’e Yin
Buildings 2025, 15(15), 2726; https://doi.org/10.3390/buildings15152726 - 1 Aug 2025
Viewed by 537
Abstract
To address the inefficiency of traditional concrete crack detection methods and the heavy reliance of supervised learning on extensive labeled data, in this study, an intelligent assessment method of concrete damage based on pseudo-label semi-supervised learning and fractal geometry theory is proposed to solve two core tasks: one is binary classification of pixel-level cracks, and the other is multi-category assessment of damage state based on crack morphology. Using three-channel RGB images as input, a dual-path collaborative training framework based on U-Net encoder–decoder architecture is constructed, and a binary segmentation mask of the same size is output to achieve the accurate segmentation of cracks at the pixel level. By constructing a dual-path collaborative training framework and employing a dynamic pseudo-label refinement mechanism, the model achieves an F1-score of 0.883 using only 50% labeled data—a mere 1.3% decrease compared to the fully supervised benchmark DeepCrack (F1 = 0.896)—while reducing manual annotation costs by over 60%. Furthermore, a quantitative correlation model between crack fractal characteristics and structural damage severity is established by combining a U-Net segmentation network with the differential box-counting algorithm. The experimental results demonstrate that under a cyclic loading of 147.6–221.4 kN, the fractal dimension monotonically increases from 1.073 (moderate damage) to 1.189 (failure), with 100% accuracy in damage state identification, closely aligning with the degradation trend of macroscopic mechanical properties. In complex crack scenarios, the model attains a recall rate (Re = 0.882), surpassing U-Net by 13.9%, with significantly enhanced edge reconstruction precision. Compared with the mainstream models, this method effectively alleviates the problem of data annotation dependence through a semi-supervised strategy while maintaining high accuracy. It provides an efficient structural health monitoring solution for engineering practice, which is of great value to promote the application of intelligent detection technology in infrastructure operation and maintenance. Full article
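To make the fractal-dimension step concrete, the sketch below estimates a dimension from a binary crack mask by plain box counting: count occupied boxes at several scales and fit the slope of log(count) versus log(1/box size). The paper applies the differential box-counting variant to its U-Net output; this simplified binary version, on a synthetic mask, only illustrates the idea.

```python
import numpy as np

def box_count_dimension(mask: np.ndarray, sizes=(2, 4, 8, 16, 32)) -> float:
    counts = []
    for s in sizes:
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        occupied = blocks.any(axis=(1, 3)).sum()   # boxes containing crack pixels
        counts.append(max(occupied, 1))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)

# Synthetic diagonal "crack" as a stand-in for a segmentation mask.
mask = np.zeros((256, 256), dtype=bool)
idx = np.arange(256)
mask[idx, idx] = True
print(box_count_dimension(mask))   # close to 1.0 for a straight line
```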

19 pages, 7168 KB  
Article
MTD-YOLO: An Improved YOLOv8-Based Rice Pest Detection Model
by Feng Zhang, Chuanzhao Tian, Xuewen Li, Na Yang, Yanting Zhang and Qikai Gao
Electronics 2025, 14(14), 2912; https://doi.org/10.3390/electronics14142912 - 21 Jul 2025
Viewed by 710
Abstract
The impact of insect pests on the yield and quality of rice is extremely significant, and accurate detection of insect pests is of crucial significance to safeguard rice production. However, traditional manual inspection methods are inefficient and subjective, while existing machine learning-based approaches still suffer from limited generalization and suboptimal accuracy. To address these challenges, this study proposes an improved rice pest detection model, MTD-YOLO, based on the YOLOv8 framework. First, the original backbone is replaced with MobileNetV3, which leverages optimized depthwise separable convolutions and the Hard-Swish activation function through neural architecture search, effectively reducing parameters while maintaining multiscale feature extraction capabilities. Second, a Cross Stage Partial module with Triplet Attention (C2f-T) is introduced to enhance the model’s focus on infested regions via a channel–spatial dual-attention mechanism. In addition, a Dynamic Head (DyHead) is introduced to adaptively focus on pest morphological features using the scale–space–task triple-attention mechanism. The experiments were conducted using two datasets, Rice Pest1 and Rice Pest2. On Rice Pest1, the model achieved a precision of 92.5%, recall of 90.1%, mAP@0.5 of 90.0%, and mAP@[0.5:0.95] of 67.8%. On Rice Pest2, these metrics improved to 95.6%, 92.8%, 96.6%, and 82.5%, respectively. The experimental results demonstrate the high accuracy and efficiency of the model in the rice pest detection task, providing strong support for practical applications. Full article
(This article belongs to the Section Artificial Intelligence)
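A minimal PyTorch sketch of the kind of building block the MobileNetV3 backbone relies on, a depthwise separable convolution followed by Hard-Swish activation. Channel counts are illustrative; this is not the MTD-YOLO code.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.Hardswish()   # Hard-Swish, as used in MobileNetV3

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

block = DepthwiseSeparableConv(32, 64, stride=2)
print(block(torch.randn(1, 32, 128, 128)).shape)  # torch.Size([1, 64, 64, 64])
```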

32 pages, 5632 KB  
Article
Dynamic Relevance-Weighting-Based Width-Adaptive Auto-Encoder
by Malak Almejalli, Ouiem Bchir and Mohamed Maher Ben Ismail
Appl. Sci. 2025, 15(12), 6455; https://doi.org/10.3390/app15126455 - 8 Jun 2025
Cited by 1 | Viewed by 979
Abstract
This paper proposes a novel adaptive autoencoder model that autonomously determines the optimal latent width during training. Unlike traditional autoencoders with fixed architectures, the proposed method introduces a dynamic relevance weighting mechanism that assigns adaptive importance to each node in the hidden layer. This distinctive feature enables the simultaneous learning of both the model parameters and its structure. A newly formulated cost function governs this dual optimization, allowing the hidden layer to expand or contract based on the complexity of the input data. This adaptability results in a more compact and expressive latent representation, making the model particularly effective in handling diverse and complex recognition tasks. The originality of this work lies in its unsupervised, self-adjusting architecture that eliminates the need for manual design or pruning heuristics. The approach was rigorously evaluated on benchmark datasets (MNIST, CIFAR-10) and real-world datasets (Parkinson, Epilepsy), using classification accuracy and computational cost as key performance metrics. It demonstrates superior performance compared to state-of-the-art models in terms of accuracy and representational efficiency. Full article
(This article belongs to the Special Issue Advances in Neural Networks and Deep Learning)
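The sketch below illustrates the general idea of relevance-weighted latent nodes: each hidden unit is gated by a learnable relevance weight, and a sparsity penalty pushes unneeded gates toward zero so the effective width adapts during training. The gating function, penalty, and layer sizes are illustrative assumptions; the paper's actual cost function and update rules are not reproduced.

```python
import torch
import torch.nn as nn

class RelevanceWeightedAE(nn.Module):
    def __init__(self, in_dim=784, max_width=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, max_width), nn.ReLU())
        self.relevance = nn.Parameter(torch.ones(max_width))   # per-node gate
        self.decoder = nn.Linear(max_width, in_dim)

    def forward(self, x):
        z = self.encoder(x) * torch.sigmoid(self.relevance)    # gated latent code
        return self.decoder(z)

    def loss(self, x, lam=1e-3):
        recon = nn.functional.mse_loss(self.forward(x), x)
        sparsity = torch.sigmoid(self.relevance).sum()         # favors a narrow code
        return recon + lam * sparsity

model = RelevanceWeightedAE()
x = torch.rand(16, 784)
model.loss(x).backward()    # both weights and relevance gates receive gradients
```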

21 pages, 5936 KB  
Article
Research on Intelligent Control Technology for a Rail-Based High-Throughput Crop Phenotypic Platform Based on Digital Twins
by Haishen Liu, Weiliang Wen, Wenbo Gou, Xianju Lu, Hanyu Ma, Lin Zhu, Minggang Zhang, Sheng Wu and Xinyu Guo
Agriculture 2025, 15(11), 1217; https://doi.org/10.3390/agriculture15111217 - 2 Jun 2025
Viewed by 832
Abstract
Rail-based crop phenotypic platforms operating in open-field environments face challenges such as environmental variability and unstable data quality, highlighting the urgent need for intelligent, online data acquisition strategies. This study proposes a digital twin-based data acquisition strategy tailored to such platforms. A closed-loop architecture “comprising connection, computation, prediction, decision-making, and execution” was developed to build DT-FieldPheno, a digital twin system that enables real-time synchronization between physical equipment and its virtual counterpart, along with dynamic device monitoring. Weather condition standards were defined based on multi-source sensor requirements, and a dual-layer weather risk assessment model was constructed using the analytic hierarchy process (AHP) and fuzzy comprehensive evaluation by integrating weather forecasts and real-time meteorological data to guide adaptive data acquisition scheduling. Field deployment over 27 consecutive days in a maize field demonstrated that DT-FieldPheno reduced the manual inspection workload by 50%. The system successfully identified and canceled two high-risk tasks under wind-speed threshold exceedance and optimized two others affected by gusts and rainfall, thereby avoiding ineffective operations. It also achieved sub-second responses to trajectory deviation and communication anomalies. The synchronized digital twin interface supported remote, real-time visual supervision. DT-FieldPheno provides a technological paradigm for advancing crop phenotypic platforms toward intelligent regulation, remote management, and multi-system integration. Future work will focus on expanding multi-domain sensing capabilities, enhancing model adaptability, and evaluating system energy consumption and computational overhead to support scalable field deployment. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
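The two ingredients named in the abstract, AHP weights and fuzzy comprehensive evaluation, can be sketched as follows: derive factor weights from a pairwise comparison matrix via its principal eigenvector, then combine them with membership degrees per risk level. The comparison matrix, factor names, and membership values are invented for illustration.

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    vals, vecs = np.linalg.eig(pairwise)
    w = np.abs(vecs[:, np.argmax(vals.real)].real)   # principal eigenvector
    return w / w.sum()

# Hypothetical pairwise comparisons for wind speed, rainfall, illumination.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
weights = ahp_weights(A)

# Membership of each factor in risk levels (low, medium, high), e.g. combining
# forecast and real-time sensor readings.
membership = np.array([[0.1, 0.3, 0.6],    # wind speed
                       [0.5, 0.4, 0.1],    # rainfall
                       [0.7, 0.2, 0.1]])   # illumination

risk = weights @ membership                # fuzzy comprehensive evaluation
print(weights.round(3), risk.round(3), ["low", "medium", "high"][int(risk.argmax())])
```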

27 pages, 2560 KB  
Article
Research on Composite Robot Scheduling and Task Allocation for Warehouse Logistics Systems
by Shuzhao Dong and Bin Yang
Sustainability 2025, 17(11), 5051; https://doi.org/10.3390/su17115051 - 30 May 2025
Viewed by 990
Abstract
With the rapid development of e-commerce, warehousing and logistics systems are facing the dual challenges of increasing order processing demand and green and low-carbon transformation. Traditional manual and single-robot scheduling methods are not only limited in efficiency but also struggle to meet the strategic needs of sustainable development because of their high energy consumption and resource redundancy. Therefore, in order to respond to the sustainable development goals of green logistics and resource optimization, this paper replaces the traditional mobile handling robot in warehousing and logistics with a composite robot composed of a mobile chassis and a robotic arm, which reduces energy consumption and labor costs by reducing manual intervention and improving the level of automation. Based on the traditional contract net protocol framework, a distributed task allocation strategy optimization method based on an improved genetic algorithm is proposed. This framework achieves real-time optimization of the robot task list and enhances the rationality of the task allocation strategy. By combining the improved genetic algorithm with the contract net protocol, multi-robot multi-task allocation is realized. The experimental results show that the improvement strategy can effectively support the transformation of the warehousing and logistics system to a low-carbon and intelligent sustainable development mode while improving the rationality of task allocation. Full article
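A minimal sketch of a genetic algorithm for assigning tasks to robots, the kind of allocation step the improved contract-net framework optimizes. The task costs, population settings, and makespan fitness below are illustrative choices, not the paper's formulation.

```python
import random

N_TASKS, N_ROBOTS = 12, 3
COST = [[random.uniform(1, 10) for _ in range(N_ROBOTS)] for _ in range(N_TASKS)]

def makespan(assign):                      # finish time of the busiest robot
    load = [0.0] * N_ROBOTS
    for task, robot in enumerate(assign):
        load[robot] += COST[task][robot]
    return max(load)

def evolve(pop_size=40, generations=200, mut_rate=0.1):
    pop = [[random.randrange(N_ROBOTS) for _ in range(N_TASKS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)             # lower makespan = fitter
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_TASKS)
            child = a[:cut] + b[cut:]      # one-point crossover
            for i in range(N_TASKS):       # mutation: reassign a task
                if random.random() < mut_rate:
                    child[i] = random.randrange(N_ROBOTS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
print(best, round(makespan(best), 2))
```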

25 pages, 7867 KB  
Article
Autonomous UAV Detection of Ochotona curzoniae Burrows with Enhanced YOLOv11
by Huimin Zhao, Linqi Jia, Yuankai Wang and Fei Yan
Drones 2025, 9(5), 340; https://doi.org/10.3390/drones9050340 - 30 Apr 2025
Cited by 2 | Viewed by 732
Abstract
The Tibetan Plateau is a critical ecological habitat where the overpopulation of plateau pika (Ochotona curzoniae), a keystone species, accelerates grassland degradation through excessive burrowing and herbivory, threatening ecological balance and human activities. To address the inefficiency and high costs of traditional pika burrow monitoring, this study proposes an intelligent monitoring solution that integrates drone remote sensing with deep learning. By combining the lightweight visual Transformer architecture EfficientViT with the hybrid attention mechanism CBAM, we develop an enhanced YOLOv11-AEIT algorithm: (1) EfficientViT is employed as the backbone network, strengthening micro-burrow feature representation through a multi-scale feature coupling mechanism that alternates between local window attention and global dilated attention; (2) the integration of CBAM (Convolutional Block Attention Module) in the feature fusion neck reduces false detections through dual channel–spatial attention filtering. Evaluations on our custom PPCave2025 dataset show that the enhanced model achieves a 98.6% mAP@0.5, outperforming the baseline YOLOv11 by 3.5 percentage points, with precision and recall improvements of 4.8% and 7.2%, respectively. The algorithm enhances efficiency by a factor of 15 compared to manual inspection, while seamlessly meeting real-time drone detection requirements. This approach provides high-precision yet lightweight technical support for plateau ecological conservation and serves as a valuable methodological reference for similar ecological monitoring tasks. Full article
(This article belongs to the Section Drones in Ecology)
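A minimal PyTorch sketch of a CBAM-style block, channel attention followed by spatial attention, of the kind integrated into the feature fusion neck above. The reduction ratio and spatial kernel size follow common CBAM defaults and are not necessarily the paper's settings.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(                      # shared MLP for channel attention
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))             # channel attention from avg pool
        mx = self.mlp(x.amax(dim=(2, 3)))              # ... and from max pool
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        stacked = torch.cat([x.mean(dim=1, keepdim=True),
                             x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(stacked))  # spatial attention map

out = CBAM(64)(torch.randn(2, 64, 32, 32))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```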

18 pages, 14637 KB  
Article
Enhancing Bottleneck Concept Learning in Image Classification
by Xingfu Cheng, Zhaofeng Niu, Zhouqiang Jiang and Liangzhi Li
Sensors 2025, 25(8), 2398; https://doi.org/10.3390/s25082398 - 10 Apr 2025
Viewed by 1176
Abstract
Deep neural networks (DNNs) have demonstrated exceptional performance in image classification. However, their “black-box” nature raises concerns about trust and transparency, particularly in high-stakes fields such as healthcare and autonomous systems. While explainable AI (XAI) methods attempt to address these concerns through feature- or concept-based explanations, existing approaches are often limited by the need for manually defined concepts, overly abstract granularity, or misalignment with human semantics. This paper introduces the Enhanced Bottleneck Concept Learner (E-BotCL), a self-supervised framework that autonomously discovers task-relevant, interpretable semantic concepts via a dual-path contrastive learning strategy and multi-task regularization. By combining contrastive learning to build robust concept prototypes, attention mechanisms for spatial localization, and feature aggregation to activate concepts, E-BotCL enables end-to-end concept learning and classification without requiring human supervision. Experiments conducted on the CUB200 and ImageNet datasets demonstrated that E-BotCL significantly enhanced interpretability while maintaining classification accuracy. Specifically, two interpretability metrics, the Concept Discovery Rate (CDR) and Concept Consistency (CC), improved by 0.6104 and 0.4486, respectively. This work advances the balance between model performance and transparency, offering a scalable solution for interpretable decision-making in complex vision tasks. Full article
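The sketch below mirrors only the general bottleneck-concept idea from the abstract: per-concept spatial attention maps aggregate backbone features into concept activations that a linear layer classifies. E-BotCL's contrastive training, prototypes, and regularizers are not shown, and the feature dimensions are assumptions.

```python
import torch
import torch.nn as nn

class ConceptBottleneckHead(nn.Module):
    def __init__(self, feat_dim=512, n_concepts=20, n_classes=200):
        super().__init__()
        self.concept_attn = nn.Conv2d(feat_dim, n_concepts, kernel_size=1)
        self.classifier = nn.Linear(n_concepts, n_classes)

    def forward(self, feats):                              # feats: (B, C, H, W)
        attn = torch.softmax(self.concept_attn(feats).flatten(2), dim=-1)  # (B, K, HW)
        pooled = torch.einsum("bkn,bcn->bkc", attn, feats.flatten(2))      # per-concept features
        concepts = pooled.norm(dim=-1)                     # concept activation strengths
        return self.classifier(concepts), concepts

head = ConceptBottleneckHead()
logits, concepts = head(torch.randn(4, 512, 14, 14))
print(logits.shape, concepts.shape)  # torch.Size([4, 200]) torch.Size([4, 20])
```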

28 pages, 5143 KB  
Article
Innovative Blade and Tine Push Weeder for Enhancing Weeding Efficiency of Small Farmers
by Kalluri Praveen, Ningaraj Belagalla, Nagaraju Dharavat, Leander Corrie and Gireesha D
Sustainability 2025, 17(6), 2639; https://doi.org/10.3390/su17062639 - 17 Mar 2025
Viewed by 1811
Abstract
Sustainable agriculture is central to addressing the difficulties farmers face, such as a lack of manpower, high input prices, and environmental effects from the widespread use of chemical herbicides. In farming, eliminating unwanted plants from crops is a laborious task crucial for enhancing sustainable crop yield. Traditionally, this process is carried out manually globally, utilizing tools such as wheel hoes, sickles, chris, powers, shovels, and hand forks. However, this manual approach is time-consuming, demanding in terms of labor, and imposes significant physiological strain, leading to premature operator fatigue. In response to this challenge, blade and tine-type push weeders were developed to enhance weeding efficiency for smallholder farmers. When blade and tine push weeders are pushed between the rows of crops, the front tine blade of the trolley efficiently uproots the weeds, while the straight blade at the back pushes the uprooted weeds. This dual-action mechanism ensures effective weed elimination by both uprooting and clearing the weeds without disturbing the crops. The blade and tine-type push weeders demonstrated actual and theoretical field capacities of 0.020 ha/h and 0.026 ha/h, achieving a commendable field efficiency of 85%. The weeders exhibited a cutting width ranging from 30 to 50 mm, a cutting depth between 250 and 270 mm, a draft of 1.8 kg, a weeding efficiency of 78%, and a plant damage rate of 2.7%. The cost of weeding was 2108 INR/ha for the green pea crop. Full article

18 pages, 1716 KB  
Article
Investigating the Potential of Latent Space for the Classification of Paint Defects
by Doaa Almhaithawi, Alessandro Bellini, Georgios C. Chasparis and Tania Cerquitelli
J. Imaging 2025, 11(2), 33; https://doi.org/10.3390/jimaging11020033 - 24 Jan 2025
Viewed by 1464
Abstract
Defect detection methods have greatly assisted human operators in various fields, from textiles to surfaces and mechanical components, by facilitating decision-making processes and reducing visual fatigue. This area of research is widely recognized as a cross-industry concern, particularly in the manufacturing sector. Nevertheless, each specific application brings unique challenges that require tailored solutions. This paper presents a novel framework for leveraging latent space representations in defect detection tasks, focusing on improving explainability while maintaining accuracy. This work delves into how latent spaces can be utilized by integrating unsupervised and supervised analyses. We propose a hybrid methodology that not only identifies known defects but also provides a mechanism for detecting anomalies and dynamically adapting to new defect types. This dual approach supports human operators, reducing manual workload and enhancing interpretability. Full article
(This article belongs to the Section AI in Imaging)
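The hybrid idea above, classifying known defect types in a latent space while flagging anomalies whose embeddings fall far from any known class, can be sketched as follows. The latent vectors are random placeholders standing in for autoencoder embeddings of paint images, and the distance threshold rule is an illustrative choice rather than the paper's mechanism.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
latent_train = rng.normal(0, 1, (200, 32))           # embeddings of labeled defects
labels_train = rng.integers(0, 3, 200)               # 3 known defect classes

clf = KNeighborsClassifier(n_neighbors=5).fit(latent_train, labels_train)
centroids = np.stack([latent_train[labels_train == c].mean(axis=0) for c in range(3)])
threshold = 1.5 * np.median(np.linalg.norm(latent_train - centroids[labels_train], axis=1))

def assess(z: np.ndarray):
    dist = np.linalg.norm(centroids - z, axis=1).min()
    if dist > threshold:                              # far from every known class
        return "anomaly / possible new defect type"
    return f"known defect class {clf.predict(z[None])[0]}"

print(assess(rng.normal(0, 1, 32)))                   # likely a known class
print(assess(rng.normal(8, 1, 32)))                   # likely flagged as anomaly
```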

16 pages, 2612 KB  
Article
Influencing Mechanism of Signal Design Elements in Complex Human–Machine System: Evidence from Eye Movement Data
by Siu Shing Man, Wenbo Hu, Hanxing Zhou, Tingru Zhang and Alan Hoi Shou Chan
Informatics 2024, 11(4), 88; https://doi.org/10.3390/informatics11040088 - 21 Nov 2024
Viewed by 1362
Abstract
In today’s rapidly evolving technological landscape, human–machine interaction has become an issue that should be systematically explored. This research aimed to examine the impact of different pre-cue modes (visual, auditory, and tactile), stimulus modes (visual, auditory, and tactile), compatible mapping modes (both compatible (BC), transverse compatible (TC), longitudinal compatible (LC), and both incompatible (BI)), and stimulus onset asynchrony (200 ms/600 ms) on the performance of participants in complex human–machine systems. Eye movement data and a dual-task paradigm involving stimulus–response and manual tracking were utilized for this study. The findings reveal that visual pre-cues can captivate participants’ attention towards peripheral regions, a phenomenon not observed when visual stimuli are presented in isolation. Furthermore, when confronted with visual stimuli, participants predominantly prioritize continuous manual tracking tasks, utilizing focal vision, while concurrently executing stimulus–response compatibility tasks with peripheral vision. In addition, the average pupil diameter tends to diminish with the use of visual pre-cues or visual stimuli but expands during auditory or tactile stimuli or pre-cue modes. These findings contribute to the existing literature on the theoretical design of complex human–machine interfaces and offer practical implications for the design of human–machine system interfaces. Moreover, this paper underscores the significance of considering the optimal combination of stimulus modes, pre-cue modes, and stimulus onset asynchrony, tailored to the characteristics of the human–machine interaction task. Full article
(This article belongs to the Topic Theories and Applications of Human-Computer Interaction)

20 pages, 4712 KB  
Article
CCE-UNet: Forest and Water Body Coverage Detection Method Based on Deep Learning: A Case Study in Australia’s Nattai National Forest
by Bangjun Huang, Xiaomei Yi, Lufeng Mo, Guoying Wang and Peng Wu
Forests 2024, 15(11), 2050; https://doi.org/10.3390/f15112050 - 20 Nov 2024
Viewed by 1018
Abstract
Severe forest fires caused by extremely high temperatures have resulted in devastating disasters in the natural forest reserves of New South Wales, Australia. Traditional forest research methods primarily rely on manual field surveys, which have limited generalization capabilities. In order to monitor forest ecosystems more comprehensively and maintain the stability of the regional forest ecosystem, as well as to monitor post-disaster ecological restoration efforts, this study employed high-resolution remote sensing imagery and proposed a semantic segmentation architecture named CCE-UNet. This architecture focuses on the precise identification of forest coverage while simultaneously monitoring the distribution of water resources in the area. This architecture utilizes the Contextual Information Fusion Module (CIFM) and introduces the dual attention mechanism strategy to effectively filter background information and enhance image edge features. Meanwhile, it employs a multi-scale feature fusion algorithm to maximize the retention of image details and depth information, achieving precise segmentation of forests and water bodies. We have also trained seven semantic segmentation models as candidates. Experimental results show that the CCE-UNet architecture achieves the best performance, demonstrating optimal performance in forest and water body segmentation tasks, with the MIoU reaching 91.07% and the MPA reaching 95.15%. This study provides strong technical support for the detection of forest and water body coverage in the region and is conducive to the monitoring and protection of the forest ecosystem. Full article
(This article belongs to the Section Natural Hazards and Risk Management)
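The two reported metrics, mean IoU (MIoU) and mean pixel accuracy (MPA), can be computed from a confusion matrix over the segmentation classes (e.g., background, forest, water), as in the sketch below. The example matrix is invented for illustration.

```python
import numpy as np

def miou_mpa(conf: np.ndarray):
    tp = np.diag(conf).astype(float)
    per_class_iou = tp / (conf.sum(axis=1) + conf.sum(axis=0) - tp)
    per_class_acc = tp / conf.sum(axis=1)
    return per_class_iou.mean(), per_class_acc.mean()

# Rows = ground-truth class, columns = predicted class (pixel counts).
conf = np.array([[9000,  300,  200],
                 [ 250, 7000,  150],
                 [ 100,  120, 5000]])
miou, mpa = miou_mpa(conf)
print(f"MIoU = {miou:.4f}, MPA = {mpa:.4f}")
```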

25 pages, 12684 KB  
Article
Research on Behavior Recognition and Online Monitoring System for Liaoning Cashmere Goats Based on Deep Learning
by Geng Chen, Zhiyu Yuan, Xinhui Luo, Jinxin Liang and Chunxin Wang
Animals 2024, 14(22), 3197; https://doi.org/10.3390/ani14223197 - 7 Nov 2024
Cited by 5 | Viewed by 1835
Abstract
Liaoning Cashmere Goats are a high-quality dual-purpose breed valued for both their cashmere and meat. They are also a key national genetic resource for the protection of livestock and poultry in China, with their intensive farming model currently taking shape. Leveraging new productivity advantages and reducing labor costs are urgent issues for intensive breeding. Recognizing goat behavior in large-scale intelligent breeding not only improves health monitoring and saves labor, but also improves welfare standards by providing management insights. Traditional methods of goat behavior detection are inefficient and prone to cause stress in goats. Therefore, the development of a convenient and rapid detection method is crucial for the efficiency and quality improvement of the industry. This study introduces a deep learning-based behavior recognition and online detection system for Liaoning Cashmere Goats. We compared the convergence speed and detection accuracy of the two-stage algorithm Faster R-CNN and the one-stage algorithm YOLO in behavior recognition tasks. YOLOv8n demonstrated superior performance, converging within 50 epochs with an average accuracy of 95.31%, making it a baseline for further improvements. We improved YOLOv8n through dataset expansion, algorithm lightweighting, attention mechanism integration, and loss function optimization. Our improved model achieved the highest detection accuracy of 98.11% compared to other state-of-the-art (SOTA) target detection algorithms. The Liaoning Cashmere Goat Online Behavior Detection System demonstrated real-time detection capabilities, with a relatively low error rate compared to manual video review, and can effectively replace manual labor for online behavior detection. This study introduces detection algorithms and develops the Liaoning Cashmere Goat Online Behavior Detection System, offering an effective solution for intelligent goat management. Full article
(This article belongs to the Section Small Ruminants)
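A minimal sketch of training and validating a YOLOv8n detector with the ultralytics API, the baseline the study starts from. The dataset YAML naming the goat behavior classes is a hypothetical placeholder, and the paper's lightweighting, attention, and loss-function changes are not shown.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                       # pretrained YOLOv8n weights
model.train(data="cashmere_goat_behavior.yaml",  # hypothetical dataset config
            epochs=50, imgsz=640)
metrics = model.val()                            # mAP and related metrics
print(metrics.box.map50)                         # mAP@0.5 on the validation split
```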
