Search Results (72)

Search Parameters:
Keywords = detection and predication

21 pages, 1176 KB  
Article
FedLTN-CubeSat: Neuro-Symbolic Federated Learning for Intrusion Detection in LEO CubeSat Constellations
by Gang Yang, Lin Ni, Junfeng Geng and Xiang Peng
Mathematics 2026, 14(6), 1047; https://doi.org/10.3390/math14061047 - 20 Mar 2026
Cited by 1 | Viewed by 261
Abstract
Low Earth Orbit (LEO) mega-constellations are becoming the backbone of global communications, yet their cybersecurity remains critically under-addressed. Intrusion detection systems (IDSs) for such constellations face a unique trilemma of accuracy, efficiency, and interpretability under extreme SWaP-C (size, weight, power, and cost) constraints. We present FedLTN-CubeSat (FedLTN refers to Federated Logic Tensor Networks), a neuro-symbolic federated learning framework for intrusion detection in LEO CubeSat constellations. The framework first employs a lightweight spatio-temporal separable perception encoder to efficiently extract features from telemetry and IQ data, designed to operate within the computational budgets of resource-constrained on-board processors. These features feed into a differentiable first-order logic layer based on Logic Tensor Networks, which incorporates domain knowledge as logical axioms to guide learning and enhance interpretability. To enable collaborative learning across a constellation, FedLTN-CubeSat introduces an intra-orbit symbolic federated learning mechanism that aggregates only the logic-layer parameters via inter-satellite links, drastically reducing communication overhead while preserving data privacy. Furthermore, an orbit-adaptive predicate migration module transfers learned rules across different orbital configurations with minimal supervision, facilitating rapid deployment. We evaluate on two benchmarks: the CuCD-ID dataset (NASA NOS3 telemetry) and the STIN dataset (satellite-terrestrial integrated networks). FedLTN-CubeSat achieves 0.98 F1-score on CuCD-ID and 0.96 accuracy on STIN—significantly outperforming prior federated learning baselines (7% improvement) while incurring a minimal daily communication load per satellite. The framework also outputs interpretable decision traces grounded in logical axioms, enabling operators to understand and validate detections. 
Logical constraints improve detection of unseen attack variants by 25% over pure neural baselines. Full article
(This article belongs to the Special Issue New Advances in Network Security and Data Privacy)
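The abstract's key communication-saving idea is aggregating only the logic-layer parameters across satellites. A minimal sketch of that selective federated averaging, assuming an illustrative "logic." name prefix (the parameter names and prefix are invented here, not the paper's API):

```python
# Toy sketch of intra-orbit aggregation that exchanges only logic-layer
# parameters; perception-encoder weights stay local to each satellite.
def aggregate_logic_layer(client_models):
    """Average only parameters whose name marks them as logic-layer."""
    n = len(client_models)
    shared_keys = [k for k in client_models[0] if k.startswith("logic.")]
    return {k: sum(m[k] for m in client_models) / n for k in shared_keys}

# Two satellites with identical encoder keys but different axiom weights:
sat_a = {"encoder.w": 0.9, "logic.axiom1": 0.2, "logic.axiom2": 0.8}
sat_b = {"encoder.w": 0.1, "logic.axiom1": 0.6, "logic.axiom2": 0.4}

update = aggregate_logic_layer([sat_a, sat_b])
# Only the two logic-layer entries travel over inter-satellite links.
```

Sharing a small named subset of parameters, rather than the full model, is what keeps the daily communication load per satellite minimal.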
24 pages, 2114 KB  
Article
An Integrated Framework for Automated Identification of Workers’ Safety Violation Based on Knowledge Graph
by Yifan Zhu, Yewei Ouyang, Rui Pan, Zhanhui Sun, Yang Zhou, Rui Ma, Baoquan Cheng and Wen Wang
Buildings 2026, 16(5), 1037; https://doi.org/10.3390/buildings16051037 - 6 Mar 2026
Viewed by 323
Abstract
Automatic identification of worker safety violations can substantially strengthen construction-site safety management by enabling continuous, real-time monitoring. Although recent advances have made automated detection feasible, many existing systems still suffer from poor adaptability and limited extensibility. To address these limitations, this study proposes an integrated, knowledge graph-based framework for automatic identification of workers’ safety violations. The framework comprises two principal components: (1) a knowledge graph construction module that encodes domain knowledge (safety regulations, task–hazard relationships, and contextual constraints) into a machine-readable graph structure and (2) a graph-enabled violation identification module that maps structured scene descriptions of worker and environmental states to the knowledge graph and performs semantic inference to detect violations. In this study, these structured scene descriptions are manually specified and simulated as subject–predicate–object triplets; integration with raw sensing data is left for future work. For validation, we construct a knowledge graph containing 1200 safety rules and evaluate the violation identification module on 500 annotated examples representing realistic worker scenarios. Using this curated knowledge graph and structured inputs, the proposed approach achieves an identification accuracy of 97.6% for unsafe worker behaviors. Experimental analysis shows that the knowledge graph representation substantially improves the system’s expandability and interpretability compared with traditional hard-coded rules, facilitating easier incorporation of new rules and multimodal sensing inputs. The results indicate that knowledge graph-driven reasoning offers a practical, scalable pathway for robust, context-aware safety violation detection in varied construction environments. Full article
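The violation identification module maps scene descriptions, given as subject–predicate–object triplets, onto rules in the knowledge graph. A minimal sketch of that matching step, with invented rule and scene triplets (the paper's graph holds 1200 rules; these two are purely illustrative):

```python
# Each rule pattern, if present in the scene, constitutes a violation.
violation_rules = {
    ("worker", "located_in", "excavation_zone"): "missing_shoring_check",
    ("worker", "not_wearing", "helmet"): "ppe_violation",
}

def identify_violations(scene_triplets):
    """Return the violation labels whose pattern appears in the scene."""
    return [violation_rules[t] for t in scene_triplets if t in violation_rules]

scene = [
    ("worker", "not_wearing", "helmet"),
    ("worker", "operating", "crane"),
]
found = identify_violations(scene)
```

Because rules live in data rather than code, extending coverage means adding triplets to the graph, which is the expandability advantage the abstract claims over hard-coded rules.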
27 pages, 2619 KB  
Article
Defamiliarization Attack: Literary Theory Enabled Discussion of LLM Safety
by Bibin Babu, Iana Agafonova, Sebastian Biedermann and Ivan Yamshchikov
Electronics 2026, 15(5), 1047; https://doi.org/10.3390/electronics15051047 - 2 Mar 2026
Viewed by 672
Abstract
This paper introduces a multi-turn large language model (LLM) jailbreaking attack called Defamiliarization, in which malicious queries are embedded within ostensibly harmless narratives. By reframing requests in “unmarked” contexts, LLMs can be coerced into producing undesirable outputs. A range of scenarios is documented, from planning ethically dubious actions to selectively overlooking critical events in literary texts, thereby exposing the limitations of alignment strategies predicated on detecting trigger words or semantic cues. Rather than substituting vocabulary, defamiliarization manipulates context and presentation, highlighting vulnerabilities that cannot be addressed by token-level fixes alone. Beyond demonstrating the effectiveness of defamiliarization as an attack strategy, evidence is presented of a systematic relationship between model scale and susceptibility. Experiments reveal that smaller-parameter models are significantly easier to manipulate using defamiliarized prompts. This finding raises important concerns regarding the growing popularity of lightweight, locally hosted LLMs, which are favored for their lower computational requirements but may lack alignment safeguards. A more holistic approach to LLM safety is advocated—one that incorporates insights from literary theory, ethics, and user experience—treating these models as interpretive agents. By doing so, defenses against covert manipulations can be strengthened and AI systems can remain aligned with human values. Full article
15 pages, 2002 KB  
Review
Muscle Fatigue in Dynamic Movement: Limitations and Challenges, Experimental Design, and New Research Horizons
by Natalia Daniel, Jerzy Małachowski, Kamil Sybilski and Michalina Błażkiewicz
Bioengineering 2026, 13(2), 248; https://doi.org/10.3390/bioengineering13020248 - 20 Feb 2026
Viewed by 734
Abstract
Research on muscle fatigue during dynamic movement using surface electromyography (sEMG) constitutes a significant challenge within biomechanics. Despite a degree of standardization, measurements and their resultant findings continue to attract considerable debate, attributable to factors such as skin impedance, perspiration, and electrode displacement, as well as subjective fatigue perception. Further questions remain regarding signal normalization and the selection of appropriate analytical methodologies. Recent years have witnessed notable progress in dynamic fatigue research, highlighting the limitations of classical metrics (e.g., EMG Median Frequency) and introducing time–frequency methods, such as the wavelet transform (WT), which are better equipped to handle signal non-stationarity. Interest has also expanded to include non-linear metrics (e.g., entropy) and the analysis of multiple signals (EMG, accelerometers, fNIRS, EEG). The inherent complexity of conducting studies under conditions that approximate real-world sporting disciplines requires the consideration of the influence of various confounding factors. The judicious selection of relevant physical activities and the rigorous validation of the measurement apparatus are paramount for the accurate execution of the calculations. Current research is substantially predicated on artificial intelligence (AI) algorithms. The synergistic application of AI with the wavelet transform, particularly in the decomposition and extraction of EMG signals, demonstrates efficacy in fatigue detection. Nevertheless, the full realization of this potential requires further investigation into system generalization, the integration of data from multiple sensors, and the standardization of protocols, coupled with the establishment of publicly accessible datasets.
This article delineates selected guidelines and challenges pertinent to the planning and execution of research on muscle fatigue in dynamic movement, focusing on activity selection, equipment validation, EMG signal analysis, and AI utilization. Full article
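The classical metric whose limitations this review discusses, EMG Median Frequency (MDF), is the frequency that splits the power spectrum into two halves of equal power; fatigue shifts it downward. A minimal sketch on a synthetic spectrum (real use would compute the power spectral density of an sEMG window first):

```python
def median_frequency(freqs, power):
    """Smallest frequency at which cumulative power reaches half the total."""
    total = sum(power)
    cum = 0.0
    for f, p in zip(freqs, power):
        cum += p
        if cum >= total / 2:
            return f
    return freqs[-1]

freqs = [20, 40, 60, 80, 100]        # Hz (synthetic spectrum bins)
power = [1.0, 2.0, 4.0, 2.0, 1.0]    # arbitrary units
mdf = median_frequency(freqs, power)
```

Because MDF assumes a quasi-stationary spectrum within each window, it degrades in dynamic movement, which is why the review points toward wavelet-based time–frequency alternatives.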
21 pages, 5645 KB  
Article
Design of a Nano-Refractive Index Sensor Based on a MIM Waveguide Coupled with a Cat-Faced Resonator for Temperature Detection and Biosensing Applications
by Jianhong Zheng, Shubin Yan, Chen Chen, Kecheng Ding, Yang Cui and Taiquan Wu
Sensors 2026, 26(3), 826; https://doi.org/10.3390/s26030826 - 26 Jan 2026
Viewed by 446
Abstract
This study introduces an innovative sensor architecture predicated on surface plasmon polaritons (SPPs), comprising a metal–insulator–metal (MIM) waveguide in conjunction with a cat-faced circular split resonator (TCRSW). The efficacy of the proposed nanosensor was meticulously evaluated utilizing the finite element method (FEM). It was determined that the TCRSW configuration significantly impacts the sensor’s performance. By means of a comprehensive optimization of the structural parameters, the sensor attained an apex sensitivity of 3380 nm/RIU and a figure of merit (FOM) of 56.33 in its optimal configuration. Furthermore, the study comprehensively evaluated the sensor’s applicability for temperature sensing, demonstrating a measured temperature sensitivity of 1.673 nm/°C. Meanwhile, the application of the proposed structure in biosensing was comprehensively evaluated. When employed as a concentration sensor for detecting sodium and potassium ion solutions, the maximum achievable sensitivities reached 0.49 mg·d/L and 0.6375 mg·d/L, respectively, which highlights its significant potential not only for high-precision temperature monitoring but also for sensitive and reliable biosensing applications. Additionally, the proposed nanosensor holds considerable promise for applications in other nanophotonic fields. Full article
(This article belongs to the Section Nanosensors)
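The two figures of merit reported above follow standard definitions for refractive-index sensors: sensitivity S = Δλ/Δn (resonance-wavelength shift per refractive-index unit) and FOM = S/FWHM. A worked sketch; the Δλ and Δn inputs are illustrative, and only S = 3380 nm/RIU and FOM = 56.33 come from the abstract:

```python
def sensitivity(delta_lambda_nm, delta_n):
    """Resonance shift per refractive-index unit (nm/RIU)."""
    return delta_lambda_nm / delta_n

def figure_of_merit(s_nm_per_riu, fwhm_nm):
    """Sensitivity normalized by resonance linewidth."""
    return s_nm_per_riu / fwhm_nm

S = sensitivity(33.8, 0.01)   # 3380 nm/RIU, matching the abstract
fwhm = 3380 / 56.33           # linewidth implied by the reported FOM, ~60 nm
fom = figure_of_merit(S, fwhm)
```

The implied ~60 nm linewidth shows why FOM matters: a narrow resonance makes a given wavelength shift easier to resolve.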
12 pages, 2172 KB  
Article
Instance Segmentation Method for Insulators in Complex Backgrounds Based on Improved SOLOv2
by Ze Chen, Yangpeng Ji, Xiaodong Du, Shaokang Zhao, Zhenfei Huo and Xia Fang
Sensors 2025, 25(17), 5318; https://doi.org/10.3390/s25175318 - 27 Aug 2025
Cited by 1 | Viewed by 1125
Abstract
To precisely delineate the contours of insulators in complex transmission line images obtained from Unmanned Aerial Vehicle (UAV) inspections and thereby facilitate subsequent defect analysis, this study proposes an instance segmentation framework predicated upon an enhanced SOLOv2 model. The proposed framework integrates a preprocessed edge channel, generated through the Non-Subsampled Contourlet Transform (NSCT), which augments the model’s capability to accurately capture the edges of insulators. Moreover, the input image resolution to the network is heightened to 1200 × 1600, permitting more detailed extraction of edges. Rather than the original ResNet + FPN architecture, the improved HRNet is utilized as the backbone to effectively harness multi-scale feature information, thereby enhancing the model’s overall efficacy. In response to the increased input size, there is a reduction in the network’s channel count, concurrent with an increase in the number of layers, ensuring an adequate receptive field without substantially escalating network parameters. Additionally, a Convolutional Block Attention Module (CBAM) is incorporated to refine mask quality and augment object detection precision. Furthermore, to bolster the model’s robustness and minimize annotation demands, a virtual dataset is crafted utilizing the fourth-generation Unreal Engine (UE4). Empirical results reveal that the proposed framework exhibits superior performance, with AP0.50 (90.21%), AP0.75 (83.34%), and AP[0.50:0.95] (67.26%) on a test set consisting of images supplied by the power grid. This framework surpasses existing methodologies and contributes significantly to the advancement of intelligent transmission line inspection. Full article
(This article belongs to the Special Issue Recent Trends and Advances in Intelligent Fault Diagnostics)
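The framework's preprocessed edge channel comes from the Non-Subsampled Contourlet Transform; reproducing NSCT is out of scope here, so this sketch substitutes a simple gradient-magnitude map (an assumption, not the paper's method) just to illustrate appending an edge channel to the network input:

```python
def edge_channel(img):
    """Gradient magnitude (forward differences) of a 2D grayscale image."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]
            gy = img[y + 1][x] - img[y][x]
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

img = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edges = edge_channel(img)  # strong response along the vertical boundary
# The network input then becomes [R, G, B, edge] rather than [R, G, B].
```

Feeding an explicit edge map alongside the color channels is what lets the segmentation head latch onto insulator contours in cluttered backgrounds.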
28 pages, 2766 KB  
Article
Parameter Analysis of Pile Foundation Bearing Characteristics Based on Pore Water Pressure Using Rapid Load Test
by Jing-Jie Su, Xue-Liang Zhao, Qing Guo, Wei-Ming Gong, Yu-Chen Wang and Tong-Xing Zeng
Infrastructures 2025, 10(7), 159; https://doi.org/10.3390/infrastructures10070159 - 26 Jun 2025
Viewed by 908
Abstract
A novel approach for determining the bearing capacity of pile foundations using rapid load testing is suggested to rectify the inaccuracies arising from the presumption of a constant damping coefficient and excess pore water pressure during the evaluation of pile foundation bearing capacity in soil. This research focuses on the characteristics associated with the segmented damping coefficient of pile foundations and the permeability coefficient of sand at the pile terminus, resulting in a long pulse vibration equation derived from dynamic effects. A numerical model incorporating the damping coefficient and permeability coefficient is developed, yielding the time history features of load, displacement, and acceleration. The findings indicate that (1) the long pulse vibration equation, predicated on dynamic effects, aligns more closely with the actual bearing capacity of pile foundations than traditional detection theory; (2) in the rapid load test method, the maximum load applied to sand pile foundations occurs prior to peak displacement, while the ultimate bearing capacity, after accounting for inertial forces, corresponds to the maximum displacement value; (3) the permeability coefficient significantly influences the ultra-static pore water pressure, and the testing error regarding the bearing capacity of low permeability sand pile foundations using the rapid loading method is elevated. Full article
15 pages, 11293 KB  
Article
An Assessment of the Stereo and Near-Infrared Camera Calibration Technique Using a Novel Real-Time Approach in the Context of Resource Efficiency
by Larisa Ivascu, Vlad-Florin Vinatu and Mihail Gaianu
Processes 2025, 13(4), 1198; https://doi.org/10.3390/pr13041198 - 15 Apr 2025
Viewed by 1676
Abstract
This paper provides a comparative analysis of calibration techniques applicable to stereo and near-infrared (NIR) camera systems, with a specific emphasis on the Intel RealSense SR300 alongside a standard 2-megapixel NIR camera. This study investigates the pivotal function of calibration within both stereo vision and NIR imaging applications, which are essential across various domains, including robotics, augmented reality, and low-light imaging. For stereo systems, we scrutinise the conventional method involving a 9 × 6 chessboard pattern utilised to ascertain the intrinsic and extrinsic camera parameters. The proposed methodology consists of three main steps: (1) real-time calibration error classification for stereo cameras, (2) NIR-specific calibration techniques, and (3) a comprehensive evaluation framework. This research introduces a novel real-time evaluation methodology that classifies calibration errors predicated on the pixel offsets between corresponding points in the left and right images. Conversely, NIR camera calibration techniques are modified to address the distinctive properties of near-infrared light. We deliberate on the difficulties encountered in devising NIR–visible calibration patterns and the imperative to consider the spectral response and temperature sensitivity within the calibration procedure. The paper also puts forth an innovative calibration assessment application that is relevant to both systems. Stereo cameras evaluate the corner detection accuracy in real time across multiple image pairs, whereas NIR cameras concentrate on assessing the distortion correction and intrinsic parameter accuracy under varying lighting conditions. Our experiments validate the necessity of routine calibration assessment, as environmental factors may compromise the calibration quality over time. 
We conclude by underscoring the disparities in the calibration requirements between stereo and NIR systems, thereby emphasising the need for specialised approaches tailored to each domain to guarantee an optimal performance in their respective applications. Full article
(This article belongs to the Special Issue Circular Economy and Efficient Use of Resources (Volume II))
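The real-time classification idea described above rests on a property of well-rectified stereo pairs: corresponding points should differ only horizontally, so the mean vertical offset |Δy| between matched corners indicates calibration quality. A sketch with invented thresholds (the paper's classification boundaries are not given in the abstract):

```python
def classify_calibration(left_pts, right_pts, good=0.5, fair=2.0):
    """Classify calibration from mean |dy| between matched points (pixels)."""
    dys = [abs(yl - yr) for (_, yl), (_, yr) in zip(left_pts, right_pts)]
    mean_dy = sum(dys) / len(dys)
    if mean_dy <= good:
        return "good", mean_dy
    if mean_dy <= fair:
        return "fair", mean_dy
    return "poor", mean_dy

# Matched chessboard corners as (x, y) pixel coordinates:
left = [(100.0, 50.2), (220.0, 80.1), (340.0, 120.0)]
right = [(88.0, 50.0), (205.0, 80.3), (322.0, 119.9)]
label, err = classify_calibration(left, right)
```

Running this check continuously on detected corners is cheap, which is what makes the periodic reassessment the paper advocates practical.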
22 pages, 1021 KB  
Article
State-Based Fault Diagnosis of Finite-State Vector Discrete-Event Systems via Integer Linear Programming
by Qinrui Chen, Mubariz Garayev and Ding Liu
Sensors 2025, 25(5), 1452; https://doi.org/10.3390/s25051452 - 27 Feb 2025
Viewed by 1191
Abstract
This paper presents a state-based method to address the verification of K-diagnosability and fault diagnosis of a finite-state vector discrete-event system (Vector DES) with partially observable state outputs due to limited sensors. Vector DES models consist of an arithmetic additive structure in both the state space and state transition function. This work offers a necessary and sufficient condition for verifying the K-diagnosability of a finite-state Vector DES based on state sensor outputs, employing integer linear programming and the mathematical representation of a Vector DES. Predicates are employed to diagnose faults in a Vector DES online. Specifically, we use three different kinds of predicates to divide system state outputs into different subsets, and the fault occurrence in a system is detected by checking a subset of outputs. Online diagnosis is achieved via solving integer linear programming problems. The conclusions obtained in this work are explained by means of several examples. Full article
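The predicate idea above partitions observable state outputs into subsets so that a fault can be declared from the observed output alone. A toy set-membership illustration (the state vectors and subsets are invented; the paper derives them by solving integer linear programs over the Vector DES model, which is not reproduced here):

```python
# Outputs reachable only along normal runs:
normal_outputs = {(1, 0), (2, 0)}
# Outputs reachable only after a fault:
faulty_outputs = {(0, 2), (0, 3)}
# Outputs reachable both with and without a fault stay ambiguous,
# which is exactly what bounds K in K-diagnosability.
ambiguous_outputs = {(1, 1)}

def diagnose(output):
    """Decide fault occurrence from a single observed state output."""
    if output in faulty_outputs:
        return "fault"
    if output in normal_outputs:
        return "normal"
    return "undetermined"

verdicts = [diagnose(o) for o in [(2, 0), (0, 3), (1, 1)]]
```

The ILP machinery in the paper is what certifies that every fault eventually produces an output in the unambiguous faulty subset within K steps.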
14 pages, 252 KB  
Article
Impossibility Results for Byzantine-Tolerant State Observation, Synchronization, and Graph Computation Problems
by Ajay D. Kshemkalyani and Anshuman Misra
Algorithms 2025, 18(1), 26; https://doi.org/10.3390/a18010026 - 5 Jan 2025
Cited by 1 | Viewed by 1397
Abstract
This paper considers the solvability of several fundamental problems in asynchronous message-passing distributed systems in the presence of Byzantine processes using distributed algorithms. These problems are the following: mutual exclusion, global snapshot recording, termination detection, deadlock detection, predicate detection, causal ordering, spanning tree construction, minimum spanning tree construction, all–all shortest paths computation, and maximal independent set computation. In a distributed algorithm, each process has access only to its local variables and incident edge parameters. We show the impossibility of solving these fundamental problems by proving that they require a solution to the causality determination problem which has been shown to be unsolvable in asynchronous message-passing distributed systems. Full article
(This article belongs to the Special Issue Graph Theory and Algorithmic Applications: Theoretical Developments)
20 pages, 1470 KB  
Article
Automatic Optical Path Alignment Method for Optical Biological Microscope
by Guojin Peng, Zhenming Yu, Xinjian Zhou, Guangyao Pang and Kuikui Wang
Sensors 2025, 25(1), 102; https://doi.org/10.3390/s25010102 - 27 Dec 2024
Viewed by 2468
Abstract
A high-quality optical path alignment is essential for achieving superior image quality in optical biological microscope (OBM) systems. The traditional automatic alignment methods for OBMs rely heavily on complex marker-detection techniques. This paper introduces an innovative, image-sensor-based optical path alignment approach designed for low-power objective (specifically 4×) automatic OBMs. The proposed method encompasses reference objective (RO) identification and alignment processes. For identification, a model depicting spot movement with objective rotation near the optical axis is developed, elucidating the influence of optical path parameters on spot characteristics. This insight leads to the proposal of an RO identification method utilizing an edge gradient and edge position probability. In the alignment phase, a symmetry-based weight distribution scheme for concentric arcs is introduced. A significant observation is that the received energy stabilizes with improved alignment precision, prompting the design of an advanced alignment evaluation method that surpasses conventional energy-based assessments. The experimental results confirm that the proposed RO identification method can effectively differentiate between 4× and 10× objectives across diverse light intensities and exposure levels, with a significant numerical difference of up to 100. The error–radius ratio of the weighted circular fitting method is maintained below 1.16%, and the fine alignment stage’s evaluation curve is notably sharper. Moreover, tests under various imaging conditions in artificially saturated environments indicate that the alignment estimation method, predicated on critical saturation positions, achieves an average error of 0.875 pixels. Full article
17 pages, 3169 KB  
Article
Knowledge Reasoning- and Progressive Distillation-Integrated Detection of Electrical Construction Violations
by Bin Ma, Gang Liang, Yufei Rao, Wei Guo, Wenjie Zheng and Qianming Wang
Sensors 2024, 24(24), 8216; https://doi.org/10.3390/s24248216 - 23 Dec 2024
Cited by 1 | Viewed by 1263
Abstract
To address the difficulty in detecting workers’ violation behaviors in electric power construction scenarios, this paper proposes an innovative method that integrates knowledge reasoning and progressive multi-level distillation techniques. First, standards, norms, and guidelines in the field of electric power construction are collected to build a comprehensive knowledge graph, aiming to provide accurate knowledge representation and normative analysis. Then, the knowledge graph is combined with the object-detection model in the form of triplets, where detected objects and their interactions are represented as subject–predicate–object relationship. These triplets are embedded into the model using an adaptive connection network, which dynamically weights the relevance of external knowledge to enhance detection accuracy. Furthermore, to enhance the model’s performance, the paper designs a progressive multi-level distillation strategy. On one hand, knowledge transfer is conducted at the object level, region level, and global level, significantly reducing the loss of contextual information during distillation. On the other hand, two teacher models of different scales are introduced, employing a two-stage distillation strategy where the advanced teacher guides the primary teacher in the first stage, and the primary teacher subsequently distills this knowledge to the student model in the second stage, effectively bridging the scale differences between the teacher and student models. Experimental results demonstrate that under the proposed method, the model size is reduced from 14.5 MB to 3.8 MB, and the floating-point operations (FLOPs) are reduced from 15.8 GFLOPs to 5.9 GFLOPs. Despite these optimizations, the AP50 reaches 92.4%, showing a 1.8% improvement compared to the original model. 
These results highlight the method’s effectiveness in accurately detecting workers’ violation behaviors, providing a quantitative basis for its superiority and offering a novel approach for safety management and monitoring at construction sites. Full article
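The two-stage strategy above chains soft-target distillation: the advanced teacher first guides the primary teacher, which then guides the student. A minimal sketch of the per-stage loss, a KL divergence between temperature-softened output distributions; the logits and temperature are illustrative, and the paper's object-, region-, and global-level terms are not reproduced:

```python
import math

def softmax(logits, temperature):
    exps = [math.exp(z / temperature) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

advanced = [4.0, 1.0, 0.5]   # stage 1: advanced teacher -> primary teacher
primary = [3.5, 1.2, 0.6]    # stage 2: primary teacher -> student
student = [3.0, 1.5, 0.8]
stage1 = distill_loss(advanced, primary)
stage2 = distill_loss(primary, student)
```

Routing knowledge through the intermediate teacher keeps each distillation step small, which is how the method bridges the scale gap between the largest teacher and the 3.8 MB student.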
18 pages, 4350 KB  
Article
Using Anchor-Free Object Detectors to Detect Surface Defects
by Jiaxue Liu, Chao Zhang and Jianjun Li
Processes 2024, 12(12), 2817; https://doi.org/10.3390/pr12122817 - 9 Dec 2024
Cited by 5 | Viewed by 3392
Abstract
Due to the numerous disadvantages that come with having anchors in the detection process, many researchers have been concentrating on the design of object detectors that do not rely on anchors. In this work, we use anchor-free object detectors in the field of computer vision for surface defect detection. First, we constructed a surface defect detection dataset of real wind turbine blades, which was supplemented with several methods due to the lack of natural data. Next, we used a number of popular anchor-free detectors (CenterNet, FCOS, YOLOX-S, and YOLOV8-S) to detect surface defects in this blade dataset. After experimental comparison, YOLOV8-S demonstrated the best detection performance, with high accuracy (79.55%) and a fast detection speed (9.52 fps). All subsequent experiments are based on this model. Third, we examined how the attention mechanism added to various YOLOV8-S model positions affected the two datasets—our blade dataset and the NEU dataset—and discovered that the insertion methods on the two datasets are the same when focusing on comprehensive performance. Lastly, we carried out a significant number of experimental comparisons. Full article
(This article belongs to the Section Automation Control Systems)
26 pages, 1388 KB  
Review
Research Progress on Methods for Improving the Stability of Non-Destructive Testing of Agricultural Product Quality
by Sai Xu, Hanting Wang, Xin Liang and Huazhong Lu
Foods 2024, 13(23), 3917; https://doi.org/10.3390/foods13233917 - 4 Dec 2024
Cited by 22 | Viewed by 6843
Abstract
Non-destructive testing (NDT) technology is pivotal in the quality assessment of agricultural products. In contrast to traditional manual testing, which is fraught with subjectivity, inefficiency, and the potential for sample damage, NDT technology has gained widespread application due to its advantages of objectivity, speed, and accuracy, and it has injected significant momentum into the intelligent development of the food industry and agriculture. Over the years, technological advancements have led to the development of NDT systems predicated on machine vision, spectral analysis, and bionic sensors. However, during practical application, these systems can be compromised by external environmental factors, the test samples themselves, or by the degradation and noise interference inherent in the testing equipment, leading to instability in the detection process. This instability severely impacts the accuracy and efficiency of the testing. Consequently, refining the detection methods and enhancing system stability have emerged as key focal points for research endeavors. This manuscript presents an overview of various prevalent non-destructive testing methodologies, summarizes how sample properties, external environments, and instrumentation factors affect the stability of testing in practical applications, organizes and analyzes solutions to enhance the stability of non-destructive testing of agricultural product quality based on current research, and offers recommendations for future investigations into the non-destructive testing technology of agricultural products. Full article
(This article belongs to the Section Food Analytical Methods)
24 pages, 1855 KB  
Article
Most Random-Encounter-Model Density Estimates in Camera-Based Predator–Prey Studies Are Unreliable
by Sean M. Murphy, Benjamin S. Nolan, Felicia C. Chen, Kathleen M. Longshore, Matthew T. Simes, Gabrielle A. Berry and Todd C. Esque
Animals 2024, 14(23), 3361; https://doi.org/10.3390/ani14233361 - 22 Nov 2024
Cited by 1 | Viewed by 4343
Abstract
Identifying population-level relationships between predators and their prey is often predicated on having reliable population estimates. Camera-trapping is effective for surveying terrestrial wildlife, but many species lack individually unique natural markings that are required for most abundance and density estimation methods. Analytical approaches have been developed for producing population estimates from camera-trap surveys of unmarked wildlife; however, most unmarked approaches have strict assumptions that can be cryptically violated by survey design characteristics, practitioner choice of input values, or species behavior and ecology. Using multi-year datasets from populations of an unmarked predator and its co-occurring unmarked prey, we evaluated the consequences of violating two requirements of the random encounter model (REM), one of the first developed unmarked methods. We also performed a systematic review of published REM studies, with an emphasis on predator–prey ecology studies. Empirical data analysis confirmed findings of recent research that using detections from non-randomly placed cameras (e.g., on trails) and/or borrowing movement velocity (day range) values caused volatility in density estimates. Notably, placing cameras strategically to detect the predator, as is often required to obtain sufficient sample sizes, resulted in substantial density estimate inflation for both the predator and prey species. Systematic review revealed that 91% of REM density estimates in published predator–prey ecology studies were obtained using camera-trap data or velocity values that did not meet REM requirements. We suggest considerable caution making conservation or management decisions using REM density estimates from predator–prey ecology studies. Full article
(This article belongs to the Special Issue Recent Advances and Innovation in Wildlife Population Estimation)
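The random encounter model the abstract critiques estimates density from the detection rate and assumed movement parameters, in the standard form D = (y/t) · π / (v · r · (2 + θ)), where y/t is detections per camera-day, v the day range (movement velocity), r the detection radius, and θ the detection arc. The input values below are invented; the sketch shows why a borrowed day range is dangerous, since D scales inversely with v:

```python
import math

def rem_density(detections, camera_days, v_km_day, r_km, theta_rad):
    """Random encounter model density estimate (animals per km^2)."""
    rate = detections / camera_days
    return rate * math.pi / (v_km_day * r_km * (2 + theta_rad))

d_true = rem_density(30, 300, v_km_day=2.0, r_km=0.01, theta_rad=0.175)
# Borrowing a day range twice the true value halves the estimate:
d_borrowed = rem_density(30, 300, v_km_day=4.0, r_km=0.01, theta_rad=0.175)
```

Non-random camera placement inflates the detection rate y/t in the same multiplicative way, which is the mechanism behind the density-estimate inflation the authors report.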