Search Results (3,116)

Search Parameters:
Keywords = visualized sensing

23 pages, 19394 KB  
Article
High-Resolution Mapping of Thermal Effluents in Inland Streams and Coastal Seas Using UAV-Based Thermal Infrared Imagery
by Sunyang Baek, Junhyeok Jung and Hyung-Sup Jung
Remote Sens. 2026, 18(8), 1121; https://doi.org/10.3390/rs18081121 - 9 Apr 2026
Abstract
Monitoring thermal effluent is critical for assessing aquatic ecosystem health, yet traditional satellite remote sensing and in situ point measurements often fail to capture fine-scale thermal dynamics in narrow streams and complex coastal areas due to spatiotemporal resolution limitations. This study establishes a high-precision surface water temperature mapping protocol using a low-cost Unmanned Aerial Vehicle (UAV) equipped with an uncooled thermal infrared sensor (FLIR Vue Pro R) to overcome these observational gaps. We investigated two distinct hydrological environments—an inland stream and a coastal sea—to provide initial evidence for the applicability of an in situ-based linear regression calibration model across contrasting aquatic settings. The initial uncalibrated radiometric temperatures exhibited significant bias errors reaching up to 9.2 °C in the stream and 9.4 °C in the coastal area, primarily driven by atmospheric attenuation and environmental factors. However, the proposed calibration method dramatically reduced these discrepancies, achieving Root Mean Square Errors (RMSE) of 0.43 °C and 0.42 °C, respectively, with high determination coefficients (R2 > 0.87). The derived high-resolution thermal maps successfully visualized the detailed diffusion patterns of thermal plumes, revealing a steep temperature gradient of approximately 13 °C in the stream discharge zone and a distinct 5 °C elevation in the coastal effluent area relative to the ambient water. These findings demonstrate that UAV-based thermal remote sensing, when coupled with a rigorous radiometric calibration strategy, can serve as a cost-effective and reliable tool for environmental monitoring, bridging the critical scale gap between local point measurements and regional satellite observations. Full article
(This article belongs to the Section Engineering Remote Sensing)
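The in situ-based linear regression calibration and the RMSE/R² evaluation described in the abstract above can be sketched in a few lines. The paired temperatures below are illustrative values, not the study's data, and the first-order polynomial fit is an assumed form of the calibration model:

```python
import numpy as np

# Hypothetical paired samples: uncalibrated UAV radiometric temperatures (deg C)
# and co-located in situ water temperatures (deg C). Values are illustrative only.
uav_temp = np.array([21.3, 22.8, 24.1, 25.6, 27.0, 28.4])
insitu_temp = np.array([14.9, 16.1, 17.4, 18.8, 20.1, 21.3])

# Fit a linear calibration model: T_cal = a * T_uav + b
a, b = np.polyfit(uav_temp, insitu_temp, 1)
calibrated = a * uav_temp + b

# Root Mean Square Error of the calibrated temperatures
rmse = np.sqrt(np.mean((calibrated - insitu_temp) ** 2))

# Coefficient of determination (R^2) of the fit
ss_res = np.sum((insitu_temp - calibrated) ** 2)
ss_tot = np.sum((insitu_temp - insitu_temp.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"T_cal = {a:.3f} * T_uav + {b:.3f}, RMSE = {rmse:.2f} C, R2 = {r2:.3f}")
```

With real data, `uav_temp` would hold per-pixel radiometric temperatures extracted at the in situ measurement stations.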
19 pages, 623 KB  
Article
A Unified AI-Driven Multimodal Framework Integrating Visual Sensing and Wearable Sensors for Robust Human Motion Monitoring in Biomedical Applications
by Qiang Chen, Xiaoya Wang, Ranran Chen, Surui Hua, Yufei Li, Siyuan Liu and Yan Zhan
Sensors 2026, 26(8), 2314; https://doi.org/10.3390/s26082314 - 9 Apr 2026
Abstract
This study proposes a unified multimodal temporal motion state perception framework for optical imaging-oriented biomedical applications, integrating visual skeleton sequences, inertial measurement unit (IMU) signals, and surface electromyography (EMG) signals. The framework utilizes modality-specific encoders and a cross-modal temporal alignment attention mechanism to explicitly model temporal offsets from heterogeneous sensing streams. A multimodal temporal Transformer backbone is introduced to capture long-range motion dependencies and cross-modal interactions, while an uncertainty-aware fusion module dynamically allocates weights based on modality confidence. Experimental results demonstrate that the proposed approach achieves an accuracy of 94.37%, an F1-score of 93.95%, and a mean average precision of 96.02%, outperforming mainstream baseline models. Robustness evaluations further confirm stable performance under visual occlusion and sensor noise. These results indicate that the framework provides a highly accurate and robust solution for rehabilitation assessment, sports training monitoring, and wearable intelligent interaction systems. Full article
(This article belongs to the Special Issue Application of Optical Imaging in Medical and Biomedical Research)

26 pages, 5800 KB  
Article
Agentic AI-Based IoT Precision Agriculture Framework—Our Vision and Challenges
by Danco Davcev, Slobodan Kalajdziski, Ivica Dimitrovski, Ivan Kitanovski and Kosta Mitreski
AgriEngineering 2026, 8(4), 147; https://doi.org/10.3390/agriengineering8040147 - 9 Apr 2026
Abstract
Accurate, timely, and resource-efficient decision-making is critical for sustainable precision agriculture. This paper proposes an agentic AI-based Internet of Things (IoT) framework that enables coordinated, closed-loop perception–decision–action processes across heterogeneous sensing and actuation components. The framework models agricultural systems as distributed collections of goal-driven agents responsible for multimodal sensing, uncertainty-aware reasoning, and adaptive decision-making. To provide a structured foundation, the proposed architecture is formalized within a Multi-Agent Partially Observable Markov Decision Process (MPOMDP) perspective, enabling systematic treatment of coordination, uncertainty, and decision policies. The framework integrates multimodal information sources, including vision-based perception and environmental sensing, and defines mechanisms for their fusion and use in system-level decision-making. A proof-of-concept instantiation is presented using publicly available datasets, combining visual perception models and tabular reasoning models within the proposed agentic workflow. The experiments are designed to demonstrate the feasibility, modularity, and coordination capabilities of the framework, rather than to benchmark predictive performance or provide field-validated evaluation. The results illustrate how multimodal information can be integrated to support adaptive and resource-aware decision processes. Finally, the paper discusses key challenges and outlines directions for future work, including real-world deployment, integration with physical actuation systems, and validation under operational conditions. Full article
(This article belongs to the Special Issue The Future of Artificial Intelligence in Agriculture, 2nd Edition)

25 pages, 4570 KB  
Article
Digital Twin Framework for Structural Health Monitoring of Transmission Towers: Integrating BIM, IoT and FEM for Wind–Flood Multi-Hazard Simulation
by Xiaoqing Qi, Huaichao Wang, Xiaoyu Xiong, Anqi Zhou, Qing Sun and Qiang Zhang
Appl. Sci. 2026, 16(8), 3620; https://doi.org/10.3390/app16083620 - 8 Apr 2026
Abstract
Transmission towers, as critical infrastructure in power systems, are frequently threatened by multiple hazards such as strong winds and flood scour. Traditional structural health monitoring methods face limitations in data feedback timeliness and mechanical interpretation, making real-time condition awareness and early warning under disaster scenarios challenging. To address these issues, this paper proposes a digital twin framework for transmission tower structures, integrating Building Information Modeling (BIM), Internet of Things (IoT) technology, and the Finite Element Method (FEM) for structural health monitoring and visual warning under wind loads and flood scour effects. The framework achieves cross-platform collaboration through the FEM Open Application Programming Interface (OAPI) and Python scripts. In the physical domain, fluctuating wind loads are simulated based on the Davenport spectrum, flood scour depth is modeled using the HEC-18 formulation, and foundation constraint degradation is represented through nonlinear spring stiffness reduction. In the FEM domain, dynamic time-history analyses are conducted to obtain structural responses. In the BIM domain, a three-level warning mechanism based on stress change rate (ΔR) is established to achieve intuitive rendering and dynamic feedback of structural damage. A 44.4 m high latticed angle steel tower is employed as the case study for validation. Results demonstrate that the simulated wind spectrum closely matches the theoretical target spectrum, confirming the validity of the load input. A critical scour evolution threshold of 40% is identified, beyond which the first two natural frequencies exhibit nonlinear decay with a maximum reduction of 80.9%. Non-uniform scour induces significant load transfer, with axial forces at leeside nodes increasing from 27 kN to 54 kN. 
During the 0–60 s wind loading process, BIM visualization accurately captures the full stress evolution from the tower base to the upper structure, showing excellent agreement with FEM results. The proposed framework establishes a closed-loop interaction mechanism of “physical sensing–digital simulation–visual warning”, effectively enhancing the timeliness and interpretability of structural health monitoring for transmission towers under multiple hazards, providing an innovative approach for intelligent disaster prevention in power infrastructure. Full article
(This article belongs to the Section Civil Engineering)
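The three-level warning mechanism on stress change rate (ΔR) described above might look like the following sketch. The 10% and 25% thresholds and the colour mapping are assumptions for illustration, not the paper's calibrated warning bands:

```python
def warning_level(delta_r: float, minor: float = 0.10, severe: float = 0.25) -> str:
    """Map a stress change rate (delta R) onto a three-level warning.

    The 10% / 25% thresholds are illustrative assumptions; the paper defines
    its own delta-R bands for the BIM colour rendering.
    """
    if delta_r < minor:
        return "normal"   # e.g. rendered green in the BIM model
    if delta_r < severe:
        return "caution"  # e.g. rendered yellow
    return "alarm"        # e.g. rendered red

print(warning_level(0.05), warning_level(0.18), warning_level(0.40))
```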

30 pages, 28721 KB  
Article
Dual-Arm Robotic Textile Unfolding with Depth-Corrected Perception and Fold Resolution
by Tilla Egerhei Båserud, Joakim Johansen, Ajit Jha and Ilya Tyapin
Robotics 2026, 15(4), 78; https://doi.org/10.3390/robotics15040078 - 8 Apr 2026
Abstract
Reliable textile recycling requires automated unfolding to expose hidden hard components such as zippers, buttons, and metal fasteners, which otherwise risk damaging machinery and compromising downstream processes. This paper presents the design and implementation of an automated textile unfolding system based on a dual-arm robotic manipulation framework. The system uses two Interbotix WidowX 250s 6-DoF robotic arms and an Intel RealSense L515 LiDAR camera for visual perception. The unfolding process consists of three stages: initial dual-arm stretching to reduce major folds, refinement through a second stretch targeting the lower region, and a machine-learning stage that employs a YOLOv11 framework trained on depth-encoded textile images, followed by a depth-gradient-based estimator for fold direction. The system applies an extremity-based grasping strategy that selects leftmost and rightmost textile points from a custom error-corrected depth map, enabling robust grasp point selection, and a fold direction estimation method based on depth gradients around the detected fold. The most confident fold region is selected, an unfolding direction is determined using depth ranking, and the textile is manipulated until a flat state is confirmed through depth uniformity. Experiments show that depth correction significantly reduces spatial error in the robot frame, while segmentation and extremity detection achieve high accuracy across varied fold configurations, and the YOLOv11n-based model reaches 98.8% classification accuracy, while fold direction is estimated correctly in 87% of test cases. By enabling robust, largely autonomous textile unfolding, the system demonstrates a practical approach that could support safer and more efficient automated textile recycling workflows. Full article
(This article belongs to the Section Sensors and Control in Robotics)
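The depth-gradient idea for fold direction estimation can be illustrated with a toy depth map. The window size, the four-direction comparison, and the synthetic ridge below are all assumptions for the sketch, not the paper's estimator:

```python
import numpy as np

def fold_direction(depth, cy, cx, win=5):
    """Estimate an unfolding direction from depth around a detected fold centre.

    Simplified stand-in for the depth-gradient method: compare mean depth on
    the four sides of the fold and pick the side closest to the camera, where
    the folded layer is assumed to lie.
    """
    patch = depth[cy - win:cy + win + 1, cx - win:cx + win + 1]
    means = {
        "up":    patch[:win, :].mean(),
        "down":  patch[win + 1:, :].mean(),
        "left":  patch[:, :win].mean(),
        "right": patch[:, win + 1:].mean(),
    }
    return min(means, key=means.get)  # smallest depth = closest to camera

# Synthetic depth map (metres): a flat textile whose right side carries a
# folded layer sitting closer to the camera
depth = np.full((20, 20), 0.80)
depth[:, 12:] = 0.60
print(fold_direction(depth, 10, 10))
```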

18 pages, 835 KB  
Article
Prism-Based Mapping of 6G Use Cases Integrating Technical Requirements and Multidimensional Service Classification
by Sunhye Kim, Yoon Seo, Seung-Hoon Hwang and Byungun Yoon
Systems 2026, 14(4), 404; https://doi.org/10.3390/systems14040404 - 7 Apr 2026
Abstract
Purpose: With the advent of sixth-generation (6G) communication technology, systematic mapping of its use cases to associated technical requirements has become essential for accelerating standardization, guiding R&D investment, and informing policy formulation. Methods: This study consolidated 65 use case scenarios from key academic and institutional 6G sources into 21 representative cases. A three-round Delphi-based expert assessment, employing a five-point Likert scale and interquartile-range-based consensus monitoring, was used to assign primary and secondary technical requirements across six core dimensions: immersive communication, massive communication, hyper-reliable low-latency communication, integrated sensing and communication, integrated artificial intelligence and communication (IAAC), and ubiquitous connectivity. A three-dimensional (3D) prism-based visualization framework was subsequently developed to represent the interdependencies among these requirements. Results: IAAC and massive communication emerged as the most critical requirements, each functioning as a primary or secondary driver across most use cases. The prism framework revealed hierarchical and complementary relationships among the six dimensions that conventional 2D wheel diagrams cannot adequately capture. Furthermore, a nine-criterion multidimensional classification framework, encompassing data transmission mode, decision-making mode, communication flow, interaction type, device type, deployment type, human activity innovation, user type, and personalization level, was developed, offering industry-specific guidance for service design. Collectively, the proposed framework supports user-centric design, informs strategic technology planning, and contributes to policy development while acknowledging existing limitations in quantitative mapping and economic analysis. Full article
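The interquartile-range consensus check used in the Delphi rounds described above can be illustrated as below. The expert ratings and the IQR ≤ 1 cutoff are assumed for the example, not taken from the study:

```python
import numpy as np

def iqr_consensus(ratings, threshold=1.0):
    """Return (IQR, consensus reached?) for one panel of five-point Likert ratings.

    An interquartile range at or below `threshold` is a common Delphi consensus
    criterion; the exact cutoff here is an assumption.
    """
    q1, q3 = np.percentile(ratings, [25, 75])
    iqr = q3 - q1
    return iqr, iqr <= threshold

# Illustrative round-3 expert ratings for one use case / requirement pair
round3 = [4, 4, 5, 4, 4, 5, 4, 3, 4]
iqr, ok = iqr_consensus(round3)
print(f"IQR = {iqr}, consensus reached: {ok}")
```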

13 pages, 744 KB  
Entry
Spatiotemporal Data Science
by Chaowei Yang, Anusha Srirenganathan Malarvizhi, Manzhu Yu, Qunying Huang, Lingbo Liu, Zifu Wang, Daniel Q. Duffy, Siqin Wang, Seren Smith, Shuming Bao and Nan Ding
Encyclopedia 2026, 6(4), 84; https://doi.org/10.3390/encyclopedia6040084 - 6 Apr 2026
Definition
The world evolves continuously across space and time. Massive volumes of data are generated through sensing, simulation, remote observation, and human activities, capturing dynamic processes in environmental, social, economic, and engineered systems. Critical insights are embedded within these large-scale spatiotemporal datasets. Spatiotemporal Data Science provides a conceptual and methodological framework for analyzing such data by integrating spatiotemporal thinking, computational infrastructure, artificial intelligence, and domain knowledge. The field advances methods for data acquisition, harmonization, modeling, visualization, and decision support, enabling applications in natural disaster response, public health, climate adaptation, infrastructure resilience, and geopolitical analysis. By leveraging emerging technologies—including generative Artificial Intelligence (AI), large-scale cloud platforms, Graphics Processing Unit (GPU) acceleration, and digital twin systems—Spatiotemporal Data Science enables scalable, interoperable, and solution-oriented research and innovation. It represents a critical frontier for scientific discovery, engineering advancement, technological innovation, education, and societal benefit. Spatiotemporal Data Science is a transdisciplinary field that studies and models dynamic phenomena across space and time by integrating spatial theory, temporal reasoning, artificial intelligence, and scalable computational infrastructure. It enables the development of adaptive, predictive, and increasingly autonomous systems for understanding and managing complex real-world processes. Full article
(This article belongs to the Collection Data Science)

37 pages, 17270 KB  
Article
An Intelligent Gated Fusion Network for Waterbody Recognition in Multispectral Remote Sensing Imagery
by Tong Zhao, Chuanxun Hou, Zhili Zhang and Zhaofa Zhou
Remote Sens. 2026, 18(7), 1088; https://doi.org/10.3390/rs18071088 - 4 Apr 2026
Abstract
Accurate water body segmentation from multispectral remote sensing imagery is critical for hydrological monitoring and environmental management. However, leveraging transfer learning with pre-trained models remains challenging due to the dimensional mismatch between three-channel RGB-based architectures and multi-band spectral data. To address this, this study proposes a novel segmentation network, termed Intelligent Gated Fusion Network (IGF-Net), built upon a dual-branch feature encoder module and a core Intelligent Gated Fusion Module (IGFM). The IGFM achieves adaptive fusion of visual and spectral features through a cascaded mechanism integrating differences-and-commonalities parallel modeling, channel-context priors, and adaptive temperature control. We evaluate IGF-Net on the newly constructed Tiangong-2 remote sensing image water body semantic segmentation dataset, which comprises 3776 meticulously annotated multispectral image patches. Comprehensive experiments demonstrate that IGF-Net achieves strong and consistent performance on this dataset, with an Intersection over Union of 0.8742 and a Dice coefficient of 0.9239, consistently outperforming the evaluated baseline methods, such as FCN, U-Net, and DeepLabv3+. It also exhibits strong cross-dataset generalization capabilities on an independent Sentinel-2 water segmentation dataset. Ablation studies and visualization analyses confirm that the proposed fusion strategy significantly enhances segmentation accuracy and stability, particularly in complex scenarios. Full article
(This article belongs to the Topic Advances in Hydrological Remote Sensing)

16 pages, 2595 KB  
Article
Drone Rider: Effects of Wind Conditions on the Sense of Flight
by Hanyi Yang, Shogo Okamoto and Hong Shen
Appl. Sci. 2026, 16(7), 3544; https://doi.org/10.3390/app16073544 - 4 Apr 2026
Abstract
Recent advances in extended reality (XR) have enabled immersive virtual flight experiences for applications such as entertainment and teleoperation support. However, XR-based flight systems that rely primarily on audiovisual cues often fail to evoke a compelling sense of flight and embodied sensation. This study investigates how adaptive wind feedback enhances subjective flight perception in a virtual flight simulation system, Drone Rider. We implemented direction- and velocity-adaptive wind feedback that synchronizes airflow intensity and direction with the user’s motion in the virtual environment, focusing on perceptual effects in a controlled manner to identify key design factors, rather than reproducing aerodynamically accurate airflow. To explore flexible system configurations, two fan installation positions were compared: front-mounted and bottom-mounted. A questionnaire-based user study revealed that adaptive wind feedback significantly enhanced the sense of flight, self-location, and agency compared with the constant-wind and no-wind conditions. However, no significant differences were observed between velocity-adaptive wind and direction- and velocity-adaptive wind conditions. Furthermore, wind delivered from beneath the user yielded flight sensations comparable to those generated by front-mounted airflow. These findings suggest that temporal coupling between airflow intensity and visual motion plays a central role in XR flight perception and provide practical design insights for immersive and flexible XR-based flight simulation systems. Full article

31 pages, 6459 KB  
Article
Cooperative Hybrid Domain Network for Salient Object Detection in Optical Remote Sensing Images
by Yi Gu, Jianhang Zhou and Lelei Yan
Remote Sens. 2026, 18(7), 1087; https://doi.org/10.3390/rs18071087 - 4 Apr 2026
Abstract
Salient Object Detection (SOD) in Optical Remote Sensing Images (ORSIs) aims to localize and segment visually prominent objects amidst complex backgrounds and extreme scale variations. However, we observe that current frequency-aware methods typically rely on a naive feature aggregation paradigm, merging frequency and spatial features via simple concatenation, addition, or direct combination. This shallow interaction overlooks the inherent semantic misalignment between the two domains, resulting in feature redundancy and poor boundary delineation. To address this limitation, we propose the Cooperative Hybrid Domain Network (CHDNet), a framework designed to facilitate synergistic cooperation between heterogeneous domains. Specifically, we propose the Cross-Domain Multi-Head Self-Attention (CD-MHSA) mechanism as a semantic bridge following the encoder. It employs a dimension expansion strategy to construct a Unified Interaction Manifold and utilizes a Frequency Anchor Interaction mechanism to achieve precise modulation of spatial textures using global spectral cues. Furthermore, to address the dual challenges of lacking explicit interpretation mechanisms for semantic co-occurrence and the susceptibility of topological structures to fracture in complex scenes during the decoding phase, we design a Multi-Branch Cooperative Decoder (MBCD) comprising three parallel paths: edge semantics, global relations, and reverse correction. This module dynamically integrates these heterogeneous clues through a Cooperative Fusion Strategy, combining explicit global dependency modeling with dual-domain reverse mining. Extensive experiments on multiple benchmark datasets demonstrate that the proposed CHDNet achieves performance superior to state-of-the-art (SOTA) methods. Full article

15 pages, 1608 KB  
Article
Early Detection and Differentiation of Dragon Fruit Plant Diseases Using Optical Spectral Reflectance
by Priyanka Belbase and Maruthi Sridhar Balaji Bhaskar
Appl. Sci. 2026, 16(7), 3480; https://doi.org/10.3390/app16073480 - 2 Apr 2026
Abstract
Dragon fruit (Hylocereus spp.) is an emerging crop in the tropics and subtropics, but its production is increasingly threatened by diseases that reduce yield and profitability. Early diagnosis of these diseases is crucial for timely intervention, yet visual symptoms often appear only after significant infection has occurred. The study aims to evaluate how optical spectral reflectance can detect dragon fruit diseases and identify the most responsive spectral regions. In this study, six major dragon fruit stem diseases: Neoscytalidium stem canker, stem sunburn, anthracnose, Botryosphaeria stem canker, Bipolaris stem rot, and bacterial soft rot were characterized by the goal of identifying unique spectral signatures for early detection and differentiation of each disease. Seventy-two potted dragon fruit plants of three distinct species were grown under four organic vermicompost treatments (0, 5, 10, 20 tons/acre) in both open-field and high-tunnel conditions together, in a randomized complete block design. A handheld spectroradiometer (350–2500 nm) was used to collect reflectance from the diseased and healthy cladodes (stem segment). Various spectral vegetative indices were computed to identify disease-specific features. The results revealed distinct spectral features for each disease. Infected cladodes consistently exhibited higher reflectance especially in the visible region (400–700 nm) and the near-infrared region (900–2500 nm) of the spectrum than healthy cladodes. The Normalized Difference Vegetative Index (NDVI), Green Normalized Difference Vegetative Index (GNDVI), and Spectral Ratio (SR) spectral indices were significantly higher in healthy plants than in diseased ones, reflecting higher chlorophyll concentration and plant biomass. Conversely, the 1110/810 ratio was lower in healthy plants than in diseased plants, suggesting a more compact internal plant structure. 
Statistical analysis revealed highly significant differences (p < 0.00001) between healthy and diseased spectra in the Red, Green and NIR regions. Linear Discriminant Analysis (LDA) achieved the highest classification accuracy (OA = 0.642, κ = 0.488), though performance was limited for minority classes. These findings demonstrate that targeted spectral sensing can identify dragon fruit diseases before obvious symptoms emerge. By pinpointing disease-specific spectral indices, our study paves the way for early-warning tools such as targeted multispectral sensors or drone-based imaging that would enable growers to intervene sooner and limit losses. These results highlight the potential for development of UAV-based or portable spectral sensors for large-scale, near real-time disease monitoring in dragon fruit production. Full article
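The vegetation indices compared in the study above reduce to simple band ratios. The reflectance values below are assumed for illustration, not measured; real inputs would come from the 350–2500 nm spectroradiometer sweep:

```python
# Illustrative mean reflectance values (0-1) at the band centres used by the
# indices; the numbers below are assumed, not the study's measurements.
red, green, nir = 0.08, 0.12, 0.45

ndvi = (nir - red) / (nir + red)       # Normalized Difference Vegetative Index
gndvi = (nir - green) / (nir + green)  # Green NDVI
sr = nir / red                         # Spectral (simple) Ratio

print(f"NDVI = {ndvi:.3f}, GNDVI = {gndvi:.3f}, SR = {sr:.2f}")
```

Higher NDVI/GNDVI/SR would correspond to the healthy cladodes reported in the abstract; diseased cladodes, with their elevated visible reflectance, would push these ratios down.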

33 pages, 2402 KB  
Review
Toward Advanced Sensing and Data-Driven Approaches for Maturity Assessment of Indeterminate Peanut Cropping Systems: Review of Current State and Prospects
by Sathish Raymond Emmanuel Sahayaraj, Abhilash K. Chandel, Pius Jjagwe, Ranadheer Reddy Vennam, Maria Balota and Arunachalam Manimozhian
Sensors 2026, 26(7), 2208; https://doi.org/10.3390/s26072208 - 2 Apr 2026
Abstract
Determining the optimal harvest time is among the most critical economic decisions for peanut (Arachis hypogaea L.) growers, directly influencing yield, quality, and market value. Unlike many other crops, peanuts are indeterminate, continuing to flower and produce pods throughout their life cycle. As a result, pod development and maturation are asynchronous, making harvest timing particularly challenging. Conventional maturity estimation techniques, including the hull scrape method, pod blasting, and visual maturity profiling, are invasive, labor-intensive, time-consuming, and spatially limited. Moreover, differences in cultivar maturity rates and agroclimatic conditions exacerbate inconsistencies in maturity prediction. These challenges highlight the urgent need for scalable, objective, and data-driven methods to support growers in achieving optimal harvest outcomes. This review synthesizes the current understanding of peanut pod maturity and evaluates existing traditional and non-invasive approaches for maturity estimation. It aims to identify the limitations of conventional techniques and explore the integration of advanced sensing technologies, artificial intelligence (AI), and geospatial analytics to enhance precision and scalability in peanut maturity assessment and harvest decision-making. This review examines traditional destructive techniques such as the hull scrape method and pod blasting, followed by emerging non-invasive methods employing proximal and remote sensing platforms. Applications of vegetation indices, multispectral and hyperspectral imaging, and AI-based data analytics are discussed in the context of maturity prediction. Additionally, the potential of multimodal remote sensing data fusion and digital frameworks integrating spatial big data analytics, centralized data management, and cloud-based graphical interfaces is explored as a pathway toward end-to-end decision-support systems. 
Recent advances in non-invasive sensing and AI-assisted modeling have demonstrated significant improvements in scalability, precision, and automation compared with traditional manual approaches. However, their effectiveness remains constrained by the limited inclusion of agroclimatic, phenological, and cultivar-specific variables. Furthermore, the translation of model outputs into actionable, field-level harvest decisions is still underdeveloped, underscoring the need for integrated, user-centric digital infrastructure. Achieving a robust and transferable digital peanut maturity estimation system will require comprehensive ground-truth data across cultivars, regions, and growing seasons. Multidisciplinary collaborations among agronomists, data scientists, growers, and technology providers will be essential for developing practical, field-ready solutions. Integrating AI, multimodal sensing, and geospatial analytics holds immense potential to transform peanut maturity estimation. Such innovations promise to enhance harvest precision, economic returns, and sustainability while reducing manual effort and uncertainty, ultimately improving the efficiency and quality of life for peanut producers worldwide. Full article
(This article belongs to the Special Issue Feature Papers in Smart Agriculture 2026)
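The vegetation indices mentioned in this abstract can be illustrated with the standard NDVI formula, NDVI = (NIR − Red) / (NIR + Red). The sketch below is a generic example, not code from the reviewed article; the band arrays and reflectance values are invented for illustration.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)  # eps guards against zero reflectance

# Toy 2x2 reflectance rasters standing in for co-registered multispectral bands.
nir_band = np.array([[0.60, 0.55], [0.50, 0.20]])
red_band = np.array([[0.10, 0.12], [0.15, 0.18]])
print(ndvi(nir_band, red_band).round(3))
```

Higher NDVI values indicate denser green vegetation; in maturity studies such per-pixel indices are typically aggregated per plot and tracked over the season.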
26 pages, 8175 KB  
Article
In Situ Damage Detection Method for Metallic Shear Plate Dampers Based on the Active Sensing Method and Machine Learning Algorithms
by Yunfei Li, Feng Xiong, Hong Liu, Xiongfei Li, Huanlong Ding, Yi Liao and Yi Zeng
Sensors 2026, 26(7), 2203; https://doi.org/10.3390/s26072203 - 2 Apr 2026
Viewed by 248
Abstract
Metallic Shear Plate Dampers (MSPDs) are essential components in passive vibration control systems and require rapid post-earthquake inspection to assess damage and determine replacement needs. Traditional visual inspection methods suffer from low efficiency and limited ability to detect concealed damage. This study proposes a novel MSPD damage detection method based on active sensing and the k-nearest neighbor (KNN) algorithm, featuring high accuracy, efficiency, and low cost. Quasi-static tests were conducted to simulate various damage states. Sweep-frequency excitation was applied using a charge amplifier, and piezoelectric sensors were employed to generate and receive stress wave signals corresponding to different damage conditions. The acquired signals were processed using wavelet packet transform (WPT) and energy spectrum analysis to extract discriminative time–frequency features, which were used to train and validate the KNN model. Results show that the model achieved a validation accuracy of 98.9% using all valid data and 98.1% using a single excitation-sensing channel. When tested on an MSPD with a similar overall structure but lacking stiffeners, the model achieved an accuracy of 92.6% in distinguishing between healthy and damaged states. This indicates that the proposed method has good robustness and practical potential for MSPDs with similar damage evolution and failure modes despite certain structural variations. Full article
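The processing chain this abstract describes, wavelet packet energy features feeding a k-nearest neighbor classifier, can be sketched in a minimal, self-contained form. The sketch below uses a Haar wavelet packet and synthetic stand-in signals; the paper's actual wavelet, signal parameters, and class definitions are not given here, so every concrete choice is an assumption for illustration.

```python
import numpy as np

def haar_wpt_energies(signal, levels=3):
    """Leaf-node energy spectrum of a Haar wavelet packet transform.
    Signal length must be divisible by 2**levels."""
    nodes = [np.asarray(signal, dtype=float)]
    for _ in range(levels):
        nxt = []
        for x in nodes:
            nxt.append((x[0::2] + x[1::2]) / np.sqrt(2))  # approximation band
            nxt.append((x[0::2] - x[1::2]) / np.sqrt(2))  # detail band
        nodes = nxt
    energy = np.array([np.sum(x ** 2) for x in nodes])
    return energy / energy.sum()  # normalized energy spectrum feature

def knn_predict(X, y, query, k=3):
    """Majority vote among the k nearest training samples (Euclidean)."""
    order = np.argsort(np.linalg.norm(X - query, axis=1))[:k]
    labels, counts = np.unique(y[order], return_counts=True)
    return labels[np.argmax(counts)]

# Synthetic stand-ins for stress-wave signals: "damage" is modeled here as a
# shift of energy toward lower frequencies (an assumption for illustration).
t = np.arange(64) / 64
healthy = [np.sin(2 * np.pi * 16 * t + p) for p in np.linspace(0, 1, 5)]
damaged = [np.sin(2 * np.pi * 2 * t + p) for p in np.linspace(0, 1, 5)]
X = np.array([haar_wpt_energies(s) for s in healthy + damaged])
y = np.array([0] * 5 + [1] * 5)  # 0 = healthy, 1 = damaged

query = haar_wpt_energies(np.sin(2 * np.pi * 16 * t + 0.37))
print("predicted class:", knn_predict(X, y, query))
```

The normalized energy vector plays the role of the WPT energy-spectrum feature in the paper; in practice the features would come from measured piezoelectric sensor signals rather than synthetic sines.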
32 pages, 9172 KB  
Article
Design, Modeling, Self-Calibration and Grasping Method for Modular Cable-Driven Parallel Robots
by Wanlin Mai, Yonghe Wang, Zhiquan Yang, Bin Zhu, Lin Liu and Jianqing Peng
Sensors 2026, 26(7), 2204; https://doi.org/10.3390/s26072204 - 2 Apr 2026
Viewed by 199
Abstract
Cable-driven parallel robots (CDPRs) are attractive for large-space manipulation because of their lightweight structure, large workspace, and reconfigurability. However, existing systems still face three practical challenges: limited modularity of the mechanical architecture, repeated calibration after reconfiguration, and insufficient integration between visual perception and grasp execution. To address these issues, this paper presents a modular cable-driven parallel robot (MCDPR), together with its kinematic modeling, vision-based self-calibration, and visual grasping methods. First, a modular mechanical architecture is developed in which the drive, sensing, and cable-guiding functions are integrated to support rapid assembly/disassembly, convenient debugging, and cable anti-slack operation. Second, a pulley-considered multilayer kinematic model is established, and a vision-based self-calibration method is proposed to identify the structural parameters after assembly using onboard sensing and AprilTag observations, thereby reducing the number of recalibrations required during robot operation after reconfiguration. Third, a vision-guided bin-picking method is developed by combining RGB-D perception, coordinate transformation, and the calibrated robot model. Simulation and prototype experiments are conducted to validate the proposed system. A software/hardware combined validation framework is established, in which the CoppeliaSim-based simulation and the hardware prototype are used together to verify the proposed design and methods. In simulation, self-calibration reduces the Euclidean grasping position error from 0.371 mm to 0.048 mm and the orientation error from 0.071° to 0.004°. In experiments, the relative position error is reduced by 58.33% after self-calibration. Full article
(This article belongs to the Special Issue Motor Control and Remote Handling in Robotic Applications)
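The paper's self-calibration identifies structural parameters from onboard sensing and AprilTag observations. As a generic illustration of the underlying idea, identifying a geometric parameter by nonlinear least squares from redundant measurements, the sketch below fits a single cable anchor position from known end-effector positions and measured cable lengths via Gauss-Newton. All geometry and data are invented; this is not the authors' formulation.

```python
import numpy as np

def calibrate_anchor(points, lengths, a0, iters=30):
    """Gauss-Newton fit of one cable anchor position from end-effector
    positions `points` (m x 3) and measured cable lengths `lengths` (m,)."""
    a = np.asarray(a0, dtype=float)
    for _ in range(iters):
        diff = a - points                        # anchor-to-point vectors
        dist = np.maximum(np.linalg.norm(diff, axis=1), 1e-12)
        r = dist - lengths                       # cable-length residuals
        J = diff / dist[:, None]                 # Jacobian d(dist)/d(a)
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        a = a + step
        if np.linalg.norm(step) < 1e-12:
            break                                # converged
    return a

rng = np.random.default_rng(0)
true_anchor = np.array([2.0, 3.0, 1.0])
P = rng.uniform(-1.0, 1.0, size=(8, 3))         # known end-effector positions
L = np.linalg.norm(true_anchor - P, axis=1)     # noiseless measured lengths
est = calibrate_anchor(P, L, a0=np.array([0.0, 0.0, 0.0]))
print(np.round(est, 6))  # converges toward [2, 3, 1]
```

With noisy measurements the same solver returns the least-squares estimate; a full CDPR calibration would stack such residuals for every cable and include the pulley model the paper describes.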
33 pages, 16801 KB  
Article
A GNSS–Vision Integrated Autonomous Navigation System for Trellis Orchard Transportation Robots
by Huaiyang Liu, Haiyang Gu, Yong Wang, Tianjiao Zhong, Tong Tian and Changxing Geng
AI 2026, 7(4), 125; https://doi.org/10.3390/ai7040125 - 1 Apr 2026
Viewed by 282
Abstract
Autonomous navigation is essential for orchard transportation robots to support automated operations and precision orchard management. However, in trellis orchards, dense vegetation and complex canopy structures often degrade the stability of GNSS-based navigation in in-row environments. To address this issue, this study proposes a GNSS–vision integrated navigation framework for orchard transportation robots. The performance of GNSS-based navigation in out-of-row environments and vision-based navigation in in-row environments was experimentally evaluated under representative orchard operating conditions. In out-of-row areas, the robot employs GNSS-based path planning and trajectory tracking to achieve reliable navigation in relatively open, lightly occluded environments. During in-row navigation, a deep learning-based real-time object detection approach is used to detect tree trunks and trellis supporting structures. By integrating corner-point selection with temporal RANSAC-based line fitting, a stable orchard row structure is constructed to generate robust navigation references. The visual perception module serves as the front-end sensing component of the navigation system and is designed to be independent of specific object detection architectures, allowing flexible integration with different real-time detection models. Field experiments were conducted under various orchard layouts and growth stages. The average lateral deviation of GNSS-based navigation in out-of-row scenarios ranged from 0.093 to 0.221 m, while the average heading deviation of in-row visual navigation was approximately 5.23° at a robot speed of 0.6 m/s. These results indicate that the proposed perception and navigation methods can maintain stable navigation performance in their respective operating scenarios in trellis orchard environments. The experimental findings provide a practical and engineering-oriented basis for future research on automatic navigation mode switching and system-level integration of orchard transportation robots. Full article
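The RANSAC-based line fitting mentioned in the abstract can be illustrated with a minimal single-frame RANSAC line fit over 2-D trunk detections. The point data, inlier threshold, and the total-least-squares refinement below are assumptions for illustration, not the authors' implementation (which additionally aggregates fits over time).

```python
import numpy as np

def ransac_line(pts, n_iter=300, tol=0.05, seed=0):
    """Fit a 2-D line to `pts` (m x 2) with RANSAC, then refine on the
    inliers via total least squares (principal direction of the cloud)."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        n = np.linalg.norm(d)
        if n < 1e-9:
            continue                      # degenerate sample, skip
        d = d / n
        v = pts - pts[i]
        # perpendicular distance to the candidate line (2-D cross product)
        dist = np.abs(v[:, 0] * d[1] - v[:, 1] * d[0])
        inliers = dist < tol
        if best is None or inliers.sum() > best.sum():
            best = inliers
    ins = pts[best]
    c = ins.mean(axis=0)
    _, _, vt = np.linalg.svd(ins - c)
    return c, vt[0]                       # point on the line, unit direction

rng = np.random.default_rng(1)
xs = np.linspace(0.0, 10.0, 25)
row = np.column_stack([xs, 0.5 + rng.normal(0, 0.01, xs.size)])  # trunk points
outliers = rng.uniform(0.0, 10.0, size=(5, 2))                   # spurious detections
pts = np.vstack([row, outliers])
center, direction = ransac_line(pts)
print("direction:", np.round(direction, 3))  # nearly parallel to the row axis
```

The recovered line would serve as the in-row navigation reference; a temporal variant keeps the fit stable by reusing inlier sets across consecutive frames.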