Search Results (23)

Search Parameters:
Keywords = aircraft and ship detection

12 pages, 839 KB  
Article
ISAR Image Quality Assessment Based on Visual Attention Model
by Jun Zhang, Zhicheng Zhao and Xilan Tian
Appl. Sci. 2025, 15(4), 1996; https://doi.org/10.3390/app15041996 - 14 Feb 2025
Viewed by 671
Abstract
The quality of ISAR (Inverse Synthetic Aperture Radar) images has a significant impact on the detection and recognition of targets. Therefore, ISAR image quality assessment is a fundamental prerequisite and primary link in the utilization of ISAR images. Previous ISAR image quality assessment methods typically extract hand-crafted features or use simple multi-layer networks to extract local features. Hand-crafted features and local features from networks usually lack the global information of ISAR images. Furthermore, most deep neural networks obtain feature representations by aligning the predicted quality score with the ground truth, neglecting to explore the strong correlations between features and quality scores in the stage of feature extraction. This study proposes a Gramin Transformer to explore the similarity and diversity of features extracted from different images, thus obtaining features containing quality-related information. The Gramin matrix of features is computed to obtain the score token through the self-attention layer. This prompts the network to learn more discriminative features, which are closely associated with quality scores. While the Transformer architecture extracts global information, the Channel Attention Block (CAB) captures complementary information from different channels in an image, aggregating and mining information from these channels to provide a more comprehensive evaluation of ISAR images. ISAR images are formed from target scattering points against a background containing substantial noise, and the Inter-Region Attention Block (IRAB) is utilized to extract local scattering point features, which determine the clarity of the target. In addition, extensive experiments are conducted on the ISAR image dataset (including space stations, ships, aircraft, etc.). The evaluation results of our method on the dataset are significantly superior to those of traditional feature extraction methods and existing image quality assessment methods. Full article
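
As a hedged illustration of the Gram-matrix score-token mechanism summarized in this abstract, the following PyTorch snippet computes the Gram matrix of backbone features and reads a quality score from a learnable token passed through a self-attention layer. The module name, dimensions, and backbone interface are assumptions for illustration, not the authors' Gramin Transformer.

```python
# Illustrative only: a Gram-matrix score-token head (assumed names/dims), not the
# authors' Gramin Transformer architecture.
import torch
import torch.nn as nn

class GramScoreHead(nn.Module):
    def __init__(self, num_channels: int, embed_dim: int = 128, num_heads: int = 4):
        super().__init__()
        self.proj = nn.Linear(num_channels, embed_dim)        # embed each Gram row as a token
        self.score_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.head = nn.Linear(embed_dim, 1)                   # regress a scalar quality score

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) feature maps from any backbone
        b, c, h, w = feats.shape
        f = feats.flatten(2)                                  # (B, C, H*W)
        gram = torch.bmm(f, f.transpose(1, 2)) / (h * w)      # (B, C, C) Gram matrix
        tokens = self.proj(gram)                              # (B, C, D), one token per channel
        seq = torch.cat([self.score_token.expand(b, -1, -1), tokens], dim=1)
        out, _ = self.attn(seq, seq, seq)                     # self-attention over all tokens
        return self.head(out[:, 0]).squeeze(-1)               # score read from the score token

# Usage: scores = GramScoreHead(num_channels=256)(torch.randn(2, 256, 32, 32))
```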

19 pages, 4425 KB  
Technical Note
CM-YOLO: Typical Object Detection Method in Remote Sensing Cloud and Mist Scene Images
by Jianming Hu, Yangyu Wei, Wenbin Chen, Xiyang Zhi and Wei Zhang
Remote Sens. 2025, 17(1), 125; https://doi.org/10.3390/rs17010125 - 2 Jan 2025
Cited by 15 | Viewed by 1718
Abstract
Remote sensing target detection technology in cloud and mist scenes is of great significance for applications such as marine safety monitoring and airport traffic management. However, the degradation and loss of features caused by the obstruction of cloud and mist elements still pose a challenging problem for this technology. To enhance object detection performance in adverse weather conditions, we propose a novel target detection method named CM-YOLO that integrates background suppression and semantic context mining, which can achieve accurate detection of targets under different cloud and mist conditions. Specifically, a component-decoupling-based background suppression (CDBS) module is proposed, which extracts cloud and mist components based on characteristic priors and effectively enhances the contrast between the target and the environmental background through a background subtraction strategy. Moreover, a local-global semantic joint mining (LGSJM) module is utilized, which combines convolutional neural networks (CNNs) and hierarchical selective attention to comprehensively mine global and local semantics, achieving target feature enhancement. Finally, the experimental results on multiple public datasets indicate that the proposed method realizes state-of-the-art performance compared to six advanced detectors, with mAP, precision, and recall indicators reaching 85.5%, 89.4%, and 77.9%, respectively. Full article
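
A rough sketch of the background-subtraction idea behind the CDBS module described above: a large-scale Gaussian low-pass estimate merely stands in for the paper's prior-based cloud/mist component, so treat this as a placeholder rather than the actual decoupling step.

```python
# Placeholder for the background-suppression idea: subtract a large-scale
# low-pass estimate of the cloud/mist component to raise target contrast.
# The paper's CDBS uses characteristic priors; the Gaussian here is a stand-in.
import numpy as np
from scipy.ndimage import gaussian_filter

def suppress_background(img: np.ndarray, sigma: float = 25.0) -> np.ndarray:
    """img: single-channel image scaled to [0, 1]; returns a contrast-enhanced image."""
    haze = gaussian_filter(img, sigma=sigma)    # smooth, large-scale component ~ cloud/mist
    residual = img - haze                       # background-subtracted detail
    residual -= residual.min()                  # rescale back into [0, 1]
    return residual / (residual.max() + 1e-8)

# Usage: enhanced = suppress_background(gray_frame)
```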

31 pages, 9112 KB  
Article
Intelligent Target Detection in Synthetic Aperture Radar Images Based on Multi-Level Fusion
by Qiaoyu Liu, Ziqi Ye, Chenxiang Zhu, Dongxu Ouyang, Dandan Gu and Haipeng Wang
Remote Sens. 2025, 17(1), 112; https://doi.org/10.3390/rs17010112 - 1 Jan 2025
Viewed by 1734
Abstract
Due to the unique imaging mechanism of SAR, targets in SAR images present complex scattering characteristics. As a result, intelligent target detection in SAR images has been facing many challenges, which mainly lie in the insufficient exploitation of target characteristics, inefficient characterization of scattering features, and inadequate reliability of decision models. In this respect, we propose an intelligent target detection method based on multi-level fusion, where pixel-level, feature-level, and decision-level fusions are designed for enhancing scattering feature mining and improving the reliability of decision making. The pixel-level fusion method, which fuses the channels of the original images with their scattering-feature-enhanced counterparts, represents an initial exploration of image fusion. Two feature-level fusion methods are conducted using respective migratable fusion blocks, namely DBAM and FDRM, presenting higher-level fusion. Decision-level fusion based on Dempster-Shafer theory (DST) can not only consolidate complementary strengths of different models but also incorporate human or expert involvement in proposition setting to guide effective decision making. This represents the highest-level fusion, integrating results through proposition setting and statistical analysis. Experiments with different fusion methods integrating different features were conducted on typical target detection datasets. As shown in the results, the proposed method increases the mAP by 16.52%, 7.1%, and 3.19% in ship, aircraft, and vehicle target detection, demonstrating high effectiveness and robustness. Full article
(This article belongs to the Special Issue SAR-Based Signal Processing and Target Recognition (Second Edition))
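
Assuming DST here denotes Dempster-Shafer theory, as the proposition-based decision fusion suggests, the snippet below shows a minimal Dempster combination of two detectors' belief masses; it illustrates the fusion rule only, not the paper's proposition setting.

```python
# Minimal Dempster-Shafer combination of two detectors' belief masses over the
# propositions {target}, {clutter}, and their union (ignorance). Illustrative only.
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """m1, m2 map frozenset propositions to masses summing to 1; returns the fused masses."""
    fused, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                     # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

# Example: two detectors judging one candidate region.
T, C = frozenset({"target"}), frozenset({"clutter"})
U = T | C                                           # ignorance: either hypothesis
fused = dempster_combine({T: 0.7, C: 0.1, U: 0.2}, {T: 0.6, C: 0.2, U: 0.2})
# fused[T] is approximately 0.85: the combined evidence strengthens the "target" proposition.
```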

20 pages, 21698 KB  
Article
An Enhanced Aircraft Carrier Runway Detection Method Based on Image Dehazing
by Chenliang Li, Yunyang Wang, Yan Zhao, Cheng Yuan, Ruien Mao and Pin Lyu
Appl. Sci. 2024, 14(13), 5464; https://doi.org/10.3390/app14135464 - 24 Jun 2024
Cited by 1 | Viewed by 1306
Abstract
Carrier-based Unmanned Aerial Vehicle (CUAV) landing is an extremely critical link in the overall chain of CUAV operations on ships. Vision-based landing location methods have advantages such as low cost and high accuracy. However, when an aircraft carrier is at sea, it may encounter complex weather conditions such as haze, which could lead to vision-based landing failures. This paper proposes a runway line recognition and localization method based on haze removal enhancement to solve this problem. Firstly, a haze removal algorithm using a multi-mechanism, multi-architecture network model is introduced. Compared with traditional algorithms, the proposed model not only consumes less GPU memory but also achieves superior image restoration results. Based on this, we employed the random sample consensus method to reduce the error in runway line localization. Additionally, extensive experiments conducted in the Airsim simulation environment have shown that our pipeline effectively addresses the issue of decreased detection accuracy of runway line detection algorithms in hazy maritime conditions, improving the runway line localization accuracy by approximately 85%. Full article
(This article belongs to the Collection Advances in Automation and Robotics)
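
The random sample consensus step mentioned above can be illustrated with a generic 2-D RANSAC line fit over candidate runway-line edge points; the thresholds and the edge-point extraction are placeholders, not the authors' settings.

```python
# Generic 2-D RANSAC line fit (assumed thresholds), as used to localize the
# runway line from edge points in the dehazed image.
import numpy as np

def ransac_line(points: np.ndarray, iters: int = 200, thresh: float = 2.0, seed: int = 0):
    """points: (N, 2) array of (x, y); returns ((a, b, c) of ax + by + c = 0, inlier mask)."""
    rng = np.random.default_rng(seed)
    best_line, best_inliers = None, None
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        a, b = y2 - y1, x1 - x2                      # normal of the line through the sampled pair
        c = -(a * x1 + b * y1)
        norm = np.hypot(a, b)
        if norm < 1e-9:                              # degenerate pair, skip
            continue
        dist = np.abs(points @ np.array([a, b]) + c) / norm
        inliers = dist < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_line, best_inliers = (a / norm, b / norm, c / norm), inliers
    return best_line, best_inliers

# Usage: line, mask = ransac_line(edge_points)   # edge_points extracted from the dehazed frame
```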

25 pages, 13923 KB  
Article
CCDN-DETR: A Detection Transformer Based on Constrained Contrast Denoising for Multi-Class Synthetic Aperture Radar Object Detection
by Lei Zhang, Jiachun Zheng, Chaopeng Li, Zhiping Xu, Jiawen Yang, Qiuxin Wei and Xinyi Wu
Sensors 2024, 24(6), 1793; https://doi.org/10.3390/s24061793 - 11 Mar 2024
Cited by 7 | Viewed by 3563
Abstract
The effectiveness of the SAR object detection technique based on Convolutional Neural Networks (CNNs) has been widely proven, and it is increasingly used in the recognition of ship targets. Recently, efforts have been made to integrate transformer structures into SAR detectors to achieve improved target localization. However, existing methods rarely design the transformer itself as a detector, failing to fully leverage the long-range modeling advantages of self-attention. Furthermore, there has been limited research into multi-class SAR target detection. To address these limitations, this study proposes a SAR detector named CCDN-DETR, which builds upon the framework of the detection transformer (DETR). To adapt to the multiscale characteristics of SAR data, cross-scale encoders were introduced to facilitate comprehensive information modeling and fusion across different scales. Simultaneously, we optimized the query selection scheme for the input decoder layers, employing IOU loss to assist in initializing object queries more effectively. Additionally, we introduced constrained contrastive denoising training at the decoder layers to enhance the model’s convergence speed and improve the detection of different categories of SAR targets. In the benchmark evaluation on a joint dataset composed of SSDD, HRSID, and SAR-AIRcraft datasets, CCDN-DETR achieves a mean Average Precision (mAP) of 91.9%. Furthermore, it demonstrates significant competitiveness with 83.7% mAP on the multi-class MSAR dataset compared to CNN-based models. Full article
(This article belongs to the Special Issue Target Detection and Classification Based on SAR)
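
For reference, the IoU quantity behind the query-initialization loss can be computed as below; this shows the metric only, assuming corner-format boxes, and is not CCDN-DETR's full selection scheme.

```python
# Pairwise IoU for corner-format boxes, the quantity behind the IoU loss used to
# assist object-query initialization; not CCDN-DETR's full selection scheme.
import torch

def box_iou(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """a: (N, 4), b: (M, 4) boxes as (x1, y1, x2, y2); returns (N, M) pairwise IoU."""
    area_a = (a[:, 2] - a[:, 0]).clamp(min=0) * (a[:, 3] - a[:, 1]).clamp(min=0)
    area_b = (b[:, 2] - b[:, 0]).clamp(min=0) * (b[:, 3] - b[:, 1]).clamp(min=0)
    lt = torch.max(a[:, None, :2], b[None, :, :2])   # intersection top-left corners
    rb = torch.min(a[:, None, 2:], b[None, :, 2:])   # intersection bottom-right corners
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area_a[:, None] + area_b[None, :] - inter + 1e-7)

# For matched prediction/target pairs, an IoU loss is then 1 - box_iou(pred, gt).diagonal()
```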

15 pages, 3315 KB  
Article
Research into a Marine Helicopter Traction System and Its Dynamic Energy Consumption Characteristics
by Tuo Jia, Tucun Shao, Qian Liu, Pengcheng Yang, Zhinuo Li and Heng Zhang
Appl. Sci. 2023, 13(22), 12493; https://doi.org/10.3390/app132212493 - 19 Nov 2023
Cited by 1 | Viewed by 1773
Abstract
As countries attach great importance to the ocean-going navigation capability of ships, the energy consumption of shipborne equipment has attracted much attention. Although energy consumption analysis is a guiding method to improve energy efficiency, it often ignores the dynamic characteristics of the system. However, the traditional dynamic analysis method hardly considers the energy consumption characteristics of the system. In this paper, a new type of electric-driven helicopter traction system is designed based on the ASIST system. Combined with power bond graph theory, a system dynamic modeling method that considers both dynamic and energy consumption characteristics is proposed, and simulation analysis is carried out. The results indicate that the traction system designed in this study displays high responsiveness, robust steady-state characteristics, and superior energy efficiency. When it engages with the shipborne helicopter, it swiftly transitions to a stable state within 0.2 s while preserving an efficient speed tracking effect under substantial load force, and no significant fluctuations are detected in the motor rotation rate or the helicopter movement velocity. Moreover, it achieves a high energy utilization rate of 84% per working cycle. Simultaneously, the proposed modeling methodology is validated as sound and effective, particularly apt for the dynamic and power consumption analysis of complex marine machinery systems, guiding the high-efficiency design of the transmission system. Full article

17 pages, 19458 KB  
Technical Note
CamoNet: A Target Camouflage Network for Remote Sensing Images Based on Adversarial Attack
by Yue Zhou, Wanghan Jiang, Xue Jiang, Lin Chen and Xingzhao Liu
Remote Sens. 2023, 15(21), 5131; https://doi.org/10.3390/rs15215131 - 27 Oct 2023
Cited by 6 | Viewed by 2638
Abstract
Object detection algorithms based on convolutional neural networks (CNNs) have achieved remarkable success in remote sensing images (RSIs), such as aircraft and ship detection, which play a vital role in military and civilian fields. However, CNNs are fragile and can be easily fooled. There have been a series of studies on adversarial attacks for image classification in RSIs. However, the existing gradient attack algorithms designed for classification cannot achieve excellent performance when directly applied to object detection, which is an essential task in RSI understanding. Although we can find some works on adversarial attacks for object detection, they are weak in concealment and easily detected by the naked eye. To handle these problems, we propose a target camouflage network for object detection in RSIs, called CamoNet, to deceive CNN-based detectors by adding imperceptible perturbation to the image. In addition, we propose a detection space initialization strategy to maximize the diversity in the detector’s outputs among the generated samples. It can enhance the performance of the gradient attack algorithms in the object detection task. Moreover, a key pixel distillation module is employed, which can further reduce the modified pixels without weakening the concealment effect. Compared with several of the most advanced adversarial attacks, the proposed attack has advantages in terms of both peak signal-to-noise ratio (PSNR) and attack success rate. The transferability of the proposed target camouflage network is evaluated on three dominant detection algorithms (RetinaNet, Faster R-CNN, and RTMDet) with two commonly used remote sensing datasets (i.e., DOTA and DIOR). Full article
(This article belongs to the Special Issue Deep Learning in Optical Satellite Images)
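
As a hedged illustration of adding an imperceptible gradient-based perturbation, the snippet below applies a single signed-gradient step against a detector loss; CamoNet's detection space initialization and key pixel distillation are not reproduced here.

```python
# Generic single-step signed-gradient perturbation (FGSM-style) against a
# detector loss; illustrative only, not CamoNet's full attack.
import torch

def fgsm_perturb(image: torch.Tensor, loss_fn, epsilon: float = 2.0 / 255.0) -> torch.Tensor:
    """image: (1, 3, H, W) in [0, 1]; loss_fn maps an image to a scalar, e.g. the
    summed detection confidence that the attack tries to drive down."""
    adv = image.clone().detach().requires_grad_(True)
    loss = loss_fn(adv)
    loss.backward()
    with torch.no_grad():
        adv = adv - epsilon * adv.grad.sign()   # step that lowers the detection confidence
        adv = adv.clamp(0.0, 1.0)               # keep the perturbed image valid
    return adv.detach()
```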

17 pages, 13578 KB  
Article
MODAN: Multifocal Object Detection Associative Network for Maritime Horizon Surveillance
by Sungan Yoon, Ahmad Jalal and Jeongho Cho
J. Mar. Sci. Eng. 2023, 11(10), 1890; https://doi.org/10.3390/jmse11101890 - 28 Sep 2023
Cited by 28 | Viewed by 1879
Abstract
In maritime surveillance systems, object detection plays a crucial role in ensuring the security of nearby waters by tracking the movement of various objects found at sea, such as ships and aircraft, detecting illegal activities, and preemptively countering or predicting potential risks. Using vision sensors such as cameras to monitor the sea can help to identify the shape, size, and color of objects, enabling the precise analysis of maritime situations. Additionally, vision sensors can monitor or track small ships that may escape radar detection. However, objects located at considerable distances from vision sensors have low resolution and are small in size, rendering their detection difficult. This paper proposes a multifocal object detection associative network (MODAN) to overcome these vulnerabilities and provide stable maritime surveillance. First, it searches for the horizon using color quantization based on K-means; then, it selects and partitions the region of interest (ROI) around the horizon using the ROI selector. The original image and the ROI image, converted to high resolution through the Super-Resolution Convolutional Neural Network (SRCNN), are then passed to the near-field and far-field detectors, respectively, for object detection. Weighted box fusion removes duplicate detected objects and estimates the optimal object. The proposed network is more stable and efficient in detecting distant objects than existing single-object detection models. Through performance evaluations, the proposed network exhibited an average precision surpassing that of existing single-object detection models by more than 7%, and the false detection rate was reduced by 59% compared to similar multifocal-based state-of-the-art detection methods. Full article
(This article belongs to the Section Ocean Engineering)
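
The K-means colour-quantization horizon search can be sketched as follows; the row-transition heuristic is an assumption for illustration and does not reproduce MODAN's ROI selector.

```python
# Illustrative horizon search via K-means colour quantization (assumed heuristic,
# not MODAN's exact ROI selector).
import numpy as np
from sklearn.cluster import KMeans

def estimate_horizon_row(img: np.ndarray, k: int = 3) -> int:
    """img: (H, W, 3) RGB array; returns the row index of the sharpest colour transition."""
    h, w, _ = img.shape
    km = KMeans(n_clusters=k, n_init=10, random_state=0)
    labels = km.fit_predict(img.reshape(-1, 3).astype(float)).reshape(h, w)
    # Per-row fraction of pixels in each colour cluster.
    row_hist = np.stack([(labels == c).mean(axis=1) for c in range(k)], axis=1)
    change = np.abs(np.diff(row_hist, axis=0)).sum(axis=1)   # row-to-row change in composition
    return int(change.argmax()) + 1                          # candidate horizon row
```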

17 pages, 7331 KB  
Article
Research on Spaceborne Target Detection Based on Yolov5 and Image Compression
by Qi Shi, Daheng Wang, Wen Chen, Jinpei Yu, Weiting Zhou, Jun Zou and Guangzu Liu
Future Internet 2023, 15(3), 114; https://doi.org/10.3390/fi15030114 - 19 Mar 2023
Cited by 3 | Viewed by 2068
Abstract
Satellite image compression technology plays an important role in the development of space science. As optical sensors on satellites become more sophisticated, high-resolution and high-fidelity satellite images will occupy more storage. This raises the required transmission bandwidth and transmission rate in the satellite–ground data transmission system. In order to reduce the pressure from image transmission on the data transmission system, a spaceborne target detection system based on Yolov5 and a satellite image compression transmission system is proposed in this paper. It can reduce the pressure on the data transmission system by detecting the object of interest and deciding whether to transmit. An improved Yolov5 network is proposed to detect the small target on the high-resolution satellite image. Simulation results show that the improved Yolov5 network proposed in this paper can detect specific targets in real satellite images, including aircraft, ships, etc. At the same time, image compression has little effect on target detection, so detection complexity can be effectively reduced and detection speed can be improved by detecting the compressed images. Full article
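
The detect-then-decide-to-transmit logic described above reduces, in outline, to the following; the detector interface and class list are stand-ins, not the paper's improved Yolov5.

```python
# Transmit-or-discard logic: run the on-board detector on the (compressed) frame
# and only downlink frames containing objects of interest. Detector is a stand-in.
from typing import Callable, List, Tuple

Detection = Tuple[str, float]          # (class_name, confidence)

def should_transmit(frame,
                    detector: Callable[[object], List[Detection]],
                    classes_of_interest: frozenset = frozenset({"aircraft", "ship"}),
                    conf_thresh: float = 0.5) -> bool:
    """Downlink only if the frame contains an object of interest above the threshold."""
    detections = detector(frame)
    return any(cls in classes_of_interest and conf >= conf_thresh
               for cls, conf in detections)

# Usage: if should_transmit(compressed_frame, onboard_detector): downlink(compressed_frame)
```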

26 pages, 19634 KB  
Review
A Review of Laser Ultrasonic Lamb Wave Damage Detection Methods for Thin-Walled Structures
by Shanpu Zheng, Ying Luo, Chenguang Xu and Guidong Xu
Sensors 2023, 23(6), 3183; https://doi.org/10.3390/s23063183 - 16 Mar 2023
Cited by 24 | Viewed by 5321
Abstract
Thin-walled structures, like aircraft skins and ship shells, are often several meters in size but only a few millimeters thick. By utilizing the laser ultrasonic Lamb wave detection method (LU-LDM), signals can be detected over long distances without physical contact. Additionally, this technology offers excellent flexibility in designing the measurement point distribution. The characteristics of LU-LDM are first analyzed in this review, specifically in terms of laser ultrasound and hardware configuration. Next, the methods are categorized based on three criteria: the quantity of collected wavefield data, the spectral domain, and the distribution of measurement points. The advantages and disadvantages of multiple methods are compared, and the suitable conditions for each method are summarized. Thirdly, we summarize four combined methods that balance detection efficiency and accuracy. Finally, several future development trends are suggested, and the current gaps and shortcomings in LU-LDM are highlighted. This review builds a comprehensive framework for LU-LDM for the first time, which is expected to serve as a technical reference for applying this technology in large, thin-walled structures. Full article
(This article belongs to the Special Issue Ultrasonic Imaging and Sensors II)

24 pages, 5822 KB  
Article
ANLPT: Self-Adaptive and Non-Local Patch-Tensor Model for Infrared Small Target Detection
by Zhao Zhang, Cheng Ding, Zhisheng Gao and Chunzhi Xie
Remote Sens. 2023, 15(4), 1021; https://doi.org/10.3390/rs15041021 - 12 Feb 2023
Cited by 21 | Viewed by 2685
Abstract
Infrared small target detection is widely used for early warning, aircraft monitoring, ship monitoring, and so on, which requires the small target and its background to be represented and modeled effectively to achieve their complete separation. Low-rank sparse decomposition based on the structural features of infrared images has attracted much attention among many algorithms because of its good interpretability. Based on our study, we found some shortcomings in existing baseline methods, such as redundancy in constructing tensors and fixed compromising factors. A self-adaptive low-rank sparse tensor decomposition model for infrared dim small target detection is proposed in this paper. In this model, the entropy of image blocks is used for fast matching of non-local similar blocks to construct a better sparse tensor for small targets. An adaptive strategy for low-rank sparse tensor decomposition is proposed for different background environments, which adaptively determines the weight coefficient to achieve effective separation of the background and small targets in different background environments. Tensor robust principal component analysis (TRPCA) was applied to achieve low-rank sparse tensor decomposition to reconstruct small targets and their backgrounds separately. Extensive experiments on various types of data sets show that the proposed method is competitive. Full article
(This article belongs to the Special Issue Computer Vision and Machine Learning Application on Earth Observation)
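
A minimal sketch of the image-block entropy used for fast matching of non-local similar blocks; the bin count and similarity criterion are illustrative assumptions, not the paper's settings.

```python
# Block-entropy measure for fast matching of non-local similar blocks
# (illustrative bin count and criterion, not the paper's settings).
import numpy as np

def block_entropy(block: np.ndarray, bins: int = 32) -> float:
    """block: 2-D array of intensities; returns the Shannon entropy of its histogram."""
    hist, _ = np.histogram(block, bins=bins)
    p = hist.astype(float) / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def most_similar_blocks(ref: np.ndarray, candidates: list, top_k: int = 5) -> list:
    """Return indices of the top_k candidate blocks closest in entropy to the reference block."""
    e_ref = block_entropy(ref)
    diffs = [abs(block_entropy(b) - e_ref) for b in candidates]
    return list(np.argsort(diffs)[:top_k])
```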

16 pages, 5575 KB  
Article
An Improved Method for Ship Target Detection Based on YOLOv4
by Zexian Huang, Xiaonan Jiang, Fanlu Wu, Yao Fu, Yu Zhang, Tianjiao Fu and Junyan Pei
Appl. Sci. 2023, 13(3), 1302; https://doi.org/10.3390/app13031302 - 18 Jan 2023
Cited by 17 | Viewed by 3000
Abstract
The resolution of remote sensing images has increased with the maturation of satellite technology. Ship detection technology based on remote sensing images makes it possible to monitor large and distant sea areas, which can greatly enrich the monitoring means of maritime departments. In this paper, we conducted research on small target detection and resistance to complex background interference. First, a ship dataset with four types of targets (aircraft carriers, warships, merchant ships and submarines) is constructed, and experiments are conducted on the dataset using the object detection algorithm YOLOv4. The Kmeans++ clustering algorithm is used for a priori frame selection, and transfer learning is used to enhance the detection performance of YOLOv4. Second, the model is improved to address the problems of missed detection of small ships and difficulty in resisting background interference: the RFB_s (Receptive Field Block) with dilated convolution is introduced instead of the SPP (Spatial Pyramid Pooling) to enlarge the receptive field and improve the detection of small targets; the attention mechanism CBAM (Convolutional Block Attention Module) is added to adjust the weights of different features to highlight salient features useful for the ship detection task, which improves the detection performance for small ships and the model's ability to resist complex backgrounds. Compared to YOLOv4, our proposed model achieved a large improvement in mAP (mean Average Precision), from 77.66% to 91.40%. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
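
A priori frame (anchor) selection by K-means++ clustering of ground-truth box sizes can be sketched with scikit-learn, whose KMeans uses k-means++ initialization by default; note that YOLO-style pipelines often prefer a 1-IoU distance, which the Euclidean clustering here only approximates.

```python
# Anchor (a priori frame) selection by clustering ground-truth box sizes;
# scikit-learn's KMeans uses k-means++ initialization. YOLO-style pipelines often
# use a 1-IoU distance instead, which this Euclidean clustering only approximates.
import numpy as np
from sklearn.cluster import KMeans

def anchor_boxes(wh: np.ndarray, n_anchors: int = 9) -> np.ndarray:
    """wh: (N, 2) array of ground-truth box widths and heights; returns sorted anchors."""
    km = KMeans(n_clusters=n_anchors, init="k-means++", n_init=10, random_state=0).fit(wh)
    centers = km.cluster_centers_
    return centers[np.argsort(centers.prod(axis=1))]   # sort anchors by area

# Usage: anchors = anchor_boxes(dataset_box_wh)   # feed these priors to the detector
```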

63 pages, 40347 KB  
Article
Societal Applications of HF Skywave Radar
by Stuart Anderson
Remote Sens. 2022, 14(24), 6287; https://doi.org/10.3390/rs14246287 - 12 Dec 2022
Cited by 1 | Viewed by 3179
Abstract
After exploratory research in the 1950s, HF skywave ‘over-the-horizon’ radars (OTHR) were developed as operating systems in the 1960s for defence missions, notably the long-range detection of ballistic missiles, aircraft, and ships. The potential for a variety of non-defence applications soon became apparent, but the size, cost, siting requirements, and tasking priority hindered the implementation of these societal roles. A sister technology—HF surface wave radar (HFSWR)—evolved during the same period but, in this more compact form, the non-defence applications dominated, with hundreds of such radars presently deployed around the world, used primarily for ocean current mapping and wave measurements. In this paper, we examine the ocean monitoring capabilities of the latest generation of HF skywave radars, some shared with HFSWR, some unique to the skywave modality, and explore some new possibilities, along with selected technical details for their implementation. We apply state-of-the-art modelling and experimental data to illustrate the kinds of information that can be generated and exploited for civil, commercial, and scientific purposes. The examples treated confirm the relevance and value of this information to such diverse activities as shipping, fishing, offshore resource extraction, agriculture, communications, weather forecasting, and climate change studies. Full article

17 pages, 3622 KB  
Article
Assessment of Cracking in Masonry Structures Based on the Breakage of Ordinary Silica-Core Silica-Clad Optical Fibers
by Sergei Khotiaintsev and Volodymyr Timofeyev
Appl. Sci. 2022, 12(14), 6885; https://doi.org/10.3390/app12146885 - 7 Jul 2022
Cited by 4 | Viewed by 2750
Abstract
This paper presents a study on the suitability and accuracy of detecting structural cracks in brick masonry by exploiting the breakage of ordinary silica optical fibers bonded to its surface with an epoxy adhesive. The deformations and cracking of the masonry specimen, and the behavior of pilot optical signals transmitted through the fibers upon loading of the test specimen were observed. For the first time, reliable detection of structural cracks with a given minimum value was achieved, despite the random nature of the ultimate strength of the optical fibers. This was achieved using arrays of several optical fibers placed on the structural element. The detection of such cracks allows the degree of structural danger of buildings affected by earthquake or other destructive phenomena to be determined. The implementation of this technique is simple and cost effective. For this reason, it may have a broad application in permanent damage-detection systems in buildings in seismic zones. It may also find application in automatic systems for the detection of structural damage to the load-bearing elements of land vehicles, aircraft, and ships. Full article

16 pages, 41564 KB  
Article
Machine-Learning Approach for Automatic Detection of Wild Beluga Whales from Hand-Held Camera Pictures
by Voncarlos M. Araújo, Ankita Shukla, Clément Chion, Sébastien Gambs and Robert Michaud
Sensors 2022, 22(11), 4107; https://doi.org/10.3390/s22114107 - 28 May 2022
Cited by 8 | Viewed by 5156
Abstract
A key aspect of ocean protection consists in estimating the density of marine mammal populations within their habitat, which is usually accomplished using visual inspection and cameras from line-transect ships, small boats, and aircraft. However, marine mammal observation through vessel surveys requires significant workforce resources, including for the post-processing of pictures, and is further challenged by animal bodies being partially hidden underwater, small-scale object size, occlusion among objects, and distracter objects (e.g., waves, sun glare, etc.). To relieve the human expert's workload while improving the observation accuracy, we propose a novel system for automating the detection of beluga whales (Delphinapterus leucas) in the wild from pictures. Our system relies on a dataset named Beluga-5k, containing more than 5.5 thousand pictures of belugas. First, to improve the dataset's annotation, we have designed a semi-manual strategy for efficiently annotating candidates in images with single (i.e., one beluga) and multiple (i.e., two or more belugas) candidate subjects. Second, we have studied the performance of three off-the-shelf object-detection algorithms, namely, Mask-RCNN, SSD, and YOLO v3-Tiny, on the Beluga-5k dataset. Afterward, we have set YOLO v3-Tiny as the detector, integrating single- and multiple-individual images into the model training. Our fine-tuned CNN-backbone detector trained with semi-manual annotations is able to detect belugas despite the presence of distracter objects with high accuracy (i.e., 97.05 mAP@0.5). Finally, our proposed method is able to detect overlapped/occluded multiple individuals in images (beluga whales that swim in groups). For instance, it is able to detect 688 out of 706 belugas encountered in 200 multiple-individual images, achieving 98.29% precision and 99.14% recall. Full article
(This article belongs to the Special Issue Sensors and Artificial Intelligence for Wildlife Conservation)
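
For reference, the precision and recall figures quoted in such detection results are computed from matched detections as below; the counts in the example are illustrative placeholders, not re-derived from the paper.

```python
# Illustrative only: how precision and recall are computed from matched detections.
def precision_recall(tp: int, fp: int, fn: int) -> tuple:
    precision = tp / (tp + fp) if tp + fp else 0.0   # fraction of detections that are true belugas
    recall = tp / (tp + fn) if tp + fn else 0.0      # fraction of true belugas that were detected
    return precision, recall

# Example with placeholder counts: precision_recall(tp=95, fp=5, fn=3) -> (0.95, ~0.969)
```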
