Search Results (9)

Search Parameters:
Authors = Paheding Sidike; ORCID = 0000-0003-4712-9672

17 pages, 7188 KiB  
Article
Understanding the Influence of Image Enhancement on Underwater Object Detection: A Quantitative and Qualitative Study
by Ashraf Saleem, Ali Awad, Sidike Paheding, Evan Lucas, Timothy C. Havens and Peter C. Esselman
Remote Sens. 2025, 17(2), 185; https://doi.org/10.3390/rs17020185 - 7 Jan 2025
Cited by 3 | Viewed by 1908
Abstract
Underwater image enhancement is often perceived as detrimental to object detection. We propose a novel analysis of the interactions between enhancement and detection, elaborating on the potential of enhancement to improve detection. In particular, we evaluate object detection performance for each individual image rather than across the entire set, allowing a direct comparison of each image before and after enhancement. This approach enables queries that identify which enhanced images outperform or underperform their original counterparts. To accomplish this, we first produce enhanced sets of the original images using recent image enhancement models. Each enhanced set is then divided into two groups: (1) images that outperform or match the performance of the original images and (2) images that underperform. Subsequently, we create mixed original-enhanced sets by replacing underperforming enhanced images with their corresponding original images. Next, we conduct a detailed analysis by evaluating all generated groups for quality and detection performance attributes. Finally, we perform an overlap analysis between the generated enhanced sets to identify cases where the enhanced images of different enhancement algorithms unanimously outperform, equally perform, or underperform the original images. Our analysis reveals that, when evaluated individually, most enhanced images achieve equal or superior performance compared to their original counterparts. The per-image evaluation uncovers variations in detection performance that whole-set evaluation conceals: only a small percentage of enhanced images is responsible for the overall negative impact on detection. We also find that over-enhancement may degrade object detection performance. Lastly, we note that enhanced images reveal hidden objects that were not annotated due to the low visibility of the original images.
(This article belongs to the Special Issue Deep Learning for Remote Sensing Image Enhancement)
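
The grouping and set-mixing procedure described above amounts to bookkeeping over per-image detection scores. The sketch below shows one plausible shape of it; the function names and the per-image score dictionaries are illustrative assumptions, not the paper's code.

```python
# Hypothetical sketch of the per-image grouping and mixed-set construction;
# per-image detection scores (e.g., per-image AP) are assumed precomputed.

def split_by_gain(original_ap: dict, enhanced_ap: dict):
    """Group enhanced images by whether they match/beat or trail the originals."""
    outperform, underperform = [], []
    for image_id, ap_orig in original_ap.items():
        if enhanced_ap[image_id] >= ap_orig:
            outperform.append(image_id)
        else:
            underperform.append(image_id)
    return outperform, underperform

def build_mixed_set(original_imgs: dict, enhanced_imgs: dict, underperform: list):
    """Replace underperforming enhanced images with their original counterparts."""
    trailing = set(underperform)
    return {
        image_id: original_imgs[image_id] if image_id in trailing
        else enhanced_imgs[image_id]
        for image_id in original_imgs
    }
```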

19 pages, 4171 KiB  
Article
Deep-Learning-Incorporated Augmented Reality Application for Engineering Lab Training
by John Estrada, Sidike Paheding, Xiaoli Yang and Quamar Niyaz
Appl. Sci. 2022, 12(10), 5159; https://doi.org/10.3390/app12105159 - 20 May 2022
Cited by 26 | Viewed by 9463
Abstract
Deep learning (DL) algorithms have achieved significantly high performance in object detection tasks. At the same time, augmented reality (AR) techniques are transforming the ways we work and connect with people. With the increasing popularity of online and hybrid learning, we propose a new framework for improving students’ learning experiences with electrical engineering lab equipment by incorporating these technologies. The DL-powered automatic object detection component integrated into the AR application is designed to recognize equipment such as multimeters, oscilloscopes, wave generators, and power supplies. A deep neural network model, MobileNet-SSD v2, is implemented for equipment detection using TensorFlow’s object detection API. When a piece of equipment is detected, the corresponding AR-based tutorial is displayed on the screen. The mean average precision (mAP) of the developed equipment detection model is 81.4%, and its average recall is 85.3%. Furthermore, to demonstrate a practical application of the proposed framework, we develop a multimeter tutorial in which virtual models are superimposed on real multimeters. The tutorial also includes images and web links to help users learn more effectively. The Unity3D game engine is used as the primary development tool to integrate the DL and AR frameworks and create immersive scenarios. The proposed framework can serve as a foundation for AR and machine-learning-based frameworks for industrial and educational training.
(This article belongs to the Special Issue Application of Artificial Intelligence, Deep Neural Networks)
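
As a rough illustration of the detect-then-show-tutorial flow, the sketch below assumes the MobileNet-SSD v2 model has been exported as a TensorFlow SavedModel; the export path, label map, and tutorial lookup are placeholders of ours, not artifacts from the paper.

```python
# Minimal sketch: run a TF object-detection SavedModel on a camera frame and
# return recognized equipment names; paths and labels below are assumptions.
import numpy as np
import tensorflow as tf

LABELS = {1: "multimeter", 2: "oscilloscope", 3: "wave generator", 4: "power supply"}
TUTORIALS = {"multimeter": "tutorials/multimeter_ar_scene"}  # hypothetical paths

detector = tf.saved_model.load("exported_model/saved_model")  # assumed export path

def detect_equipment(frame: np.ndarray, score_threshold: float = 0.5):
    """Run the detector on one RGB frame (H, W, 3) and return equipment names."""
    batch = tf.convert_to_tensor(frame[np.newaxis, ...], dtype=tf.uint8)
    outputs = detector(batch)
    classes = outputs["detection_classes"][0].numpy().astype(int)
    scores = outputs["detection_scores"][0].numpy()
    return [LABELS[c] for c, s in zip(classes, scores)
            if s >= score_threshold and c in LABELS]

# In the AR app loop, a detection would trigger the matching tutorial overlay:
# for name in detect_equipment(camera_frame):
#     if name in TUTORIALS:
#         load_ar_scene(TUTORIALS[name])  # hypothetical Unity-side hook
```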

30 pages, 12201 KiB  
Article
Leveraging Very-High Spatial Resolution Hyperspectral and Thermal UAV Imageries for Characterizing Diurnal Indicators of Grapevine Physiology
by Matthew Maimaitiyiming, Vasit Sagan, Paheding Sidike, Maitiniyazi Maimaitijiang, Allison J. Miller and Misha Kwasniewski
Remote Sens. 2020, 12(19), 3216; https://doi.org/10.3390/rs12193216 - 2 Oct 2020
Cited by 28 | Viewed by 7721
Abstract
Efficient and accurate methods to monitor crop physiological responses help growers better understand crop physiology and improve crop productivity. In recent years, developments in unmanned aerial vehicles (UAVs) and sensor technology have enabled image acquisition at very high spectral, spatial, and temporal resolutions. However, the potential applications and limitations of very-high-resolution (VHR) hyperspectral and thermal UAV imagery for characterizing plant diurnal physiology remain largely unknown due to issues related to shadow and canopy heterogeneity. In this study, we propose a canopy zone-weighting (CZW) method to leverage the potential of VHR (≤9 cm) hyperspectral and thermal UAV imagery in estimating physiological indicators, such as stomatal conductance (Gs) and steady-state fluorescence (Fs). Diurnal flights and concurrent in-situ measurements were conducted during the 2017 and 2018 grapevine growing seasons in a vineyard in Missouri, USA. We used a neural network classifier and the Canny edge detection method to extract pure vine canopy from the hyperspectral and thermal images, respectively. The vine canopy was then segmented into three zones (sunlit, nadir, and shaded) using K-means clustering based on canopy shadow fraction and canopy temperature. Common reflectance-based spectral indices, sun-induced chlorophyll fluorescence (SIF), and a simplified canopy water stress index (siCWSI) were computed as image retrievals. Using the coefficient of determination (R2) between the image retrievals from the three canopy zones and the in-situ measurements as a weight factor, weighted image retrievals were calculated and their correlation with in-situ measurements was explored. The most frequent and highest correlations with Gs and Fs were found for the CZW-based photochemical reflectance index (PRI), SIF, and siCWSI (PRICZW, SIFCZW, and siCWSICZW). When all flights were combined for a given field campaign date, PRICZW, SIFCZW, and siCWSICZW significantly improved the relationship with Gs and Fs. The proposed approach takes full advantage of VHR hyperspectral and thermal UAV imagery and suggests that the CZW method is simple yet effective in estimating Gs and Fs.
(This article belongs to the Special Issue Remote and Proximal Sensing for Precision Agriculture and Viticulture)
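
A hedged sketch of the CZW weighting step follows, assuming per-zone retrievals (e.g., PRI per canopy zone) and matching in-situ measurements are already extracted; the zone and variable names are ours, not the paper's.

```python
# Illustrative canopy zone-weighting: weight each zone's retrieval by its
# R^2 against in-situ data, then combine into one weighted retrieval.
import numpy as np

def czw_retrieval(zone_retrievals: dict, in_situ: np.ndarray) -> np.ndarray:
    """Combine per-zone retrievals using R^2 weights against in-situ data."""
    weights = {}
    for zone, values in zone_retrievals.items():  # e.g., sunlit / nadir / shaded
        r = np.corrcoef(values, in_situ)[0, 1]
        weights[zone] = r ** 2  # coefficient of determination as weight
    total = sum(weights.values())
    weighted = np.zeros_like(in_situ, dtype=float)
    for zone, values in zone_retrievals.items():
        weighted += (weights[zone] / total) * np.asarray(values, dtype=float)
    return weighted

# Example with made-up numbers for three zones and five vines:
rng = np.random.default_rng(0)
zones = {"sunlit": rng.random(5), "nadir": rng.random(5), "shaded": rng.random(5)}
gs = rng.random(5)  # stand-in for stomatal conductance measurements
print(czw_retrieval(zones, gs))
```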

23 pages, 6914 KiB  
Article
Crop Monitoring Using Satellite/UAV Data Fusion and Machine Learning
by Maitiniyazi Maimaitijiang, Vasit Sagan, Paheding Sidike, Ahmad M. Daloye, Hasanjan Erkbol and Felix B. Fritschi
Remote Sens. 2020, 12(9), 1357; https://doi.org/10.3390/rs12091357 - 25 Apr 2020
Cited by 232 | Viewed by 19892
Abstract
Non-destructive crop monitoring over large areas with high efficiency is of great significance in precision agriculture and plant phenotyping, as well as in decision making with regard to grain policy and food security. The goal of this research was to assess the potential of combining canopy spectral information with canopy structure features for crop monitoring using satellite/unmanned aerial vehicle (UAV) data fusion and machine learning. WorldView-2/3 satellite data were tasked in synchronization with high-resolution RGB image collection by an inexpensive UAV over a heterogeneous soybean (Glycine max (L.) Merr.) field. Canopy spectral information (i.e., vegetation indices) was extracted from the WorldView-2/3 data, and canopy structure information (i.e., canopy height and canopy cover) was derived from the UAV RGB imagery. Canopy spectral and structure information, separately and in combination, were used to predict soybean leaf area index (LAI), aboveground biomass (AGB), and leaf nitrogen concentration (N) using partial least squares regression (PLSR), random forest regression (RFR), support vector regression (SVR), and extreme learning regression (ELR) with a newly proposed activation function. The results revealed that: (1) UAV imagery-derived high-resolution canopy structure features, namely canopy height and canopy coverage, were significant indicators for crop growth monitoring; (2) integrating satellite imagery-based canopy spectral information with UAV-derived canopy structural features using machine learning improved soybean AGB, LAI, and leaf N estimation compared with using satellite or UAV data alone; (3) adding canopy structure information to spectral features reduced the background soil effect and the asymptotic saturation issue to some extent, leading to better model performance; and (4) the ELR model with the newly proposed activation function slightly outperformed PLSR, RFR, and SVR in predicting AGB and LAI, while RFR provided the best result for N estimation. This study highlights the opportunities and limitations of satellite/UAV data fusion using machine learning in the context of crop monitoring.
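
The spectral-versus-structural-versus-fused comparison can be sketched with a standard regressor; the sketch below uses scikit-learn's random forest and random stand-in arrays (so the printed scores are meaningless; only the comparison structure matters).

```python
# Minimal sketch of the feature-fusion comparison, assuming per-plot
# vegetation indices (satellite) and canopy height/cover (UAV) are extracted.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_plots = 120
spectral = rng.random((n_plots, 5))    # e.g., five vegetation indices
structure = rng.random((n_plots, 2))   # e.g., canopy height, canopy cover
lai = rng.random(n_plots)              # stand-in for field-measured LAI

feature_sets = {
    "spectral only": spectral,
    "structure only": structure,
    "fused": np.hstack([spectral, structure]),
}
for name, X in feature_sets.items():
    r2 = cross_val_score(RandomForestRegressor(n_estimators=200, random_state=0),
                         X, lai, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {r2:.3f}")
```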

23 pages, 2764 KiB  
Article
Dual Activation Function-Based Extreme Learning Machine (ELM) for Estimating Grapevine Berry Yield and Quality
by Matthew Maimaitiyiming, Vasit Sagan, Paheding Sidike and Misha T. Kwasniewski
Remote Sens. 2019, 11(7), 740; https://doi.org/10.3390/rs11070740 - 27 Mar 2019
Cited by 57 | Viewed by 10613
Abstract
Reliable assessment of grapevine productivity is a destructive and time-consuming process. In addition, the mixed effects of grapevine water status and scion-rootstock interactions on grapevine productivity are not always linear. Despite the potential of applying remote sensing and machine learning techniques to predict plant traits, previously studied techniques for vine productivity remain limited because the complexity of the system is not adequately modeled. During the 2014 and 2015 growing seasons, hyperspectral reflectance spectra were collected with a handheld spectroradiometer in a vineyard designed to investigate the effects of irrigation level (0%, 50%, and 100%) and rootstock (1103 Paulsen, 3309 Couderc, SO4, and Chambourcin) on vine productivity. Assessing vine productivity requires measuring factors related to fruit ripeness, not just yield, as an over-cropped vine may produce high-yield but poor-quality fruit. Therefore, yield, Total Soluble Solids (TSS), Titratable Acidity (TA), and the TSS/TA ratio (maturation index, IMAD) were measured. A total of 20 vegetation indices were calculated from the hyperspectral data and used as input for predictive model calibration. The prediction performance of linear/nonlinear multiple regression methods and the Weighted Regularized Extreme Learning Machine (WRELM) was compared with that of our newly developed WRELM-TanhRe, which is based on two activation functions: hyperbolic tangent (Tanh) and rectified linear unit (ReLU). The results revealed that WRELM and WRELM-TanhRe outperformed the widely used multiple regression methods when model performance was tested with an independent validation dataset. WRELM-TanhRe produced the highest prediction accuracy for all berry yield and quality parameters (R2 of 0.522–0.682 and RMSE of 2–15%), except for TA, which was predicted best by WRELM (R2 of 0.545 and RMSE of 6%). The results demonstrate the value of combining hyperspectral remote sensing and machine learning for improving berry yield and quality prediction.
(This article belongs to the Special Issue Remote Sensing in Viticulture)
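
The dual-activation idea can be sketched as an extreme learning machine whose random hidden layer is passed through both Tanh and ReLU before a regularized least-squares readout; this is our simplification of the WRELM-TanhRe concept, not the authors' code.

```python
# Toy ELM with dual (Tanh + ReLU) hidden activations and a ridge readout.
import numpy as np

def elm_tanh_relu_fit(X, y, n_hidden=50, ridge=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    Z = X @ W + b
    H = np.hstack([np.tanh(Z), np.maximum(Z, 0)])  # dual-activation features
    # Regularized least squares for the output weights (closed form)
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(H.shape[1]), H.T @ y)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    Z = X @ W + b
    H = np.hstack([np.tanh(Z), np.maximum(Z, 0)])
    return H @ beta

# Example: fit on random stand-in data (20 vegetation indices -> yield).
X, y = np.random.rand(100, 20), np.random.rand(100)
model = elm_tanh_relu_fit(X, y)
print(elm_predict(model, X[:5]))
```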

23 pages, 3446 KiB  
Article
Urban Tree Species Classification Using a WorldView-2/3 and LiDAR Data Fusion Approach and Deep Learning
by Sean Hartling, Vasit Sagan, Paheding Sidike, Maitiniyazi Maimaitijiang and Joshua Carron
Sensors 2019, 19(6), 1284; https://doi.org/10.3390/s19061284 - 14 Mar 2019
Cited by 190 | Viewed by 12982
Abstract
Urban areas feature complex and heterogeneous land covers, which create challenges for tree species classification. The increased availability of high-spatial-resolution multispectral satellite imagery and LiDAR datasets, combined with the recent application of deep learning to object detection and scene classification in remote sensing, provides promising opportunities to map individual tree species with greater accuracy and resolution. However, knowledge gaps remain regarding the contribution of WorldView-3 SWIR bands, the very-high-resolution panchromatic (PAN) band, and LiDAR data to detailed tree species mapping. Additionally, contemporary deep learning methods are hampered by a lack of training samples and the difficulty of preparing training data. The objective of this study was to examine the potential of a novel deep learning method, the Dense Convolutional Network (DenseNet), to identify dominant individual tree species in a complex urban environment within a fused image of WorldView-2 VNIR, WorldView-3 SWIR, and LiDAR datasets. DenseNet results were compared against two classifiers popular in remote sensing image analysis, Random Forest (RF) and Support Vector Machine (SVM). Our results demonstrated that: (1) a data fusion approach beginning with VNIR and successively adding SWIR, LiDAR, and PAN bands increased the overall accuracy of the DenseNet classifier from 75.9% to 76.8%, 81.1%, and 82.6%, respectively; (2) DenseNet significantly outperformed RF and SVM in classifying eight dominant tree species, with an overall accuracy of 82.6% compared to 51.8% and 52% for the SVM and RF classifiers, respectively; and (3) DenseNet maintained superior performance over RF and SVM under restricted training sample quantities, a major limiting factor for deep learning techniques. Overall, the study shows that DenseNet is more effective for urban tree species classification, outperforming the popular RF and SVM techniques on highly complex image scenes regardless of training sample size.
(This article belongs to the Special Issue Advances in Remote Sensing of Land-Cover and Land-Use Changes)
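
As a hedged sketch of the classification setup, the snippet below fine-tunes a DenseNet head on image chips; torchvision's densenet121 stands in for the paper's DenseNet variant, and the chips and labels are random stand-ins.

```python
# Sketch: DenseNet classifier for eight tree species on fused-image chips,
# assuming chips are stacked into 3-channel (N, 3, H, W) tensors.
import torch
import torch.nn as nn
from torchvision import models

NUM_SPECIES = 8  # eight dominant species, as in the study

model = models.densenet121(weights=None)  # train from scratch, or load weights
model.classifier = nn.Linear(model.classifier.in_features, NUM_SPECIES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(chips: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of image chips."""
    optimizer.zero_grad()
    loss = criterion(model(chips), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example with random stand-in data:
print(train_step(torch.randn(4, 3, 224, 224),
                 torch.randint(0, NUM_SPECIES, (4,))))
```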

66 pages, 13905 KiB  
Review
A State-of-the-Art Survey on Deep Learning Theory and Architectures
by Md Zahangir Alom, Tarek M. Taha, Chris Yakopcic, Stefan Westberg, Paheding Sidike, Mst Shamima Nasrin, Mahmudul Hasan, Brian C. Van Essen, Abdul A. S. Awwal and Vijayan K. Asari
Electronics 2019, 8(3), 292; https://doi.org/10.3390/electronics8030292 - 5 Mar 2019
Cited by 1326 | Viewed by 98910
Abstract
In recent years, deep learning has garnered tremendous success in a variety of application domains. This field of machine learning has been growing rapidly and has been applied to most traditional application domains, as well as to new areas that present further opportunities. Different methods have been proposed based on different categories of learning, including supervised, semi-supervised, and unsupervised learning. Experimental results show state-of-the-art performance for deep learning, compared to traditional machine learning approaches, in image processing, computer vision, speech recognition, machine translation, art, medical imaging, medical information processing, robotics and control, bioinformatics, natural language processing, cybersecurity, and many other fields. This work presents a brief survey of the advances that have occurred in the area of Deep Learning (DL), starting with the Deep Neural Network (DNN). It goes on to cover the Convolutional Neural Network (CNN), the Recurrent Neural Network (RNN) including Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU), the Auto-Encoder (AE), the Deep Belief Network (DBN), the Generative Adversarial Network (GAN), and Deep Reinforcement Learning (DRL). Additionally, we discuss recent developments, such as advanced variants of these DL techniques. This survey considers most of the papers published after 2012, when the modern history of deep learning began. DL approaches that have been explored and evaluated in different application domains are also included, along with recently developed frameworks, SDKs, and benchmark datasets used for implementing and evaluating deep learning approaches. Several surveys have been published on DL using neural networks and on Reinforcement Learning (RL); however, those papers have not discussed the individual advanced techniques for training large-scale deep learning models or the recently developed generative models.
(This article belongs to the Section Computer Science & Engineering)

29 pages, 17064 KiB  
Article
UAV-Based High Resolution Thermal Imaging for Vegetation Monitoring, and Plant Phenotyping Using ICI 8640 P, FLIR Vue Pro R 640, and thermoMap Cameras
by Vasit Sagan, Maitiniyazi Maimaitijiang, Paheding Sidike, Kevin Eblimit, Kyle T. Peterson, Sean Hartling, Flavio Esposito, Kapil Khanal, Maria Newcomb, Duke Pauli, Rick Ward, Felix Fritschi, Nadia Shakoor and Todd Mockler
Remote Sens. 2019, 11(3), 330; https://doi.org/10.3390/rs11030330 - 7 Feb 2019
Cited by 212 | Viewed by 28732
Abstract
The growing popularity of Unmanned Aerial Vehicles (UAVs) in recent years, along with the decreased cost and greater accessibility of both UAVs and thermal imaging sensors, has led to the widespread use of this technology, especially in precision agriculture and plant phenotyping. Several thermal camera systems are available on the market at low cost; however, their efficacy and accuracy across applications have not been tested. In this study, three commercially available UAV thermal cameras, the ICI 8640 P-series (Infrared Cameras Inc., USA), the FLIR Vue Pro R 640 (FLIR Systems, USA), and the thermoMap (senseFly, Switzerland), were tested and evaluated for their potential in forest monitoring, vegetation stress detection, and plant phenotyping. Mounted on multi-rotor or fixed-wing systems, these cameras were simultaneously flown over experimental sites in St. Louis, Missouri (forest environment), Columbia, Missouri (plant stress detection and phenotyping), and Maricopa, Arizona (high-throughput phenotyping). Thermal imagery was calibrated using procedures based on a blackbody, a handheld thermal spot imager, ground thermal targets, and emissivity and atmospheric correction. A suite of statistical analyses, including analysis of variance (ANOVA), correlation analysis between camera temperature and plant biophysical and biochemical traits, and heritability, was used to examine the sensitivity and utility of the cameras for selected plant phenotypic traits and for detecting plant water stress. In addition, for quantitative assessment of image quality across the thermal cameras, we developed a no-reference image quality evaluator that primarily measures image focus based on the spatial relationship of pixels at different scales. Our results show that (1) UAV-based thermal imaging is a viable tool in precision agriculture and (2) the three examined cameras are comparable in their efficacy for plant phenotyping. In terms of overall accuracy against field-measured ground temperature and power to estimate plant biophysical and biochemical traits, the ICI 8640 P-series performed best, followed by the FLIR Vue Pro R 640 and thermoMap cameras. All three UAV thermal cameras provide useful temperature data for precision agriculture and plant phenotyping, with the ICI 8640 P-series presenting the best results among the three systems. Cost-wise, the FLIR Vue Pro R 640 is more affordable than the other two cameras, providing a less expensive option for a wide range of applications.
(This article belongs to the Special Issue Quantitative Remote Sensing of Land Surface Variables)
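
The paper's no-reference focus evaluator is described only at a high level; as an analogous stand-in (not the authors' method), a multi-scale variance-of-Laplacian score captures the same intuition of comparing pixel relationships at different scales.

```python
# Hedged stand-in for a no-reference focus measure: variance of the
# Laplacian averaged over a coarse-to-fine image pyramid.
import numpy as np
from scipy import ndimage

def multiscale_focus_score(image: np.ndarray, levels: int = 3) -> float:
    """Average Laplacian variance across pyramid levels (higher = sharper)."""
    scores = []
    current = image.astype(float)
    for _ in range(levels):
        scores.append(ndimage.laplace(current).var())
        current = ndimage.zoom(current, 0.5)  # move to the next, coarser scale
    return float(np.mean(scores))

# Example on synthetic data: a blurred frame should score lower.
frame = np.random.rand(128, 128)
print(multiscale_focus_score(frame),
      multiscale_focus_score(ndimage.gaussian_filter(frame, 3)))
```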

17 pages, 3710 KiB  
Article
Suspended Sediment Concentration Estimation from Landsat Imagery along the Lower Missouri and Middle Mississippi Rivers Using an Extreme Learning Machine
by Kyle T. Peterson, Vasit Sagan, Paheding Sidike, Amanda L. Cox and Megan Martinez
Remote Sens. 2018, 10(10), 1503; https://doi.org/10.3390/rs10101503 - 20 Sep 2018
Cited by 116 | Viewed by 11369
Abstract
Monitoring and quantifying suspended sediment concentration (SSC) along major fluvial systems such as the Missouri and Mississippi Rivers provides crucial information for biological processes, hydraulic infrastructure, and navigation. Traditional monitoring based on in situ measurements lacks the spatial coverage necessary for detailed analysis. This study developed a method for quantifying SSC based on Landsat imagery and corresponding SSC data obtained from United States Geological Survey monitoring stations from 1982 to the present. The methodology first uses feature fusion based on canonical correlation analysis to extract pertinent spectral information, and then trains a predictive reflectance–SSC model using a feed-forward neural network (FFNN), a cascade forward neural network (CFNN), and an extreme learning machine (ELM). The trained models are then used to predict SSC along the Missouri–Mississippi River system. Results demonstrated that the ELM-based technique achieved R2 > 0.9 for the Landsat 4–5, Landsat 7, and Landsat 8 sensors and accurately predicted both relatively high and low SSC, displaying little to no overfitting. The ELM model was then applied to Landsat images to produce quantitative SSC maps. This study demonstrates the benefit of ELM over traditional modeling methods for predicting SSC from satellite data and its potential to improve sediment transport modeling and monitoring along large fluvial systems.
(This article belongs to the Special Issue Quantitative Remote Sensing of Land Surface Variables)
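
The fusion-then-regression pipeline can be sketched with scikit-learn's canonical correlation analysis followed by an ELM-style readout; the band groupings and data below are illustrative assumptions, mirroring the described pipeline only loosely.

```python
# Sketch: CCA-based feature fusion of two reflectance band groups, then an
# ELM-style regression (random hidden layer + least-squares readout).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
bands_a = rng.random((200, 4))   # e.g., visible bands
bands_b = rng.random((200, 3))   # e.g., NIR/SWIR bands
ssc = rng.random(200)            # stand-in for station SSC (mg/L)

# Fuse the two band groups into correlated canonical components
cca = CCA(n_components=2).fit(bands_a, bands_b)
A, B = cca.transform(bands_a, bands_b)
features = np.hstack([A, B])

# ELM-style regression on the fused features
W = rng.normal(size=(features.shape[1], 40))  # random hidden weights
H = np.tanh(features @ W)                     # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, ssc, rcond=None)
pred = H @ beta
print("train R^2 approx:", 1 - np.var(ssc - pred) / np.var(ssc))
```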
