Search Results (2)

Search Parameters:
Keywords = momentum gradient descent (MGD)

23 pages, 4056 KiB  
Article
Performance Evaluation of Gradient Descent Optimizers in Estuarine Turbidity Estimation with Multilayer Perceptron and Sentinel-2 Imagery
by Naledzani Ndou and Nolonwabo Nontongana
Hydrology 2024, 11(10), 164; https://doi.org/10.3390/hydrology11100164 - 3 Oct 2024
Cited by 3 | Viewed by 1880
Abstract
Accurate monitoring of estuarine turbidity patterns is important for maintaining aquatic ecological balance and devising informed estuarine management strategies. This study aimed to improve the prediction of estuarine turbidity patterns by enhancing the performance of the multilayer perceptron (MLP) network through the introduction of stochastic gradient descent (SGD) and momentum gradient descent (MGD). To achieve this, Sentinel-2 multispectral imagery was used as the base on which the spectral radiance properties of estuarine waters were analyzed against field-measured turbidity data. The blue, green, red, red-edge, near-infrared and shortwave spectral bands were selected for establishing empirical relationships and developing models. Inverse distance weighting (IDW) spatial interpolation was employed to produce raster-based turbidity data of the study area from the field measurements. The IDW image was subsequently binarized using the bi-level thresholding technique to produce a Boolean image. Prior to empirical model development, the selected spectral bands were calibrated to turbidity using a multilayer perceptron neural network with the sigmoid activation function, trained first with the SGD optimizer and then with the MGD optimizer. The Boolean image produced from IDW interpolation served as the base on which the sigmoid activation function calibrated image pixels to turbidity. Empirical models were developed using the selected uncalibrated and calibrated spectral bands. The results from all the selected models generally revealed a stronger relationship between the red spectral channel and measured turbidity than for the other selected spectral bands. Among these models, the MLP trained with MGD produced a coefficient of determination (r2) of 0.92 on the red spectral band, followed by the MLP with MGD on the green spectral band and with SGD on the red spectral band, with r2 values of 0.75 and 0.72, respectively. The relative error of mean (REM) and r2 results revealed more accurate turbidity prediction by the sigmoid network with MGD than by the other models. Overall, this study demonstrated the prospect of deploying ensemble techniques on Sentinel-2 multispectral bands to spatially reconstruct missing estuarine turbidity data.
(This article belongs to the Section Marine Environment and Hydrology Interactions)
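The abstract above contrasts plain (stochastic) gradient descent with momentum gradient descent (MGD) as optimizers for a sigmoid-activated MLP. As a rough illustration of that distinction only, here is a minimal NumPy sketch of a one-hidden-layer sigmoid MLP regressor trained by full-batch gradient descent with and without momentum; the synthetic two-feature inputs, network size, learning rate and momentum value are illustrative assumptions, not the study's data or configuration.

```python
# Minimal sketch (not the authors' code): one-hidden-layer MLP with sigmoid
# activation on synthetic "band reflectance -> turbidity" data, comparing
# plain gradient descent with momentum gradient descent (MGD).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for two calibrated spectral-band features and a noisy
# nonlinear target playing the role of turbidity (200 samples).
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = 1.5 * X[:, :1] ** 2 + 0.5 * X[:, 1:] + 0.05 * rng.normal(size=(200, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_params(n_in=2, n_hidden=8):
    return {
        "W1": rng.normal(scale=0.5, size=(n_in, n_hidden)),
        "b1": np.zeros((1, n_hidden)),
        "W2": rng.normal(scale=0.5, size=(n_hidden, 1)),
        "b2": np.zeros((1, 1)),
    }

def forward(p, X):
    h = sigmoid(X @ p["W1"] + p["b1"])      # hidden layer, sigmoid activation
    return h, h @ p["W2"] + p["b2"]         # linear output for regression

def gradients(p, X, y):
    h, y_hat = forward(p, X)
    err = (y_hat - y) / len(X)              # gradient of 0.5*MSE w.r.t. predictions
    dW2 = h.T @ err
    db2 = err.sum(axis=0, keepdims=True)
    dh = err @ p["W2"].T * h * (1.0 - h)    # backprop through the sigmoid
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0, keepdims=True)
    return {"W1": dW1, "b1": db1, "W2": dW2, "b2": db2}

def train(momentum=0.0, lr=0.1, epochs=2000):
    p = init_params()
    v = {k: np.zeros_like(val) for k, val in p.items()}   # velocity buffers
    for _ in range(epochs):
        g = gradients(p, X, y)
        for k in p:
            v[k] = momentum * v[k] - lr * g[k]  # MGD update; momentum=0 gives plain GD
            p[k] += v[k]
    _, y_hat = forward(p, X)
    return float(np.mean((y_hat - y) ** 2))

print("MSE, plain gradient descent:   ", train(momentum=0.0))
print("MSE, momentum gradient descent:", train(momentum=0.9))
```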

24 pages, 800 KiB  
Article
Clustered Federated Learning Based on Momentum Gradient Descent for Heterogeneous Data
by Xiaoyi Zhao, Ping Xie, Ling Xing, Gaoyuan Zhang and Huahong Ma
Electronics 2023, 12(9), 1972; https://doi.org/10.3390/electronics12091972 - 24 Apr 2023
Cited by 7 | Viewed by 2371
Abstract
Data heterogeneity may significantly deteriorate the performance of federated learning, since the clients' data distributions are divergent. To mitigate this issue, an effective method is to partition the clients into suitable clusters. However, existing clustered federated learning relies only on the gradient descent method, which leads to poor convergence performance. To accelerate the convergence rate, this paper proposes clustered federated learning based on momentum gradient descent (CFL-MGD), which integrates momentum and clustering techniques. In CFL-MGD, scattered clients are partitioned into the same cluster when they have the same learning task. Meanwhile, each client in the same cluster uses its own private data to update the local model parameters through momentum gradient descent. Moreover, we present gradient averaging and model averaging as two schemes for global aggregation. To understand the behavior of the proposed algorithm, we also prove that CFL-MGD converges at an exponential rate for smooth and strongly convex loss functions. Finally, we validate the effectiveness of CFL-MGD on the CIFAR-10 and MNIST datasets.
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
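The abstract describes CFL-MGD as combining three ingredients: clients with the same learning task are grouped into a cluster, each client runs local momentum gradient descent, and the server aggregates by gradient averaging or model averaging. The sketch below is a hypothetical, minimal illustration of that round structure on linear-regression clients, assuming the cluster assignments are already known and using the model-averaging variant; the data, model and hyperparameters are illustrative and not taken from the paper.

```python
# Minimal sketch (not the paper's implementation): cluster-wise federated
# training where each client runs local momentum gradient descent (MGD) on a
# linear model, followed by model averaging within each cluster.
import numpy as np

rng = np.random.default_rng(1)
D = 5  # feature dimension (illustrative choice)

def make_client(w_true, n=100, noise=0.1):
    X = rng.normal(size=(n, D))
    y = X @ w_true + noise * rng.normal(size=n)
    return X, y

# Two "learning tasks" (clusters) with three clients each; clients in the same
# cluster share the task-specific ground-truth weights.
tasks = [rng.normal(size=D), rng.normal(size=D)]
clusters = {c: [make_client(w) for _ in range(3)] for c, w in enumerate(tasks)}

def local_mgd(w, X, y, lr=0.05, momentum=0.9, steps=20):
    """A few local momentum-GD steps on squared loss; returns the updated model."""
    w, v = w.copy(), np.zeros_like(w)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        v = momentum * v - lr * grad   # momentum (velocity) update
        w += v
    return w

def run_round(models, clusters):
    """One communication round: local MGD per client, then per-cluster model averaging."""
    new_models = {}
    for c, clients in clusters.items():
        local_models = [local_mgd(models[c], X, y) for X, y in clients]
        new_models[c] = np.mean(local_models, axis=0)  # model averaging in the cluster
    return new_models

models = {c: np.zeros(D) for c in clusters}
for _ in range(30):
    models = run_round(models, clusters)

for c, w in models.items():
    print(f"cluster {c}: distance to task optimum = {np.linalg.norm(w - tasks[c]):.4f}")
```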
