Article

Residual Attention Network with Atrous Spatial Pyramid Pooling for Soil Element Estimation in LUCAS Hyperspectral Data

1 Guangxi Key Laboratory of Embedded Technology and Intelligent System, Guilin University of Technology, Guilin 541006, China
2 College of Computer Science and Engineering, Guilin University of Technology, Guilin 541006, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(13), 7457; https://doi.org/10.3390/app15137457
Submission received: 9 June 2025 / Revised: 25 June 2025 / Accepted: 27 June 2025 / Published: 3 July 2025

Abstract

Visible and near-infrared (Vis–NIR) spectroscopy enables the rapid prediction of soil properties but faces three limitations with conventional machine learning: information loss and overfitting from high-dimensional spectral features; inadequate modeling of nonlinear soil–spectra relationships; and failure to integrate multi-scale spatial features. To address these challenges, we propose ReSE-AP Net, a multi-scale attention residual network with spatial pyramid pooling. Built on convolutional residual blocks, the model incorporates a squeeze-and-excitation channel attention mechanism to recalibrate feature weights and an atrous spatial pyramid pooling (ASPP) module to extract multi-resolution spectral features. This architecture synergistically represents weak absorption peaks (400–1000 nm) and broad spectral bands (1000–2500 nm), overcoming single-scale modeling limitations. Validation on the LUCAS2009 dataset demonstrated that ReSE-AP Net outperformed conventional machine learning by improving the R2 by 2.8–36.5% and reducing the RMSE by 14.2–69.2%. Compared with existing deep learning methods, it increased the R2 by 0.4–25.5% for clay, silt, sand, organic carbon, calcium carbonate, and phosphorus predictions, and decreased the RMSE by 0.7–39.0%. Our contributions include statistical analysis of LUCAS2009 spectra, identification of conventional method limitations, development of the ReSE-AP Net model, ablation studies, and comprehensive comparisons with alternative approaches.

1. Introduction

Soil, as an essential component of Earth’s ecosystems, serves not only as the foundational medium for agricultural production [1], but also as a critical mediator sustaining biodiversity and facilitating carbon cycling processes [2]. Under the dual pressures of climate change and human activities, global soil degradation has exhibited an alarming trend [1,3,4,5,6]. This situation highlights the urgent need for precise and efficient soil monitoring technologies to achieve the “Zero Hunger” and “Life on Land” objectives outlined in the United Nations Sustainable Development Goals (SDGs) [7,8,9,10,11,12]. Visible and near-infrared (Vis–NIR) spectroscopy, characterized by its rapid and non-destructive analytical capabilities, has become an integral tool in contemporary soil analysis. Utilizing spectral response information within the wavelength range of 400–2500 nm, Vis–NIR spectroscopy enables the effective assessment of critical soil parameters, such as organic matter and heavy metals, thus offering technological feasibility for large-scale soil surveys [13,14,15,16,17,18].
In terms of methodological frameworks for soil spectral modeling, traditional machine learning techniques have established diversified approaches, including partial least squares regression (PLSR), support vector machine regression (SVR), random forest (RF), ridge regression, and gradient boosting trees (XGBoost). Numerous scholars have conducted extensive research in the field of soil hyperspectral inversion. For instance, P Jia et al. utilized the extremely randomized tree (ERT) model to predict soil electrical conductivity in northwestern China [19], while L Jia et al. employed the marine predators algorithm to optimize random forest models for predicting the soil organic matter content [20]. Wu B et al. applied an optimized XGBoost model to retrieve the soil copper content [21], while Zhang M et al. compared linear models (GWR, PLSR) with nonlinear models (RF, SVM) to predict the arsenic concentration in soils from Pingtan Island [22]. Z Gao et al. inverted the total nitrogen content in apple orchard soils during fertilization using hyperspectral data and various machine learning regression methods [23]. Q Song et al. leveraged UAV hyperspectral data and compared PLSR with ensemble learning models for the inversion of soil textures (sand, silt, clay) [24]. Zhou W et al. combined laboratory-based spectral data with random forest and Bayesian data fusion methods to estimate the soil organic carbon in the Three-River Headwater Region [25]. Zhong Q et al. utilized hyperspectral data in conjunction with extreme learning machine (ELM) and support vector machine (SVM) methods for urban soil nickel concentration inversion [26]. Subi X et al. developed hyperspectral models for soil organic matter (SOM) in arid regions of northwest China, comparing multiple linear regression and machine learning approaches [27]. Chen S et al. applied continuous wavelet transform (CWT) coupled with an extreme learning machine (ELM) for the rapid inversion of soil moisture content [28].
While the above studies have demonstrated considerable success in soil hyperspectral inversion using machine learning, three major issues remain. (1) Existing machine learning models often rely on principal component analysis (PCA) or manual band selection for dimensionality reduction when dealing with large-scale datasets such as the LUCAS2009. However, linear dimensionality reduction methods compromise spectral continuity (e.g., absorption peak shapes and adjacent-band relationships), leading to diminished sensitivity to subtle spectral signals (e.g., heavy metal feature peaks). (2) Even when utilizing nonlinear models (e.g., RF, XGBoost), their tree-based feature splitting mechanisms essentially represent piecewise linear approximations, thus failing to adequately characterize complex nonlinear coupling between soil elements and spectral features. (3) Existing methodologies predominantly adopt single-scale modeling, unable to simultaneously capture local spectral details and global trends, thereby resulting in fragmented cross-band relational information.
Within the methodological framework of deep learning, approaches such as convolutional neural networks (CNN) and long short-term memory networks (LSTM) exhibit significant advantages compared with traditional machine learning paradigms. Specifically, deep learning circumvents limitations associated with manual feature engineering by leveraging autonomous feature learning mechanisms, effectively captures complex higher-order response relationships through deep nonlinear mappings, and enhances practical applicability via end-to-end data-driven modeling frameworks. From a theoretical perspective, these methods demonstrate superiority in feature representational capacity (representation learning), efficiency in processing high-dimensional data (dimensional invariance), multi-task generalization (parameter sharing), and robustness to noise (distributed representations), thereby providing a novel paradigm for modeling complex soil–spectral interactions. Empirically, the suitability and advantages of deep architectures have been validated by previous studies: for instance, Sheng Wang et al. [29] employed an LSTM-based framework to capture dependencies within spectral sequences; Wang H et al. [30] proposed a CNN-LSTM hybrid architecture for the joint extraction of spatial and temporal features; and Li H’s group [31] developed a dual-branch CNN architecture effectively integrating heterogeneous features, achieving breakthroughs in various application scenarios.
Addressing the core issues inherent in traditional methods—specifically the difficulties in processing high-dimensional data, insufficient nonlinear representation, and multi-scale fragmentation—this study introduces a novel deep-learning-based model termed the multi-scale attention residual network (ReSE-AP Net). Building upon the residual convolutional neural network (ResNet) structure, the proposed model incorporates innovations across multiple dimensions. (1) Channel attention mechanism: By embedding a squeeze-and-excitation (SE) module, global average pooling is employed to capture statistical channel-wise feature responses, and a two-layer fully connected network dynamically calibrates feature channels, significantly enhancing the representation of critical spectral regions such as heavy-metal-sensitive bands. (2) Multi-scale feature pyramid construction: An atrous spatial pyramid pooling (ASPP) module based on dilated convolutions is designed to simultaneously capture the local spectral details and global spectral trends through parallel convolutional branches with varying receptive fields (dilation rates of 1, 2, and 4). (3) Hierarchical feature fusion: Employing residual skip-connections to facilitate cross-layer information interaction, local textural features from shallow layers (e.g., baseline reflectance fluctuations) and abstracted nonlinear spectral combinations from deep layers are integrated, creating a multi-granularity feature representation system.
To validate the effectiveness of the proposed model, rigorous comparative experiments were conducted using the LUCAS2009 benchmark dataset. The experiments included traditional machine learning models (PLSR, Ridge, SVR, RF, XGBoost) and mainstream deep learning models (VGG, ResNet, temporal convolutional network (TCN), Transformer). Evaluation metrics employed were the coefficient of determination (R2) and root mean square error (RMSE). Results indicated that the ReSE-AP Net model significantly outperformed traditional machine learning methods across all elements, achieving improvements of 2.8–36.5% in R2 and reductions of 14.2–69.2% in RMSE. Compared with contemporary deep learning models commonly used in the field, the ReSE-AP Net achieved a superior R2 performance for more than half of the soil elements, improving by approximately 0.2–25.5% while maintaining a comparable performance with the best deep learning models for the remaining elements. Moreover, the proposed model consistently exhibited superior RMSE performance, outperforming all other deep learning models except for matching the TCN performance on pH (H2O), demonstrating improvements of approximately 0.7–39.0%, thus confirming its excellent predictive accuracy and generalization capability.

2. Materials and Methods

2.1. Data Sets and Data Processing

The LUCAS 2009 dataset [32], a flagship outcome of the EU-led Land Use/Cover Area frame Survey, is recognized as one of the most representative continental-scale environmental databases in Europe due to its stringent soil sampling and analytical standards. Implementing a systematic 2 km × 2 km grid design, the survey spans the 25 EU Member States and adjacent regions, encompassing approximately 19,000 locations where topsoil (0–20 cm) was sampled with fine granularity. At each site, composite sampling was rigorously applied: five sub-samples were collected in a cross pattern within a 2 m radius centered on the geo-referenced point and subsequently pooled to form a 0.5 kg topsoil sample, thereby objectively representing the soil properties of roughly 4 m2 of land [33]. The sampling network covers diverse land-use categories, including arable land, grassland, and forest, with a particularly high proportion of agricultural sites.
During laboratory analysis, each soil sample underwent standardized pre-treatments, including air-drying, homogenization, and quality control, before being systematically characterized for fifteen core parameters: texture fractions (clay, silt, sand), chemical properties (pH, organic carbon, carbonates, total nitrogen, available phosphorus, and potassium), physical structure (coarse fragment content), and functional attributes (cation exchange capacity). Multispectral reflectance spectra were additionally acquired for a subset of samples, providing multi-dimensional inputs for subsequent soil-health assessments and carbon-stock modeling. Although the dataset’s sampling density (one point per 4 km2) and its emphasis on agricultural land impose constraints on fine-scale ecological studies and analyses of non-agricultural soils, its rigorous stratified sampling scheme, harmonized analytical protocols, and open-access policy render it an indispensable benchmark for evaluating EU agricultural policies, investigating the soil-degradation–climate interactions, and validating remote-sensing inversion models. To date, it continues to play an irreplaceable role in environmental science, agricultural management, and carbon-cycle research.
The rigorous hierarchical sampling framework, standardized analytical methods, and open-access nature of the LUCAS dataset provide a robust foundation for this study. This methodological integrity ensures the reliability of our model’s performance evaluation while minimizing the propagation of errors originating from data inaccuracies. A critical consideration, however, is the dataset’s pronounced skew toward agricultural land cover. Accordingly, the generalizability of our findings to extensive non-agricultural ecosystems warrants further investigation, representing a clear and compelling avenue for subsequent research efforts.

2.1.1. Dataset Statistics and Division

The LUCAS dataset employed in this study comprised 19,036 samples, but individual feature columns exhibited varying degrees of missingness. To mitigate the impact of missing data on model training without incurring excessive data loss, the following strategy was adopted: for any target variable under prediction, only those rows with missing values in that specific column were removed. This approach balances sample size with data integrity, thereby enhancing model robustness and generalization. The remaining data were partitioned into a training set (66.6%) and an independent test set (33.3%), with fivefold cross-validation applied to the training set and 20% of the training samples withheld as a validation subset.
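The per-target row-dropping and splitting strategy described above can be sketched as follows; the random seed, fold shuffling, and function name are illustrative assumptions rather than details taken from the paper:

```python
import numpy as np
from sklearn.model_selection import train_test_split, KFold

def split_for_target(X, y):
    """Drop rows missing the current target, then hold out a test set.

    Sketch of the strategy in the text: rows are removed only when the
    target column itself is missing, the remainder is split roughly
    2:1 into train/test, and fivefold CV is applied to the training set.
    """
    mask = ~np.isnan(y)                      # keep rows where THIS target is present
    X, y = X[mask], y[mask]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1 / 3,
                                              random_state=0)
    folds = list(KFold(n_splits=5, shuffle=True, random_state=0).split(X_tr))
    return X_tr, X_te, y_tr, y_te, folds
```

Because missingness varies by feature column, each target variable ends up with its own (maximal) usable subset rather than one globally row-filtered table.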
Descriptive statistics were computed for each split, including the size (number of observations), mean (arithmetic average, reflecting central tendency), std (standard deviation, measuring dispersion), median (middle value, an alternative measure of central tendency), mode (most frequent value), kurtosis (peakedness, indicating tail heaviness), and Iqr (inter-quartile range, a robust measure of spread). The results are summarized in Table 1.
From a statistical perspective, nutrient-related variables such as organic carbon (OC), CaCO3, total nitrogen (N), phosphorus (P), and potassium (K) display pronounced right-skewness; P and K additionally exhibit leptokurtic, heavy-tailed distributions, implying a prevalence of low values interspersed with a few extreme highs. High coefficients of variation for cation-exchange capacity (CEC) and K highlight marked spatial heterogeneity in soil fertility.
With respect to soil texture, the mean fractions of clay (18.88%), silt (38.23%), and sand (42.88%) indicate that the study region is dominated by sandy loam. The close agreement between the median (37%) and mean for silt suggests an approximately symmetric distribution, whereas the wide Iqr for sand denotes substantial variability in sand content. The large divergence between the mean (49.92) and median (20.80) of OC—together with a kurtosis of 13.53—revealed a mixture of high-organic soils and typical arable soils. Collectively, the dataset captures the pronounced heterogeneity of European soils, posing a non-trivial challenge for predictive modeling. Nevertheless, the training and test sets exhibited strong concordance in key statistics (mean, standard deviation, median), and apart from a minor discrepancy in the Iqr of K, all other parameters maintained stable Iqr values across splits. This indicates a sound data partitioning strategy with no evidence of significant data leakage, thus providing a solid foundation for subsequent model training and evaluation.
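The per-split descriptive statistics listed above (size, mean, std, median, mode, kurtosis, Iqr) can be reproduced with a short helper; this is a minimal sketch, and the kurtosis convention (Fisher/excess, the pandas default) is an assumption, since the paper does not state which it used:

```python
import pandas as pd

def split_summary(s: pd.Series) -> dict:
    """Descriptive statistics for one variable in one data split (Table 1 style)."""
    s = s.dropna()
    return {
        "size": len(s),                                  # number of observations
        "mean": s.mean(),                                # central tendency
        "std": s.std(),                                  # dispersion
        "median": s.median(),                            # robust central tendency
        "mode": s.mode().iloc[0],                        # most frequent value
        "kurtosis": s.kurtosis(),                        # Fisher (excess) kurtosis
        "iqr": s.quantile(0.75) - s.quantile(0.25),      # robust spread
    }
```

Running this on the training and test splits of each variable allows the concordance check described above (stable mean, std, median, and Iqr across splits).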

2.1.2. Data Preprocessing

For data preprocessing, this study employed piecewise pooling averaging (PPA), also known as the bin-averaging method. PPA is a dimensionality-reduction technique that applies local mean pooling to high-dimensional spectral data: the spectrum is partitioned into fixed intervals, and the mean of each interval is computed to generate a compressed feature set. This procedure preserves global trend information while effectively suppressing random noise and lowering computational cost. Given that Vis–NIR spectra typically comprise thousands of bands, PPA was used to condense the original 4200-dimensional spectral vectors to 128 dimensions, substantially improving both computational efficiency and training speed without sacrificing predictive accuracy. Let the original spectral matrix be $A_x \in \mathbb{R}^{m \times n}$, where $m$ is the number of samples and $n$ is the feature dimension, and let $b$ be the target number of bins. The bin width is then given by Formula (1):

$$\mathrm{bin\_size} = \frac{n}{b}$$

The pooling operation of PPA can be expressed as Formula (2):

$$A'_x[:, i] = \frac{1}{\mathrm{bin\_size}} \sum_{k=1}^{\mathrm{bin\_size}} A_x[:, (i-1) \cdot \mathrm{bin\_size} + k]$$

where $i \in \{1, 2, \ldots, b\}$, and the final output is $A'_x \in \mathbb{R}^{m \times b}$.
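A minimal NumPy sketch of PPA under these formulas; since 4200 is not an exact multiple of 128, this version drops the trailing bands that do not fill a whole bin, which is an assumption about remainder handling not stated in the text:

```python
import numpy as np

def ppa(A: np.ndarray, b: int = 128) -> np.ndarray:
    """Piecewise pooling averaging: compress (m, n) spectra to (m, b) bins."""
    m, n = A.shape
    bin_size = n // b                       # Formula (1): integer bin width
    trimmed = A[:, : b * bin_size]          # drop any incomplete final bin
    # Formula (2): mean over each contiguous bin of bin_size bands
    return trimmed.reshape(m, b, bin_size).mean(axis=2)
```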

2.2. Modeling Method

This study proposes a multi-scale attention residual network (ReSE-AP Net) that synergistically integrates residual architecture, channel attention, and multi-scale feature fusion to efficiently decode complex spectral information. Centered on residual convolutions, the network employs skip connections to merge local spectral details with global abstract features, thereby alleviating gradient-vanishing issues. Within each residual block, a squeeze-and-excitation (SE) attention mechanism dynamically enhances responses at critical spectral bands through global feature statistics while suppressing noise. The model further incorporates atrous spatial pyramid pooling (ASPP) to extract spectral features at multiple scales in parallel, simultaneously capturing fine structures of weak absorption peaks and overarching trends of broad spectral ranges. Ultimately, feature fusion followed by nonlinear mapping enables end-to-end prediction, furnishing a robust deep-learning framework for spectral analysis.

2.2.1. Overall Model Structure

The overall architecture of the model is depicted in Figure 1. Training data were first partitioned with a batch size of 320, and piecewise pooling averaging (PPA) was employed to compress the 4200-dimensional spectra to 128 dimensions. After preprocessing, the input tensor had a shape [320,1,128] corresponding to [batch_size,channel,seq_length].
An initial convolutional module was placed at the network front end to extract low-level features; this consists of a convolutional layer (kernel size = 3, padding = 1) followed by a ReLU activation function, thereby introducing nonlinearity. The resulting features are forwarded to a residual network augmented with a squeeze-and-excitation (SE) channel-attention mechanism. This residual network contains two residual blocks, each comprising two convolutional layers—the first expanding the channel dimension and the second maintaining it—together with batch normalization and ReLU activation. Within the main branch of each block, the features produced by the two convolutions are re-weighted by the SE attention to dynamically enhance informative spectral bands and suppress noise. The attention-refined output is then added to the shortcut pathway, forming the residual connection. The shortcut both mitigates gradient vanishing and network degradation and enables lower-level information to flow directly to deeper layers, fostering feature reuse and preventing information loss.
The output of the residual network is subsequently fed into an atrous spatial pyramid pooling (ASPP) module. ASPP comprises three parallel atrous-convolution branches with dilation rates of 1, 2, and 4, respectively, to capture multi-scale features with varying receptive fields. Concurrently, a global-average-pooling branch compresses the sequence dimension to obtain global statistics, which are then restored to the original sequence length via nearest-neighbor upsampling to align with the atrous branches. Features from the atrous convolutions and the global branch are merged in a fusion layer, yielding a composite representation that integrates local, intermediate, large-scale, and global information.
The fused features are further downsampled by a max-pooling layer (kernel size = 2, stride = 2), flattened, and passed through a fully connected layer to produce the final output, enabling end-to-end mapping from spectra to soil-element predictions.
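The stage ordering of Section 2.2.1 can be summarized in a shape-level PyTorch skeleton. The SE-attention and ASPP internals are replaced by plain convolutions here, and all channel widths are illustrative assumptions; only the tensor shapes and the sequence of stages follow the text:

```python
import torch
import torch.nn as nn

class ReSEAPNetSkeleton(nn.Module):
    """Shape-level sketch: stem conv -> two (stand-in) residual blocks ->
    (stand-in) ASPP -> max pool -> flatten -> fully connected head."""
    def __init__(self, seq_len=128, c1=32, c2=64):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv1d(1, c1, 3, padding=1), nn.ReLU())
        # stand-ins for the two SE-residual blocks (first expands channels)
        self.block1 = nn.Sequential(nn.Conv1d(c1, c2, 3, padding=1),
                                    nn.BatchNorm1d(c2), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv1d(c2, c2, 3, padding=1),
                                    nn.BatchNorm1d(c2), nn.ReLU())
        self.aspp = nn.Conv1d(c2, c2, 3, padding=1)      # stand-in for ASPP fusion
        self.pool = nn.MaxPool1d(kernel_size=2, stride=2)
        self.head = nn.Linear(c2 * (seq_len // 2), 1)

    def forward(self, x):                    # x: [batch, 1, 128] after PPA
        h = self.stem(x)
        h = self.block2(self.block1(h))
        h = self.aspp(h)
        h = self.pool(h).flatten(1)
        return self.head(h)                  # [batch, 1] soil-property prediction
```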

2.2.2. SE Attention Mechanism and Residual Convolutional Network

The squeeze-and-excitation (SE) attention mechanism constitutes a canonical form of channel attention, designed to augment the representational capacity of convolutional neural networks while reducing the training overhead. It comprises two principal operations—squeeze and excitation. In the squeeze phase, global average pooling is applied to each channel feature map, collapsing its spatial dimensions to generate a channel descriptor that encapsulates the channel’s global response. This operation is formalized in Equation (3):
$$z_c = \frac{1}{N} \sum_{i=1}^{N} x_{bic}$$

where $x_{bic}$ represents the value of channel $c$ at position $i$ for the $b$-th sample in the batch, $N$ is the total number of elements in the channel, and $z_c$ is the compressed value of channel $c$. During the excitation phase, the squeezed descriptors are passed through a nonlinear transformation to produce a weight vector whose length equals the number of channels, with each element quantifying the importance of its corresponding channel. This operation can be formulated as Equation (4):

$$s = \sigma_{\mathrm{Sigmoid}}\left( W_2 \, \sigma_{\mathrm{ReLU}}\left( W_1 z + b_1 \right) + b_2 \right)$$

Among them, $z = [z_1, z_2, \ldots, z_C]$ is the compressed feature vector, $W_1$ and $W_2$ are the weight parameters of the fully connected layers, $b_1$ and $b_2$ are the bias terms, and $\sigma_{\mathrm{ReLU}}$ and $\sigma_{\mathrm{Sigmoid}}$ are the activation functions. Subsequently, the channel-wise weights generated in the excitation phase are applied to the original feature maps via channel-specific multiplication, thereby re-calibrating the features. This operation can be represented by Equation (5):

$$X_{\mathrm{scaled}} = s \cdot x$$

Among them, $x$ is the original feature and $X_{\mathrm{scaled}}$ is the channel-weighted feature. The squeeze-and-excitation (SE) attention mechanism recalibrates each channel feature map in a convolutional neural network through these two successive operations (squeeze and excitation), thereby markedly enhancing the representational capacity. A schematic of the SE module adopted in this study is illustrated in Figure 2, where B denotes the batch size, C the number of channels, L the sequence length, and r the reduction (compression) ratio.
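Equations (3)–(5) map directly onto a compact PyTorch module for 1-D spectral features; the reduction ratio r = 16 used here is a common default, not a value reported in the text:

```python
import torch
import torch.nn as nn

class SEBlock1d(nn.Module):
    """Squeeze-and-excitation channel attention for [B, C, L] tensors."""
    def __init__(self, channels: int, r: int = 16):
        super().__init__()
        hidden = max(channels // r, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, hidden),   # W1: descriptor -> bottleneck
            nn.ReLU(),
            nn.Linear(hidden, channels),   # W2: bottleneck -> channel weights
            nn.Sigmoid(),
        )

    def forward(self, x):                  # x: [B, C, L]
        z = x.mean(dim=2)                  # squeeze: global average pool, Eq. (3)
        s = self.fc(z)                     # excitation: channel weights, Eq. (4)
        return x * s.unsqueeze(-1)         # recalibration, Eq. (5)
```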
As a paradigmatic deep-network architecture, the residual neural network alleviates the vanishing-gradient problem commonly encountered during the training of very deep models by incorporating residual learning and cross-layer identity mappings, thereby substantially enhancing the feature representation and generalization capabilities. Each residual block in a ResNet can be formulated as Equation (6):
$$y_l = h(x_l) + F(x_l, \{W_i\})$$

where $y_l$ is the output of the $l$-th residual block, $x_l$ is its input, $h(x_l)$ is the skip connection, $F(x_l, \{W_i\})$ is the residual function, and $W_i$ are the weight parameters. In ReSE-AP Net, $F(x_l, \{W_i\})$ is expressed as Equation (7):

$$F(x_l, \{W_i\}) = s \cdot \left( W_2 \cdot \sigma_{\mathrm{ReLU}}(W_1 \cdot x_l + b_1) + b_2 \right)$$

where $W_1$ and $W_2$ are the weight parameters of the two convolutional layers, $b_1$ and $b_2$ are the bias terms, and $s$ is the channel weight vector computed by the SE module. Finally, the overall expression of the SE-attention-weighted residual network is obtained as Equation (8):

$$y_L = x_1 + \sum_{i=1}^{L} F(x_i, \{W_i\})$$

where $x_1$ is the input to the first residual block and $L$ is the total number of residual blocks. Experimental results indicate that excessively deep architectures (e.g., ResNet-152) deteriorate performance in the target task rather than improving it. Detailed analysis attributes this degradation to two primary factors: (i) the inherent parameter redundancy of very deep networks results in a mismatch between model complexity and dataset size, thereby inducing severe overfitting; and (ii) over-parameterized models exhibit gradient instability during back-propagation, substantially complicating training. In response, a streamlined shallow residual architecture is proposed. As illustrated in Figure 3 (where C1, C2, and C3 denote different channel dimensions), the network consists of only two residual blocks, striking a judicious balance between model capacity and computational efficiency. Empirical evidence demonstrates that relative to deeper residual networks, this shallow design preserves the feature-extraction capability while markedly reducing complexity, consequently shortening the per-iteration training time and facilitating rapid model updates.
This work innovatively integrates the squeeze-and-excitation (SE) channel-attention mechanism into the residual network for two principal reasons:
Heterogeneous channel importance. Conventional convolutions treat all channels equally; however, their contributions to the target task vary substantially, and some channels even convey redundant or noisy information. The SE mechanism adaptively learns channel-specific weights, suppressing less informative channels and amplifying pivotal ones, thereby improving feature utilization.
Explicit modeling of inter-channel dependencies. While residual networks mitigate gradient vanishing via skip connections, they do not explicitly model relationships among channels. By employing nonlinear mapping to capture such dependencies, the SE attention further augments representational power.
The formal mathematical definitions and computational procedures of this module are provided in Equations (9)–(13).
$$F'_l = \mathrm{BN}\left( \mathrm{ReLU}\left( \mathrm{Conv1D}(H_l, W_l^{(1)}) \right) \right)$$

$$F''_l = \mathrm{BN}\left( \mathrm{ReLU}\left( \mathrm{Conv1D}(F'_l, W_l^{(2)}) \right) \right)$$

$$S_l = \sigma_{\mathrm{Sigmoid}}\left( W_l^{(fc2)} \cdot \sigma_{\mathrm{ReLU}}\left( W_l^{(fc1)} \cdot \mathrm{GAP}(F''_l) \right) \right)$$

$$F_l^{SE} = F''_l \odot S_l$$

$$H_{l+1} = F_l^{SE} + \mathrm{ShortCut}(H_l)$$

Among them, $\mathrm{BN}(\cdot)$, $\sigma_{\mathrm{ReLU}}(\cdot)$, and $\sigma_{\mathrm{Sigmoid}}(\cdot)$ represent batch normalization and the two activation functions, respectively; $\mathrm{Conv1D}(\cdot)$ denotes one-dimensional convolution; $W_l^{(1)}$ and $W_l^{(2)}$ are the weight parameters of the two convolution operations; $W_l^{(fc1)}$ and $W_l^{(fc2)}$ are the two weight parameters used by the SE attention mechanism when computing the channel weights; $\mathrm{GAP}(\cdot)$ denotes global average pooling (the operator form of Formula (3)); $S_l$ is the weight vector produced by the channel attention mechanism; and $\odot$ denotes channel-wise multiplication. Moreover, incorporating the SE channel-attention mechanism adds only a negligible number of parameters and incurs minimal computational overhead, imparting an inherently lightweight nature that helps maintain training efficiency. In the proposed design, the SE module is inserted at the end of the main branch of each residual block, immediately before the residual summation; performing channel-wise recalibration prior to feature fusion yields a more discriminative combined representation. Because the shortcut branch primarily serves as an unobstructed gradient-flow pathway to alleviate vanishing gradients, no SE module is applied to this branch. Given that the two residual blocks produce feature maps with different channel dimensions, separate SE modules, each matched to its respective dimensionality, are deployed, thereby ensuring dimensional compatibility and preventing cross-interference among the attention weights.
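Equations (9)–(13) can be sketched as a single SE-weighted residual block; the kernel size, padding, reduction ratio, and the 1 × 1 shortcut convolution used when the channel count changes are assumptions not spelled out in the text:

```python
import torch
import torch.nn as nn

class SEResBlock1d(nn.Module):
    """Residual block whose main-branch output is SE-recalibrated
    before the shortcut summation, following Eqs. (9)-(13)."""
    def __init__(self, c_in: int, c_out: int, r: int = 16):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv1d(c_in, c_out, 3, padding=1),
                                   nn.BatchNorm1d(c_out), nn.ReLU())   # Eq. (9)
        self.conv2 = nn.Sequential(nn.Conv1d(c_out, c_out, 3, padding=1),
                                   nn.BatchNorm1d(c_out), nn.ReLU())   # Eq. (10)
        hidden = max(c_out // r, 1)
        self.se_fc = nn.Sequential(nn.Linear(c_out, hidden), nn.ReLU(),
                                   nn.Linear(hidden, c_out), nn.Sigmoid())
        # identity shortcut when shapes match, 1x1 conv otherwise
        self.shortcut = nn.Identity() if c_in == c_out else nn.Conv1d(c_in, c_out, 1)

    def forward(self, h):                                  # h: [B, C_in, L]
        f = self.conv2(self.conv1(h))
        s = self.se_fc(f.mean(dim=2)).unsqueeze(-1)        # Eq. (11): channel weights
        return f * s + self.shortcut(h)                    # Eqs. (12)-(13)
```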

2.2.3. Atrous Spatial Pyramid Pooling

Atrous spatial pyramid pooling (ASPP) is a multi-scale feature-extraction strategy that constructs a pyramid of atrous-convolution branches with distinct dilation rates within a convolutional neural network. By substantially enlarging the effective receptive field without a significant increase in parameters, ASPP enables the network to capture contextual information at multiple spatial scales while preserving feature-map resolution, thereby enhancing its ability to recognize objects of varying sizes. Atrous convolution expands the kernel’s field of view through sparse sampling, circumventing the detail loss usually caused by downsampling, whereas the parallel multi-branch design endows the model with rich scale awareness. The convolution operations corresponding to the different dilation rates are formally defined in Equations (14)–(16).
$$x_1 = \sigma_{\mathrm{ReLU}}\left( W_1 \ast_{r_1} x + b_1 \right)$$

$$x_2 = \sigma_{\mathrm{ReLU}}\left( W_2 \ast_{r_2} x + b_2 \right)$$

$$x_3 = \sigma_{\mathrm{ReLU}}\left( W_3 \ast_{r_3} x + b_3 \right)$$

Among them, $\ast_{r}$ denotes atrous convolution with dilation rate $r$; $W_1$, $W_2$, and $W_3$ are the weight parameters of the different convolutional layers; $b_1$, $b_2$, and $b_3$ are the bias terms; $r_1$, $r_2$, and $r_3$ are the dilation rates; $x$ is the input feature sequence; and $\sigma_{\mathrm{ReLU}}$ is the ReLU activation function. The global-average-pooling branch compresses the input feature map into a global feature vector, refines the channel dimensionality via a 1 × 1 convolution, and subsequently restores the spatial resolution through upsampling. This process provides global contextual information, thereby compensating for the limited receptive field of local convolutions. The corresponding mathematical formulations are presented in Equations (17)–(19).
$$\bar{x}_c = \frac{1}{L} \sum_{i=1}^{L} x_{bic}$$

$$x'_4 = \sigma_{\mathrm{ReLU}}\left( W_4 \cdot \bar{x} + b_4 \right)$$

$$x_4 = \mathbf{1}_L \cdot x'^{\top}_4$$

Among them, $x_{bic}$ represents the value of channel $c$ at position $i$ for the $b$-th sample in the batch, $L$ is the total length of the sequence, $\bar{x}_c$ is the compressed value of channel $c$ with $\bar{x} = [\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_C]$, and $\mathbf{1}_L$ is the length-$L$ column vector of all ones, which broadcasts the pooled vector back to the original sequence length. Subsequently, the features extracted from the multi-scale atrous-convolution branches and the global-average-pooling branch are concatenated along the channel dimension and fused via a 1 × 1 convolution to realize cross-scale interaction and compression, as formulated in Equations (20) and (21).
$$x_{cat} = [x_1, x_2, x_3, x_4]$$

$$y_{out} = \sigma_{\mathrm{ReLU}}\left( W_5 \cdot x_{cat} + b_5 \right)$$

Among them, $x_{cat}$ is the result of concatenating the features produced by the convolutions with different receptive fields with the features obtained from global pooling, and $y_{out}$ is the cross-scale fused feature output after the fusion convolution.
In the present task, distinct spectral bands in hyperspectral data correspond to characteristic absorption features of various substances. The ASPP module, with its parallel multi-branch design, concurrently captures local details (small dilation rate), medium- to long-range dependencies (large dilation rate), and global context (pooling branch). The fused multi-scale features enhance the model’s robustness to spectral noise and local occlusions, rendering ASPP particularly well-suited to the high spectral dimensionality of hyperspectral data. The ASPP architecture implemented in this study is illustrated in Figure 4, where B denotes the batch size, C is the number of channels, and L is the sequence length. Within the overall framework, the ResNet backbone extracts deep representations via residual connections but may overlook cross-scale contextual information; the ASPP module refines these high-level features at multiple scales. Simultaneously, the SE module in the residual network focuses on channel-wise importance, whereas ASPP emphasizes spatial multi-scale information. Their combination realizes “channel-spatial” dual-attention, markedly enhancing the expression of salient features. The complete mathematical formulation of this module is provided in Equations (22)–(26).
C_1 = ReLU(Conv1D(H, W_1))    (22)
C_2 = ReLU(Conv1D(H, W_2))    (23)
C_4 = ReLU(Conv1D(H, W_4))    (24)
G = Upsample(GAP(H))    (25)
F_fusion = Concat(C_1, C_2, C_4, G)    (26)
Here, C_1, C_2, and C_4 denote the outputs of the three atrous convolutions with different dilation rates, H is the input received by the ASPP module, W_1, W_2, and W_4 are the weight parameters of the three convolutions, Upsample(·) denotes upsampling, Conv1D(·) denotes one-dimensional convolution, GAP(·) denotes global average pooling, and Concat(·) denotes the concatenation and fusion of C_1, C_2, C_4, and G.
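For concreteness, Equations (22)–(26) can be sketched as a one-dimensional ASPP module in PyTorch. The dilation rates (1, 2, 4), kernel size, and channel counts below are illustrative assumptions rather than the exact configuration used in this paper.

```python
import torch
import torch.nn as nn

class ASPP1D(nn.Module):
    """Sketch of a 1D atrous spatial pyramid pooling block (Eqs. 22-26).
    Dilation rates, kernel size, and channel counts are illustrative."""
    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4)):
        super().__init__()
        # Parallel atrous branches: same kernel size, different dilation rates
        # (padding = dilation keeps the sequence length unchanged).
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # Global-average-pooling branch (Eq. 25).
        self.gap = nn.AdaptiveAvgPool1d(1)
        self.gap_conv = nn.Conv1d(in_ch, out_ch, kernel_size=1)
        # 1x1 convolution fusing the concatenated branches (Eqs. 20-21, 26).
        self.fuse = nn.Conv1d(out_ch * (len(dilations) + 1), out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, L)
        feats = [branch(x) for branch in self.branches]
        g = self.gap_conv(self.gap(x))                        # (B, out_ch, 1)
        g = nn.functional.interpolate(g, size=x.shape[-1])    # upsample to L
        feats.append(g)
        return torch.relu(self.fuse(torch.cat(feats, dim=1)))
```

For an input of shape (B, C, L), the module returns a fused feature map with the same sequence length, so it can be dropped behind any 1D backbone without altering the temporal resolution.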

2.2.4. Model Evaluation

In this study, model performance was assessed using the coefficient of determination (R2) and the root mean square error (RMSE). The coefficient of determination quantifies the degree of correspondence between the predicted and observed values, representing the proportion of variance in the response variable that is accounted for by the predictive model; an R2 value approaching 1 indicates a superior goodness of fit. RMSE measures the average discrepancy between the predicted and observed values, thereby reflecting the overall predictive accuracy; a lower RMSE denotes reduced error and more precise predictions. The mathematical formulations of R2 and RMSE are provided in Equations (27) and (28), respectively.
R^2 = 1 − [Σ_{i=1}^{n} (y_i − ŷ_i)^2] / [Σ_{i=1}^{n} (y_i − ȳ)^2]    (27)
RMSE = sqrt( (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)^2 )    (28)
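The two metrics follow directly from Equations (27) and (28); the snippet below is a generic implementation for reference, not the authors' evaluation code.

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination, Eq. (27)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)         # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root mean square error, Eq. (28)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Perfect predictions give R2 = 1 and RMSE = 0.
y = [1.0, 2.0, 3.0, 4.0]
print(r2_score(y, y), rmse(y, y))  # 1.0 0.0
```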

2.2.5. Experimental Setup

The experiments were conducted on a system equipped with a 14th-generation Intel Core i7-14700HX processor (20 cores/28 threads; Intel, Santa Clara, CA, USA) and an NVIDIA GeForce RTX 4060 GPU (NVIDIA, Santa Clara, CA, USA). The software environment comprised Windows 11 as the operating system, Python 3.11 as the programming language, and PyTorch 2.3.0 as the deep-learning framework. Under this configuration, each training epoch took approximately 40 s and occupied about 14 GB of memory.

3. Experimental Results

3.1. Ablation Experiment

The study conducted a systematic evaluation of individual model components, namely the residual block alone (ResBlock), the residual network with channel attention (ResNet + SE), and the complete multi-scale attention residual network (ReSE-AP Net). The performance outcomes are summarized in Table 2 (comparison of R2) and Table 3 (comparison of RMSE). The data revealed that augmenting the baseline ResBlock with an SE attention mechanism yielded an average increase of 3.3% in R2 and a 7.5% reduction in RMSE, indicating that channel-wise recalibration markedly improves the feature discriminability. Incorporating ASPP on top of the ResNet + SE architecture further enhanced the average R2 by 2.1% and reduced the RMSE by an additional 6.0%. Relative to the standalone ResBlock, the full ReSE-AP Net achieved a 5.2% average gain in R2 and a 13.1% decrease in RMSE. These findings substantiate the efficacy of parallel multi-receptive-field extraction in strengthening cross-scale feature representation.
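The ablation variants compared above can be illustrated with a minimal 1D residual block in which the SE channel attention is switchable. The kernel size, reduction ratio, and layer arrangement are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn as nn

class SEResBlock1D(nn.Module):
    """Residual block with optional squeeze-and-excitation channel attention.
    Hyperparameters (kernel size 3, reduction ratio) are illustrative."""
    def __init__(self, channels: int, use_se: bool = True, reduction: int = 4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, 3, padding=1),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels, channels, 3, padding=1),
            nn.BatchNorm1d(channels),
        )
        self.use_se = use_se
        if use_se:
            # Squeeze: global average pool; excite: bottleneck + sigmoid gate.
            self.se = nn.Sequential(
                nn.AdaptiveAvgPool1d(1),
                nn.Conv1d(channels, channels // reduction, 1),
                nn.ReLU(inplace=True),
                nn.Conv1d(channels // reduction, channels, 1),
                nn.Sigmoid(),
            )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.body(x)
        if self.use_se:
            out = out * self.se(out)  # recalibrate channel weights
        return torch.relu(out + x)    # residual connection
```

Setting `use_se=False` recovers the plain ResBlock baseline, so the two ablation rows differ only in the channel-attention branch.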

3.2. Comparative Experiment

To further assess the practical predictive capability of ReSE-AP Net, a series of benchmark models were selected for comparison, including conventional machine learning algorithms (PLSR, Ridge, SVR, RF, XGBoost) and deep-learning architectures commonly used in this domain (VGG-11, TCN, ResNet-152, Transformer). All models were trained on the LUCAS 2009 dataset under identical preprocessing procedures to ensure experimental fairness. The training hyper-parameters were uniformly set as follows: 3000 epochs, a learning rate of 0.0003175, a batch size of 320, the Adam optimizer, and mean squared error (MSE) loss. The resulting performance on the independent test set is summarized in Table 4 (comparison of R2) and Table 5 (comparison of RMSE).
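A minimal sketch of this shared training setup follows. The model and data are random placeholders standing in for ReSE-AP Net and the LUCAS spectra; only the optimizer, loss, and hyper-parameter values mirror the text, and the loop runs a few steps rather than 3000 epochs.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hyper-parameters shared by all compared models (as stated in the text).
EPOCHS, LR, BATCH_SIZE = 3000, 0.0003175, 320

# Placeholder model and data standing in for ReSE-AP Net and LUCAS spectra.
model = nn.Sequential(nn.Linear(200, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=LR)
criterion = nn.MSELoss()

x = torch.randn(BATCH_SIZE, 200)  # one batch of dimensionality-reduced spectra
y = torch.randn(BATCH_SIZE, 1)    # one target soil property

losses = []
for _ in range(5):                # a few steps instead of the full schedule
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
```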
As shown in Table 4 and Table 5, partial least squares regression (PLSR) delivered the best overall performance among the conventional machine learning models, followed by support vector regression (SVR). Within the deep-learning cohort, the Transformer architecture achieved the highest aggregate accuracy across elemental predictions. In contrast, ReSE-AP Net outperformed all machine learning models for every element: for OC, its R2 exceeded that of the best machine learning performer (SVR) by roughly 2.8% and surpassed the poorest performer (random forest, RF) by about 9.2%. For the most divergent indicator, K, ReSE-AP Net improved upon PLSR and RF by approximately 18.63% and 41.79%, respectively. Compared with the deep-learning baselines, ReSE-AP Net surpassed the Transformer model on six of the eleven evaluated properties (clay, silt, sand, OC, CaCO3, and P) and remained comparable on the rest, underscoring the proposed network's superior capability for multi-element hyperspectral prediction.
With respect to the RMSE metric, PLSR again delivered the lowest prediction errors among the traditional machine learning models, followed by SVR. Within the deep learning cohort, the Transformer architecture achieved the best RMSE on over half of the evaluated indicators, with VGG-11 ranking second. In contrast, ReSE-AP Net consistently surpassed all of the baseline models. For the most closely matched indicator, phosphorus (P), ReSE-AP Net reduced the RMSE by approximately 14.2% relative to the best machine learning performer (PLSR) and by about 25.2% relative to the poorest performer (RF). For calcium carbonate (CaCO3), the indicator exhibiting the largest discrepancy, ReSE-AP Net lowered the RMSE by roughly 41.9% compared with PLSR and by 69.2% compared with RF. Against the deep learning baselines, ReSE-AP Net outperformed all counterparts on every indicator except the pH in H2O, where it tied with TCN. For silt, the closest indicator among the deep models, ReSE-AP Net achieved a further 1.6% reduction in RMSE over the best deep model (VGG-11) and a 24.4% reduction relative to the worst (ResNet-152). For the pH in CaCl2, which showed the greatest gap, ReSE-AP Net lowered the RMSE by about 1% compared with TCN and by approximately 39.0% compared with ResNet-152. It is also worth noting that the mean RMSE of the other models (excluding ReSE-AP Net) across the 11 properties was 6.526% (clay), 11.838% (silt), 15.494% (sand), 0.584 (pH in CaCl2), 0.555 (pH in H2O), 24.611 g/kg (OC), 41.107 g/kg (CaCO3), 1.293 g/kg (N), 28.261 mg/kg (P), 190.939 mg/kg (K), and 7.876 cmol(+)/kg (CEC); relative to these means, ReSE-AP Net reduced the RMSE by approximately 26.9% (clay), 22.2% (silt), 25.7% (sand), 39.5% (pH in CaCl2), 34.6% (pH in H2O), 31.4% (OC), 44.8% (CaCO3), 28.7% (N), 17.9% (P), 21.8% (K), and 21.5% (CEC).
As illustrated in Figure 5 and Figure 6, Figure 5 compares the coefficient of determination (R2) obtained by the competing models for each soil attribute, whereas Figure 6 presents the corresponding root mean square error (RMSE). In Figure 5, the closer a model’s curve was to the upper boundary, the better its predictive performance; conversely, in Figure 6, curves located nearer the lower boundary indicated smaller errors and hence superior accuracy. Within the same plot, greater separations between the two curves signified more pronounced performance disparities. The results revealed that with the exception of the P, K, and CEC indicators, ReSE-AP Net occupied the uppermost—or nearly uppermost—position across all remaining attributes in Figure 5. Likewise, in Figure 6, the ReSE-AP Net curve lay at the very bottom for almost every element, underscoring its outstanding overall predictive accuracy.

4. Further Evaluation and Discussion

To further quantify and evaluate the model’s fitting quality and overall performance, scatter plots of the observed versus predicted values were generated (Figure 7). Each plot contained numerous points and two curves: the red line denoted y = x, representing the ideal scenario in which the predicted values perfectly match the observations, whereas the blue curve corresponded to the regression fit of the model’s predictions. Point density was color-coded, with deeper (reddish) hues indicating higher concentrations of samples. In every plot, the red and blue curves intersected; this intersection marks the point at which the predicted and observed values are equal. When the intersection lay within the region of highest point density, the model exhibited a superior fitting performance and robustness within the principal data distribution. The results show that for most soil-element predictions, the intersection of the curves for ReSE-AP Net fell within the densest region, confirming its strong predictive capability and generalization. Nonetheless, for the pH and sand indicators, the intersection only appeared in relatively dense regions rather than the densest area, suggesting that the model’s performance on these two attributes could be further improved.
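Observed-versus-predicted plots of this kind can be produced along the following lines. The data here are synthetic, and the hexbin density rendering is an assumption about presentation, not necessarily the authors' exact plotting method.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless rendering
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
y_true = rng.normal(20, 10, 2000)         # stand-in for observed values
y_pred = y_true + rng.normal(0, 3, 2000)  # stand-in for model predictions

# Least-squares regression of predictions on observations (the blue curve).
slope, intercept = np.polyfit(y_true, y_pred, 1)

fig, ax = plt.subplots()
ax.hexbin(y_true, y_pred, gridsize=40, cmap="Reds")  # density-coded points
lims = [y_true.min(), y_true.max()]
ax.plot(lims, lims, "r-", label="y = x")             # ideal 1:1 line
ax.plot(lims, [slope * l + intercept for l in lims], "b-", label="fit")
ax.set_xlabel("Observed")
ax.set_ylabel("Predicted")
ax.legend()
fig.savefig("scatter_density.png")
```

The intersection of the two lines (where predicted equals observed) can then be read off against the density shading, as done in the discussion above.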
It is pertinent to contrast our ReSE-AP Net with recent related works that also leverage ASPP-like structures for hyperspectral data analysis, notably the contributions from Liu et al. [34] and Liu et al. [35].
Liu et al. [34] ingeniously adapted the ResNet-50 architecture for weed detection by replacing its latter stages with an ASPP module. While this design proved effective for their specific task, our preliminary experiments indicated that deeper networks, such as ResNet-152, did not necessarily yield a superior performance in our soil property prediction context, suggesting that the optimal network depth is task-dependent. Furthermore, their model lacked an explicit attention mechanism, which we identified as a key component for refining spectral features. In contrast, ReSE-AP Net is architecturally optimized in two ways: first, it employs a residual network of a deliberately chosen, more moderate depth to prevent overfitting and capture salient features effectively, and second, it integrates the SE channel attention mechanism within the feature extraction backbone, enabling progressive feature refinement and noise suppression.
In another relevant study, Liu et al. [35] proposed RAANet for semantic segmentation, which innovatively incorporates a residual structure within the ASPP module itself and deploys a dense arrangement of attention modules both inside and outside the ASPP. While this approach is novel and effective, its primary focus is on a complex, attention-augmented ASPP, with comparatively less emphasis on the initial deep feature extraction process. This may risk underutilizing the rich information embedded in the original hyperspectral data. ReSE-AP Net adopts a fundamentally different strategy by prioritizing the front-end feature extraction. Our model leverages a synergistic combination of residual connections and SE attention to ensure that features are comprehensively extracted and purified before they are channeled into the multi-scale analysis stage. This strategic divergence underscores the unique architectural philosophy of our approach.
In summary, the novelty of ReSE-AP Net, when benchmarked against these state-of-the-art models, is threefold:
(1)
Endogenous refinement through deeply embedded attention: channel attention is embedded within each fundamental building block of the feature-extraction backbone, facilitating a progressive, layer-by-layer purification of spectral features and fundamentally enhancing the quality of the feature maps that are subsequently fed into the multi-scale analysis module.
(2)
We propose a novel two-stage architectural paradigm with a clear division of labor: a front-end network dedicated to feature purification and a back-end module focused on multi-scale fusion. This represents a strategic innovation over existing models that either lack a purification stage or conflate it with multi-scale analysis.
(3)
We successfully adapted and validated the efficacy of the ASPP module, a technique predominantly used in 2D image processing, for the task of one-dimensional hyperspectral inversion. Our results confirm that ASPP is a highly effective tool for capturing multi-scale contextual information within 1D spectral data, thereby establishing its utility for this new domain.

5. Conclusions

This study first elucidated the significance of soil-element prediction and its relevance to sustainable agriculture. The publicly available LUCAS 2009 soil dataset was then introduced, outliers were removed, and a suite of descriptive statistics (sample size, mean, standard deviation, median, mode, kurtosis, and interquartile range) was computed and interpreted to demonstrate the scientific soundness of the data partitioning strategy. After data cleansing, piecewise pooling averaging (PPA) was applied to reduce the dimensionality of the spectral inputs. Building on these preparations, a multi-scale attention residual network based on spatial pyramid pooling (ReSE-AP Net) was proposed and employed for visible–near-infrared (Vis–NIR) hyperspectral inversion of multiple soil elements on the LUCAS 2009 dataset. The model extracts initial features via a front-end convolutional layer, propagates salient information through residual blocks augmented with SE channel attention, and enhances predictive accuracy and robustness by leveraging multi-scale feature extraction and fusion within the ASPP module. Experimental results showed that ReSE-AP Net outperformed all mainstream traditional machine learning models and equaled or surpassed widely used deep learning architectures, with particularly strong performance in terms of RMSE; its success on a publicly available dataset further attests to its generalization capability.
Despite the demonstrated robustness and high performance of the proposed ReSE-AP Net, we acknowledge several limitations that warrant discussion and outline clear avenues for future research.
(1)
The reliance on the LUCAS 2009 dataset, while ensuring high data quality and standardization, introduced a potential bias. The dataset is geographically confined to Europe and is predominantly composed of agricultural soils. Consequently, the model’s generalizability to other geographical regions, diverse land-use types (e.g., forests, wetlands), or less standardized, private datasets remains an open question requiring empirical validation. Future work will therefore focus on acquiring and testing the model on such heterogeneous datasets to rigorously assess its real-world applicability.
(2)
A nuanced analysis of the performance metrics revealed a noteworthy finding. Although ReSE-AP Net surpassed all baseline models in terms of RMSE across all soil properties, its performance on the R2 metric for pH, N, and K was merely on par with the Transformer architecture. We hypothesize two complementary reasons for this observation. One pertains to the inherent inductive biases of the models: the Transformer’s self-attention mechanism may be more adept at capturing the global, long-range spectral dependencies upon which the prediction of these particular elements relies, whereas our CNN-based model excels at leveraging local features. The other reason, suggested by the superior RMSE of our model, is that ReSE-AP Net achieves exceptional accuracy on the majority of samples within the central data distribution but may be less effective than the Transformer at fitting the extreme values that heavily influence the R2 score. This indicates a clear opportunity for refinement.
To address this, our immediate future work will concentrate on enhancing the model architecture. A primary strategy will be to introduce an adaptive weighting mechanism within the ASPP module. This mechanism will be designed to dynamically assign weights during the fusion of multi-scale convolutional features, thereby amplifying salient feature information while suppressing irrelevant noise. In principle, such a modification should augment the feature fusion capability of the ASPP module, leading to an overall improvement in the model’s predictive power, especially in capturing the full variance of the data. This promising direction is currently under active investigation.
In conclusion, while acknowledging these areas for further improvement, the ReSE-AP Net model, as presented, demonstrates strong predictive capabilities for a wide range of soil elements, offering a valuable and high-performance benchmark for the field of soil spectroscopy.

Author Contributions

Conceptualization, Y.D. and Y.C.; Methodology, Y.C.; Software, Y.C.; Validation, S.C., Y.C. and Y.D.; Formal analysis, X.C.; Investigation, Y.C.; Resources, Y.D.; Data curation, Y.D.; Writing—original draft preparation, Y.C.; Writing—review and editing, Y.D.; Supervision, X.C.; Project administration, Y.D.; Funding acquisition, X.C. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the Guangxi Key Research and Development Program (GuikeAB24010338, GuikeAB25069340), the National Natural Science Foundation of China (32360374), and the Innovation Project of Guangxi Graduate Education (YCSW2025405).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available from the public LUCAS dataset: https://esdac.jrc.ec.europa.eu/projects/lucas (accessed on 1 March 2025).

Acknowledgments

We would like to express our gratitude to all of the researchers who participated in the experiment for their efforts. Meanwhile, we also wish to thank the institutions that provided us with financial assistance. At the same time, we declare that we have not used any artificial intelligence tools to manipulate and generate any experimental data and results.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gomiero, T. Soil degradation, land scarcity and food security: Reviewing a complex challenge. Sustainability 2016, 8, 281. [Google Scholar] [CrossRef]
  2. Sun, W.; Huang, Y.; Zhang, W.; Yongqiang, Y. Carbon sequestration and its potential in agricultural soils of China. Glob. Biogeochem. Cycles 2010, 24. [Google Scholar] [CrossRef]
  3. Mandal, D.; Roy, T. Climate Change Impact on Soil Erosion and Land Degradation. In Climate Change Impacts on Soil-Plant-Atmosphere Continuum; Springer Nature: Singapore, 2024; pp. 139–161. [Google Scholar]
  4. Rhodes, C.J. Soil erosion, climate change and global food security: Challenges and strategies. Sci. Prog. 2014, 97, 97–153. [Google Scholar] [CrossRef] [PubMed]
  5. Prăvălie, R. Exploring the multiple land degradation pathways across the planet. Earth-Sci. Rev. 2021, 220, 103689. [Google Scholar] [CrossRef]
  6. Bhattacharyya, R.; Ghosh, B.N.; Mishra, P.K.; Mandal, B.; Rao, C.S.; Sarkar, D.; Das, K.; Anil, K.S.; Lalitha, M.; Hati, K.M.; et al. Soil degradation in India: Challenges and potential solutions. Sustainability 2015, 7, 3528–3570. [Google Scholar] [CrossRef]
  7. Lal, R.; Bouma, J.; Brevik, E.; Dawson, L.; Field, D.J.; Glaser, B.; Hatano, R.; Hartemink, A.E.; Kosaki, T.; Lascelles, B.; et al. Soils and sustainable development goals of the United Nations: An International Union of Soil Sciences perspective. Geoderma Reg. 2021, 25, e00398. [Google Scholar] [CrossRef]
  8. Mikhailova, E.A.; Zurqani, H.A.; Lin, L.; Hao, Z.; Post, C.J.; Schlautman, M.A.; Shepherd, G.B. Opportunities for monitoring soil and land development to support United Nations (UN) Sustainable Development Goals (SDGs): A Case study of the United States of America (USA). Land 2023, 12, 1853. [Google Scholar] [CrossRef]
  9. Mikhailova, E.A.; Post, C.J.; Nelson, D.G. Integrating United Nations Sustainable Development Goals in Soil Science Education. Soil Syst. 2024, 8, 29. [Google Scholar] [CrossRef]
  10. Pandey, P.C.; Pandey, M. Highlighting the role of agriculture and geospatial technology in food security and sustainable development goals. Sustain. Dev. 2023, 31, 3175–3195. [Google Scholar] [CrossRef]
  11. Bouma, J. Contributing pedological expertise towards achieving the United Nations sustainable development goals. Geoderma 2020, 375, 114508. [Google Scholar] [CrossRef]
  12. Atukunda, P.; Eide, W.B.; Kardel, K.R.; Iversen, P.O.; Westerberg, A.C. Unlocking the potential for achievement of the UN Sustainable Development Goal 2–‘Zero Hunger’–in Africa: Targets, strategies, synergies and challenges. Food Nutr. Res. 2021, 65, 10–29219. [Google Scholar] [CrossRef] [PubMed]
  13. Olatunde, K.A. Soil Characterization Using Visible Near Infrared Diffuse Reflectance Spectroscopy (VNIR DRS). Ph.D. Thesis, University of Reading, Berkshire, UK, 2018. [Google Scholar]
  14. Luo, B.; Sun, H.; Zhang, L.; Chen, F.; Wu, K. Advances in the tea plants phenotyping using hyperspectral imaging technology. Front. Plant Sci. 2024, 15, 1442225. [Google Scholar] [CrossRef]
  15. Piccini, C.; Metzger, K.; Debaene, G.; Stenberg, B.; Götzinger, S.; Borůvka, L.; Sandén, T.; Bragazza, L.; Liebisch, F. In-field soil spectroscopy in Vis–NIR range for fast and reliable soil analysis: A review. Eur. J. Soil Sci. 2024, 75, e13481. [Google Scholar] [CrossRef]
  16. Stenberg, B.; Rossel, R.A.V.; Mouazen, A.M.; Wetterlind, J. Visible and near infrared spectroscopy in soil science. Adv. Agron. 2010, 107, 163–215. [Google Scholar]
  17. Leone, A.P.A.; Viscarra-Rossel, R.; Amenta, P.; Buondonno, A. Prediction of soil properties with PLSR and vis-NIR spectroscopy: Application to mediterranean soils from Southern Italy. Curr. Anal. Chem. 2012, 8, 283–299. [Google Scholar] [CrossRef]
  18. Zhao, L.Y.M.H.; Zhou, W.; Liu, Z.-H.; Pan, Y.-C.; Shi, Z.; Wang, G.-X. Estimation methods for soil mercury content using hyperspectral remote sensing. Sustainability 2018, 10, 2474. [Google Scholar] [CrossRef]
  19. Jia, P.; Zhang, J.; He, W.; Hu, Y.; Zeng, R.; Zamanian, K.; Jia, K.; Zhao, X. Combination of hyperspectral and machine learning to invert soil electrical conductivity. Remote Sens. 2022, 14, 2602. [Google Scholar] [CrossRef]
  20. Jia, L.; Zu, W.; Yang, F.; Gao, L.; Gu, G.; Zhao, M. Estimating Organic Matter Content in Hyperspectral Wetland Soil Using Marine-Predators-Algorithm-Based Random Forest and Multiple Differential Transformations. Appl. Sci. 2023, 13, 10693. [Google Scholar] [CrossRef]
  21. Wu, B.; Yang, K.; Li, Y.; He, J. Hyperspectral Inversion of Heavy Metal Copper Content in Corn Leaves Based on DRS–XGBoost. Sustainability 2023, 15, 16770. [Google Scholar] [CrossRef]
  22. Zheng, M.; Luan, H.; Liu, G.; Sha, J.; Duan, Z.; Wang, L. Ground-based hyperspectral retrieval of soil arsenic concentration in Pingtan island, China. Remote Sens. 2023, 15, 4349. [Google Scholar] [CrossRef]
  23. Gao, Z.; Wang, W.; Wang, H.; Li, R. Selection of Spectral Parameters and Optimization of Estimation Models for Soil Total Nitrogen Content During Fertilization Period in Apple Orchards. Horticulturae 2024, 10, 358. [Google Scholar] [CrossRef]
  24. Song, Q.; Gao, X.; Song, Y.; Li, Q.; Chen, Z.; Li, R.; Zhang, H. Estimation and mapping of soil texture content based on unmanned aerial vehicle hyperspectral imaging. Sci. Rep. 2023, 13, 14097. [Google Scholar] [CrossRef] [PubMed]
  25. Zhou, W.; Li, H.; Wen, S.; Xie, L.; Wang, T.; Tian, Y.; Yu, W. Simulation of soil organic carbon content based on laboratory spectrum in the three-rivers source region of China. Remote Sens. 2022, 14, 1521. [Google Scholar] [CrossRef]
  26. Zhong, Q.; Eziz, M.; Sawut, R.; Ainiwaer, M.; Li, H.; Wang, L. Application of a hyperspectral remote sensing model for the inversion of nickel content in urban soil. Sustainability 2023, 15, 13948. [Google Scholar] [CrossRef]
  27. Subi, X.; Eziz, M.; Zhong, Q. Hyperspectral Estimation Model of Organic Matter Content in Farmland Soil in the Arid Zone. Sustainability 2023, 15, 13719. [Google Scholar] [CrossRef]
  28. Chen, S.; Gao, J.; Loum, F.; Tuo, Y.; Tan, S.; Shan, Y.; Luo, L.; Xu, Z.; Zhang, Z.; Huang, X. Rapid estimation of soil water content based on hyperspectral reflectance combined with continuous wavelet transform, feature extraction, and extreme learning machine. PeerJ 2024, 12, e17954. [Google Scholar] [CrossRef]
  29. Wang, S.; Guan, K.; Zhang, C.; Lee, D.; Margenot, A.J.; Ge, Y.; Peng, J.; Zhou, W.; Zhou, Q.; Huang, Y. Using soil library hyperspectral reflectance and machine learning to predict soil organic carbon: Assessing potential of airborne and spaceborne optical soil sensing. Remote Sens. Environ. 2022, 271, 112914. [Google Scholar] [CrossRef]
  30. Wang, H.; Zhang, L.; Zhao, J.; Hu, X.; Ma, X. Application of hyperspectral technology combined with genetic algorithm to optimize convolution long-and short-memory hybrid neural network model in soil moisture and organic matter. Appl. Sci. 2022, 12, 10333. [Google Scholar] [CrossRef]
  31. Li, H.; Ju, W.; Song, Y.; Cao, Y.; Yang, W.; Li, M. Soil organic matter content prediction based on two-branch convolutional neural network combining image and spectral features. Comput. Electron. Agric. 2024, 217, 108561. [Google Scholar] [CrossRef]
  32. Toth, G.; Jones, A.; Montanarella, L.; Alewell, C.; Ballabio, C.; Carre, F.; De Brogniez, D.; Guicharnaud, R.A.; Gardi, C.; Hermann, T.; et al. LUCAS Topsoil Survey—Methodology, Data and Results; Publications Office of the European Union: Luxembourg, 2013. [Google Scholar]
  33. Cao, L.; Sun, M.; Yang, Z.; Jiang, D.; Yin, D.; Duan, Y. A novel transformer-CNN approach for predicting soil properties from LUCAS Vis-NIR spectral data. Agronomy 2024, 14, 1998. [Google Scholar] [CrossRef]
  34. Liu, T.; Zhao, Y.; Wang, H.; Wu, W.; Yang, T.; Zhang, W.; Zhu, S.; Sun, C.; Yao, Z. Harnessing UAVs and deep learning for accurate grass weed detection in wheat fields: A study on biomass and yield implications. Plant Methods 2024, 20, 144. [Google Scholar] [CrossRef] [PubMed]
  35. Liu, R.; Tao, F.; Liu, X.; Na, J.; Leng, H.; Wu, J.; Zhou, T. RAANet: A residual ASPP with attention framework for semantic segmentation of high-resolution remote sensing images. Remote Sens. 2022, 14, 3109. [Google Scholar] [CrossRef]
Figure 1. Overall architecture of ReSE-AP Net.
Figure 2. SE attention mechanism.
Figure 3. Residual convolutional network structure.
Figure 4. Atrous spatial pyramid pooling (ASPP) structure.
Figure 5. Comparison chart of R2.
Figure 6. RMSE comparison chart.
Figure 7. Scatter plot of the fitting of the true values and predicted values.
Table 1. Dataset statistics table.
| Element | Set | Size | Mean | Std | Median | Mode | Kurtosis | IQR |
|---|---|---|---|---|---|---|---|---|
| Clay (%) | Complete | 17,939 | 18.88 | 13.00 | 17.00 | 4.00 | 0.69 | 18.50 |
| Clay (%) | Test | 5980 | 18.88 | 13.01 | 17.00 | 4.00 | 0.69 | 18.25 |
| Clay (%) | Train | 11,959 | 18.89 | 13.00 | 17.00 | 4.00 | 0.69 | 18.50 |
| Silt (%) | Complete | 17,939 | 38.23 | 18.30 | 37.00 | 32.00 | −0.54 | 26.00 |
| Silt (%) | Test | 5980 | 38.23 | 18.30 | 37.00 | 32.00 | −0.54 | 26.00 |
| Silt (%) | Train | 11,959 | 38.23 | 18.30 | 37.00 | 32.00 | −0.54 | 26.00 |
| Sand (%) | Complete | 17,939 | 42.88 | 26.11 | 42.00 | 5.00 | −1.10 | 45.00 |
| Sand (%) | Test | 5980 | 42.87 | 26.11 | 42.00 | 5.00 | −1.10 | 45.00 |
| Sand (%) | Train | 11,959 | 42.89 | 26.11 | 42.00 | 5.00 | −1.10 | 45.00 |
| pH in CaCl2 | Complete | 19,031 | 5.59 | 1.43 | 5.64 | 7.36 | −1.30 | 2.68 |
| pH in CaCl2 | Test | 6344 | 5.60 | 1.43 | 5.64 | 7.36 | −1.29 | 2.68 |
| pH in CaCl2 | Train | 12,687 | 5.59 | 1.43 | 5.65 | 7.36 | −1.30 | 2.68 |
| pH in H2O | Complete | 19,031 | 6.20 | 1.35 | 6.21 | 7.76 | −1.24 | 2.45 |
| pH in H2O | Test | 6344 | 6.20 | 1.35 | 6.21 | 7.76 | −1.24 | 2.45 |
| pH in H2O | Train | 12,687 | 6.20 | 1.35 | 6.21 | 7.76 | −1.24 | 2.45 |
| OC (g/kg) | Complete | 19,031 | 49.92 | 91.19 | 20.80 | 11.40 | 13.53 | 27.00 |
| OC (g/kg) | Test | 6344 | 49.34 | 90.40 | 20.60 | 11.40 | 13.95 | 26.70 |
| OC (g/kg) | Train | 12,687 | 50.22 | 91.58 | 20.80 | 11.40 | 13.32 | 27.10 |
| CaCO3 (g/kg) | Complete | 19,031 | 51.61 | 125.33 | 1.00 | 0.00 | 9.07 | 12.00 |
| CaCO3 (g/kg) | Test | 6344 | 51.71 | 125.59 | 1.00 | 0.00 | 9.00 | 12.00 |
| CaCO3 (g/kg) | Train | 12,687 | 51.56 | 125.21 | 1.00 | 0.00 | 9.10 | 12.00 |
| N (g/kg) | Complete | 19,031 | 2.92 | 3.75 | 1.70 | 1.20 | 16.85 | 1.70 |
| N (g/kg) | Test | 6344 | 2.91 | 3.73 | 1.70 | 1.20 | 16.31 | 1.70 |
| N (g/kg) | Train | 12,687 | 2.92 | 3.76 | 1.70 | 1.20 | 17.11 | 1.70 |
| P (mg/kg) | Complete | 19,031 | 30.05 | 32.81 | 22.30 | 0.00 | 173.50 | 32.00 |
| P (mg/kg) | Test | 6344 | 29.98 | 32.66 | 22.20 | 0.00 | 71.46 | 31.90 |
| P (mg/kg) | Train | 12,687 | 30.09 | 32.88 | 22.40 | 0.00 | 223.09 | 32.00 |
| K (mg/kg) | Complete | 19,030 | 196.99 | 229.29 | 136.40 | 0.00 | 166.43 | 176.80 |
| K (mg/kg) | Test | 6343 | 195.13 | 229.66 | 135.50 | 0.00 | 185.74 | 174.40 |
| K (mg/kg) | Train | 12,687 | 197.92 | 229.11 | 137.00 | 0.00 | 156.71 | 177.95 |
| CEC (cmol(+)/kg) | Complete | 19,031 | 15.75 | 14.48 | 12.40 | 0.00 | 34.08 | 13.30 |
| CEC (cmol(+)/kg) | Test | 6344 | 15.73 | 14.55 | 12.40 | 0.00 | 37.99 | 13.30 |
| CEC (cmol(+)/kg) | Train | 12,687 | 15.76 | 14.45 | 12.40 | 0.00 | 32.07 | 13.30 |
Table 2. Comparison table of R2 in the ablation experiments.
| Model | Clay | Silt | Sand | pH in CaCl2 | pH in H2O | OC | CaCO3 | N | P | K | CEC |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ResBlock | 0.8162 | 0.6408 | 0.7442 | 0.9210 | 0.9173 | 0.9574 | 0.9480 | 0.9101 | 0.3179 | 0.4810 | 0.7062 |
| ResNet + SE | 0.8482 | 0.7255 | 0.7912 | 0.9288 | 0.9200 | 0.9613 | 0.9512 | 0.9224 | 0.3590 | 0.5100 | 0.8019 |
| ReSE-AP Net | 0.8653 | 0.7467 | 0.8055 | 0.9384 | 0.9278 | 0.9656 | 0.9642 | 0.9393 | 0.4148 | 0.5568 | 0.8172 |
Table 3. Comparison table of RMSE in the ablation experiments.
| Model | Clay (%) | Silt (%) | Sand (%) | pH in CaCl2 | pH in H2O | OC (g/kg) | CaCO3 (g/kg) | N (g/kg) | P (mg/kg) | K (mg/kg) | CEC (cmol(+)/kg) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ResBlock | 6.065 | 10.477 | 12.536 | 0.410 | 0.396 | 18.832 | 25.086 | 1.120 | 25.412 | 170.121 | 7.877 |
| ResNet + SE | 5.401 | 9.643 | 11.987 | 0.385 | 0.371 | 17.808 | 24.412 | 0.941 | 24.634 | 165.195 | 6.553 |
| ReSE-AP Net | 4.773 | 9.204 | 11.511 | 0.354 | 0.364 | 16.871 | 22.711 | 0.922 | 23.214 | 149.404 | 6.185 |
Table 4. Comparison table of R2.
| Model | Clay | Silt | Sand | pH in CaCl2 | pH in H2O | OC | CaCO3 | N | P | K | CEC |
|---|---|---|---|---|---|---|---|---|---|---|---|
| PLSR | 0.763 | 0.549 | 0.649 | 0.895 | 0.886 | 0.917 | 0.903 | 0.868 | 0.277 | 0.371 | 0.724 |
| Ridge | 0.746 | 0.505 | 0.610 | 0.872 | 0.864 | 0.907 | 0.883 | 0.845 | 0.249 | 0.335 | 0.688 |
| SVR | 0.755 | 0.522 | 0.583 | 0.622 | 0.642 | 0.937 | 0.903 | 0.902 | 0.196 | 0.277 | 0.738 |
| RF | 0.524 | 0.416 | 0.453 | 0.610 | 0.622 | 0.874 | 0.656 | 0.797 | 0.050 | 0.139 | 0.529 |
| XGBoost | 0.562 | 0.423 | 0.491 | 0.684 | 0.711 | 0.893 | 0.765 | 0.810 | 0.082 | 0.161 | 0.561 |
| VGG-11 | 0.855 | 0.739 | 0.785 | 0.928 | 0.920 | 0.961 | 0.959 | 0.932 | 0.364 | 0.480 | 0.808 |
| TCN | 0.834 | 0.705 | 0.767 | 0.938 | 0.928 | 0.955 | 0.954 | 0.926 | 0.280 | 0.469 | 0.769 |
| ResNet-152 | 0.734 | 0.557 | 0.618 | 0.834 | 0.849 | 0.941 | 0.942 | 0.891 | 0.160 | 0.338 | 0.716 |
| Transformer | 0.850 | 0.720 | 0.770 | 0.940 | 0.930 | 0.960 | 0.960 | 0.940 | 0.410 | 0.600 | 0.830 |
| ReSE-AP Net | 0.865 | 0.747 | 0.806 | 0.938 | 0.928 | 0.966 | 0.964 | 0.939 | 0.415 | 0.557 | 0.817 |
Table 5. RMSE comparison table.
| Model | Clay (%) | Silt (%) | Sand (%) | pH in CaCl2 | pH in H2O | OC (g/kg) | CaCO3 (g/kg) | N (g/kg) | P (mg/kg) | K (mg/kg) | CEC (cmol(+)/kg) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| PLSR | 6.335 | 12.281 | 15.461 | 0.461 | 0.456 | 26.325 | 39.103 | 1.359 | 27.067 | 188.292 | 7.608 |
| Ridge | 6.558 | 12.869 | 16.293 | 0.511 | 0.498 | 27.911 | 42.915 | 1.475 | 27.593 | 193.599 | 8.094 |
| SVR | 6.442 | 12.653 | 16.860 | 0.877 | 0.809 | 22.850 | 39.121 | 1.173 | 28.551 | 201.808 | 7.416 |
| RF | 8.976 | 13.979 | 19.305 | 0.890 | 0.832 | 32.466 | 73.746 | 1.686 | 31.030 | 220.221 | 9.939 |
| XGBoost | 8.607 | 13.901 | 18.628 | 0.801 | 0.727 | 29.921 | 60.776 | 1.631 | 30.506 | 217.350 | 9.600 |
| VGG-11 | 4.951 | 9.355 | 12.101 | 0.383 | 0.381 | 17.923 | 25.494 | 0.977 | 25.385 | 171.063 | 6.344 |
| TCN | 5.304 | 9.933 | 12.598 | 0.356 | 0.363 | 19.310 | 26.800 | 1.016 | 27.010 | 172.962 | 6.963 |
| ResNet-152 | 6.709 | 12.174 | 16.122 | 0.579 | 0.543 | 21.994 | 30.377 | 1.237 | 29.176 | 193.032 | 7.598 |
| Transformer | 4.860 | 9.400 | 12.080 | 0.400 | 0.390 | 22.800 | 31.640 | 1.090 | 28.040 | 160.130 | 7.330 |
| ReSE-AP Net | 4.773 | 9.204 | 11.511 | 0.354 | 0.364 | 16.871 | 22.711 | 0.922 | 23.214 | 149.404 | 6.185 |