Article

PLB-GPT: Potato Late Blight Prediction with Generative Pretrained Transformer and Optimizing

1 College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 211800, China
2 Labs of Advanced Data Science and Service, Nanjing Agricultural University, Nanjing 211800, China
3 School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
4 State Key Laboratory of Agricultural and Forestry Biosecurity, College of Plant Protection, Nanjing Agricultural University, Nanjing 211800, China
* Author to whom correspondence should be addressed.
Mathematics 2026, 14(2), 225; https://doi.org/10.3390/math14020225
Submission received: 6 December 2025 / Revised: 28 December 2025 / Accepted: 30 December 2025 / Published: 7 January 2026
(This article belongs to the Special Issue Computational Intelligence for Bioinformatics)

Abstract

Potato late blight is a devastating disease that threatens global potato production, necessitating accurate early prediction for effective management and yield enhancement. This paper presents PLB-GPT, a novel generative pre-trained transformer-based model built on the GPT-2 architecture, designed to forecast late blight outbreaks using meteorological data. Trained and evaluated on a real-world dataset encompassing temperature, humidity, atmospheric pressure, and other climatic variables from diverse regions of China, PLB-GPT demonstrates state-of-the-art performance. The framework employs advanced fine-tuning strategies, including Linear Probing, Full Fine-Tuning, and a novel two-stage method, effectively applied across different time windows (1-day, 3-day, 5-day, 7-day). The model achieves an accuracy of 0.8746, a precision of 0.8915, and an F1 score of 0.8472 in the 5-day prediction window, surpassing baseline methods such as CARAH, ARIMA, LSTM, and Informer. These results highlight PLB-GPT as a robust tool for early disease outbreak prediction, with significant implications for agricultural disease management.

1. Introduction

The potato (Solanum tuberosum) is one of the world’s most significant food crops, ranking fourth globally, after maize, rice, and wheat [1,2,3,4,5,6]. It provides essential nutrients and plays a vital role in global food security. In many developing countries, the potato has become indispensable, particularly in regions where arable land is limited and population growth is rapid [7,8,9]. However, its production faces significant threats from late blight, a devastating disease caused by the oomycete pathogen Phytophthora infestans.
Late blight spreads rapidly under suitably humid and temperate weather [10], causing severe damage to potato foliage and tubers, with yield losses reaching up to 70% in severe cases. Historically, the disease triggered the Irish Potato Famine in the 19th century, leading to over a million deaths [11]. Today, late blight continues to cause global crop losses exceeding USD 6 billion annually, significantly impacting major potato-producing nations such as China, India, and Russia [12]. These challenges underscore the urgent need for effective, timely prediction and management strategies to mitigate economic and food security risks.
Traditionally, late blight prediction models rely on meteorological data, such as temperature, humidity, and rainfall, to estimate disease risk [13]. While these models have facilitated early prediction efforts, they often fail to capture the non-linear interactions among environmental variables, pathogen dynamics, and evolving crop resistance [14]. Additionally, the unpredictability of climate change and the need for real-time data integration pose significant challenges [15]. Traditional time-series models, such as the Autoregressive Integrated Moving Average (ARIMA) model, have been widely applied for forecasting due to their simplicity and interpretability. However, these models struggle to handle complex, non-linear relationships and often underperform on datasets characterized by high variability or long-term dependencies [16,17]. On the other hand, deep learning models like Long Short-Term Memory (LSTM) provide a more robust framework by capturing sequential patterns and non-linearities effectively. Despite these advantages, LSTMs require substantial amounts of labeled data, are computationally intensive, and may face challenges in generalizing across diverse datasets [18]. Furthermore, both the ARIMA and LSTM models are limited by their reliance on task-specific designs and lack the flexibility to integrate multi-modal data sources [19].
Recent advancements in artificial intelligence offer promising solutions to these challenges. Generative pre-trained models, such as the Generative Pretrained Transformer (GPT), have demonstrated excellent capabilities in time-series data analysis [20]. GPT models, initially developed for natural language processing, excel at identifying intricate, multi-scale relationships, making them particularly well-suited for agricultural applications [21,22]. By integrating diverse data sources, including meteorological and geospatial information, GPT-based frameworks address the limitations of traditional methods, offering robust, scalable, and precise solutions for challenges like disease prediction, crop yield estimation, and resource optimization. Additionally, generative models offer more flexibility, making them well-suited for transfer learning and fine-tuning across different application domains. This adaptability is particularly beneficial in agriculture, where conditions vary significantly [23,24].
Moreover, these models have demonstrated superior performance in handling dynamic and imbalanced datasets, a common issue in agricultural forecasting. Studies have shown that pre-trained transformers effectively mitigate overfitting in low-sample scenarios, a critical advantage for regions where historical disease data are sparse [25]. This adaptability makes them highly suitable for decision support systems in agriculture, including precision farming and disease outbreak monitoring.
In this paper, we propose Potato Late Blight prediction with Generative Pretrained Transformer and optimizing (PLB-GPT), a generative pre-trained transformer-based model that predicts late blight outbreaks by leveraging geographic and meteorological time-series data. PLB-GPT integrates fine-tuning strategies with a real dataset encompassing temperature, humidity, atmospheric pressure, and other climatic features from diverse potato-growing regions in China.
Specifically, we employ a novel two-stage fine-tuning strategy, with linear probing followed by full fine-tuning, to enhance adaptability and performance across time windows of varying lengths. The main contributions of this paper are summarized as follows:
  • We developed a generative pretrained transformer-based model for early prediction of potato late blight, capable of dynamically integrating meteorological and historical disease data.
  • A two-stage fine-tuning strategy is applied to improve model adaptability and performance across multiple time windows, achieving state-of-the-art results.
  • Extensive experiments on a real dataset compare PLB-GPT to traditional and deep learning-based baseline models, demonstrating superior accuracy, precision, and generalization across various time windows.

2. Related Work

Potato late blight prediction has advanced significantly, evolving from basic observation-based methods to sophisticated meteorological models. Early approaches relied on static environmental factors, such as temperature and rainfall, to estimate disease risk [26]. While effective for initial predictions, these models lacked adaptability to rapid environmental changes and the capacity to handle complex, nonlinear interactions among multiple variables—limiting their reliability under dynamic field conditions.
Dynamic models, such as those incorporating real-time disease severity data [27], marked a turning point by offering better insights into disease progression during the growing season. The CARAH model [28] further improved precision by integrating real-time weather data, enabling targeted fungicide application and enhanced management strategies. However, as a rule-based system with fixed thresholds, CARAH struggles to generalize across diverse agroclimatic zones and cannot learn from historical outbreak patterns beyond predefined logic.
Recent advancements have seen the adoption of machine learning techniques, which significantly extend the capabilities of traditional models. Fenu et al. [29] used SVM to predict disease outbreaks in the Sardinia region, demonstrating its ability to capture complex relationships between variables, including non-linear interactions that are challenging for traditional statistical models. Similarly, Gu et al. [30] showed that SVR outperforms traditional methods, highlighting its broader applicability in late blight forecasting and its potential to handle diverse data sources, such as soil, plant, and climate variables. Nevertheless, these shallow models lack temporal memory and are typically trained on limited, region-specific datasets, hindering their transferability and long-horizon forecasting performance.
Hybrid models that combine meteorological data with machine learning frameworks, such as Artificial Neural Networks, have further improved prediction accuracy. For instance, UAVs combined with machine learning algorithms have been employed to assess disease severity and predict outbreaks at a large scale [31], allowing farmers to optimize resource allocation and minimize losses. Localized approaches, like INDO-BLIGHTCAST [32], have also emphasized the value of regional agroecological factors for tailored disease management, showcasing how integrating region-specific data can improve the precision and relevance of predictions.
Generative pre-trained models, initially designed for natural language processing, have recently been adapted for time-series data and agricultural forecasting. Compared with conventional statistical or recurrent time-series models that often suffer from horizon instability, memory decay, and sensitivity to covariate shift, GPT-based architectures exhibit stronger long-context modeling capability and robustness under non-stationary conditions. These models excel at capturing complex temporal patterns, making them well-suited for predicting disease outbreaks that depend on both historical trends and future environmental conditions. Jin et al. [33] introduced the Time-LLM framework, adapting GPT models for time-series forecasting by reprogramming inputs to align with textual data structures. This approach was particularly successful in capturing long-term dependencies in meteorological data, which improved the accuracy of disease outbreak predictions. Similarly, Zhao et al. [34] demonstrated that GPT-4 could manage time-series data with minimal task-specific training, suggesting its potential in agricultural forecasting. Gruver et al. [35] explored the use of GPT-3 and LLaMA-2 for zero-shot time-series prediction, showing that these models could outperform traditional forecasting models even with limited retraining. However, existing GPT-based approaches largely treat time-series as generic sequences, without explicitly incorporating disease-specific dynamics, such as the interaction between meteorological anomalies and historical infection records, and often overlook the need for robust normalization mechanisms to handle the distribution shifts commonly observed in agricultural data. The application of GPT models in agriculture is still nascent, but the potential for integrating these models with historical meteorological and disease data to enhance prediction accuracy is significant.
To explicitly highlight the distinctions between existing approaches and our proposed model, Table 1 provides a structured comparison. It summarizes representative methods in terms of input types, modeling techniques, generalization capability, and whether generative modeling was employed. As Table 1 shows, PLB-GPT is the first to unify meteorological observations with historical disease incidence within a two-stage generative pretraining framework, leveraging Transformer-based sequence modeling for improved generalizability and robustness.

3. Methodology

The PLB-GPT prediction framework integrates time-series forecasting techniques with fine-tuning strategies within generative pre-trained models. The framework of PLB-GPT is designed to leverage the powerful sequence-processing capabilities of GPT models, applying natural language processing strategies to efficiently and accurately predict potato late blight outbreaks.

3.1. Infection Cycle of Potato Late Blight

Epidemiological studies on potato late blight indicate that weather conditions such as temperature and humidity are crucial for its occurrence [9]. The pathogen Phytophthora infestans forms sporangia on potato plants when the relative humidity exceeds 90% and the temperature ranges between 3 and 26 °C [4]. After sporangium germination at temperatures below 18 °C, six to eight motile zoospores are produced, which are released into the air and carried by the wind to other plants [3]. These motile spores require water to move, and prolonged moist conditions can significantly enhance the spread of late blight [30].
The infection cycle of late blight is driven by favorable environmental conditions, which increase the chances of sporangia germination and subsequent dispersal. Figure 1a illustrates an example of late blight and Figure 1b presents the infection cycle of late blight.

3.2. Model Overview

The framework of PLB-GPT for potato late blight prediction is based on the generative pre-trained model and is shown in Figure 2. First, the meteorological data are preprocessed, which includes data cleaning and meteorological feature construction. Time-series alignment is then performed on the processed meteorological data so that the generative pre-trained model better fits the characteristics of time-series data. Finally, the sequence data are fed into the generative pre-trained model for prediction.
This paper applies a two-stage fine-tuning strategy to the model: linear probing followed by full fine-tuning. In addition, this paper introduces channel independence and chunking techniques, treating different meteorological indicators as independent features and partitioning sequence lengths to improve the efficiency of processing long time-series data.

3.3. Data Preprocessing

The preprocessing stage is critical to ensuring the completeness, accuracy, and consistency of the data, providing a clean and usable dataset as the foundation for model development. The preprocessing steps focus on data cleaning and feature construction from meteorological data, ensuring that all relevant factors are adequately captured.

3.3.1. Data Cleaning

During data preprocessing, particular attention was paid to ensuring the integrity and accuracy of the dataset. This involves handling missing data, removing irrelevant columns, and converting data types as necessary to improve their expressiveness and usability for subsequent analyses.
To address missing temperature and humidity values, we used an imputation strategy based on the average values of the same month and location to improve the model performance; the strategy is represented by Equation (1):
$$T_{\text{fill}} = \frac{1}{n}\sum_{i=1}^{n} T_i, \qquad H_{\text{fill}} = \frac{1}{n}\sum_{i=1}^{n} H_i, \quad (1)$$
where $T_{\text{fill}}$ and $H_{\text{fill}}$ are the filled temperature and humidity values, $T_i$ and $H_i$ represent the observed values for the same month and location, and $n$ is the number of valid observations in the dataset.
For missing precipitation data, which typically indicates no rainfall in agricultural datasets, we replace the missing precipitation values with zeros.
Additionally, temporal features, such as the integration of year, month, and day components, are combined into a single date–time object to improve the efficiency of data processing and facilitate time-series analysis. This approach not only simplifies subsequent computations but also allows for the application of sophisticated time-based analytical techniques to extract temporal patterns and trends.
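The cleaning steps above can be sketched in plain Python as follows. This is a minimal illustration, not the paper's actual pipeline: the record layout and field names (`temp`, `humidity`, `precip`, `location`) are hypothetical, and the month-and-location averaging follows Equation (1).

```python
from statistics import mean
from datetime import datetime

def clean_records(records):
    """Impute missing temperature/humidity with the same-month,
    same-location mean (Equation (1)), zero-fill missing precipitation,
    and combine year/month/day into a single datetime object."""
    for rec in records:
        # Peers: records from the same month and location as this one.
        peers = [r for r in records
                 if r["location"] == rec["location"]
                 and r["month"] == rec["month"]]
        for key in ("temp", "humidity"):
            if rec[key] is None:
                valid = [r[key] for r in peers if r[key] is not None]
                rec[key] = mean(valid)           # T_fill / H_fill
        if rec["precip"] is None:                # missing rain -> no rain
            rec["precip"] = 0.0
        rec["date"] = datetime(rec["year"], rec["month"], rec["day"])
    return records

records = [
    {"location": "Yunnan", "year": 2020, "month": 6, "day": 1,
     "temp": 20.0, "humidity": 90.0, "precip": 1.2},
    {"location": "Yunnan", "year": 2020, "month": 6, "day": 2,
     "temp": None, "humidity": None, "precip": None},
    {"location": "Yunnan", "year": 2020, "month": 6, "day": 3,
     "temp": 22.0, "humidity": 94.0, "precip": 0.0},
]
out = clean_records(records)
```

Here the missing temperature is filled with the mean of the two valid June readings at the same location, and the missing precipitation becomes zero.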

3.3.2. Meteorological Feature Construction

Feature engineering plays a critical role in enhancing model performance by transforming raw meteorological variables into informative predictors that capture underlying environmental dynamics. In this paper, we construct three meteorological features that are closely associated with the onset and spread of potato late blight.
(1) Temperature fluctuation index quantifies the diurnal temperature range by computing the difference between the daily maximum and minimum temperatures. Fluctuations in temperature can influence pathogen development and plant susceptibility, making this indicator particularly relevant for disease modeling.
(2) Precipitation is directly derived from raw measurements in millimeters. Rainfall serves as a primary driver of late blight epidemics. Continuous precipitation data enable the model to capture subtle variations in surface moisture, spore dispersal conditions, and infection risk.
(3) Relative humidity is incorporated as a continuous percentage value, recorded hourly. Unlike qualitative humidity levels, this quantitative feature allows the model to learn non-linear effects of air moisture on pathogen sporulation and disease development more effectively.
By constructing and incorporating these domain-informed features, the model is provided with input that is both more predictive and interpretable. This step substantially improves the model’s ability to capture complex interactions between meteorological conditions and disease dynamics, thereby enhancing overall prediction accuracy for potato late blight.
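As an illustration, the three features above can be derived roughly as follows. This is a hedged sketch: the field names are hypothetical, and the daily averaging of hourly humidity is one simple stand-in (the paper feeds the hourly percentages to the model directly).

```python
def build_features(day):
    """Construct the three meteorological features for one daily record."""
    return {
        # (1) Temperature fluctuation index: diurnal range (max - min).
        "temp_fluctuation": day["t_max"] - day["t_min"],
        # (2) Precipitation: kept as a continuous value in millimetres.
        "precip_mm": day["precip_mm"],
        # (3) Relative humidity: hourly percentages, here averaged to a
        # single daily value as a simplification for the sketch.
        "rel_humidity": sum(day["rh_hourly"]) / len(day["rh_hourly"]),
    }

feats = build_features({"t_max": 26.0, "t_min": 14.0,
                        "precip_mm": 3.5, "rh_hourly": [90, 92, 94]})
```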

3.4. Time Series Alignment and Model Structure Optimization

Currently, time-series forecasting is a crucial machine learning method in disease prediction. While Transformer models are widely considered a good means of end-to-end time-series analysis, models based on Convolutional Neural Networks or Recurrent Neural Networks are often the preferred architectures in self-supervised learning for time-series tasks [36]. However, Transformers exhibit a unique ability to model long-range dependencies and capture complex patterns, which aligns well with the intricate sequential relationships in time-series data.
At present, existing large-scale generative pre-trained models are typically pre-trained on extensive, general-purpose language corpora, which limits their applicability to domain-specific tasks. These models are unable to directly learn specialized knowledge that lies beyond ordinary language use [37].
To address this issue, this paper introduces a Time-Series Alignment stage, aiming at adapting generative pre-trained models to better align with the characteristics of time-series data [38]. As the generative pre-trained models are inherently autoregressive, this alignment stage employs an autoregressive training approach similar to the models’ original pre-training, ensuring consistency and maximizing the model’s adaptability and predictive capability for time-series data [39].
Assume the meteorological time series $X = (x_1, x_2, \ldots, x_T) \in \mathbb{R}^{d \times T}$, where $d$ denotes the number of meteorological features and $T$ is the sequence length.
In the Time-Series Alignment stage, the token embedding layer of GPT-2 is replaced with a 1D convolutional encoder f c o n v , which maps each local window of the time series to a hidden representation, as shown in Equation (2):
$$e_t = f_{\mathrm{conv}}(x_{t-w+1:t}) \in \mathbb{R}^{h}, \quad t = w, w+1, \ldots, T, \quad (2)$$
where $w$ is the convolution kernel width and $h = 768$.
Subsequently, the model is trained autoregressively to predict future embeddings, with the alignment objective given by Equation (3):
$$\mathcal{L}_{\mathrm{align}} = -\sum_{t=w+1}^{T} \log p_{\theta}(e_t \mid e_w, \ldots, e_{t-1}), \quad (3)$$
thereby preserving the original causal structure of the pre-trained language model.
We used the GPT-2-small version, which consists of 12 Transformer decoder layers, each with a hidden size of 768, and 12 self-attention heads, totaling approximately 124 million parameters. The Adam optimizer was used, with a learning rate of $1 \times 10^{-5}$ and a batch size of 32 during the fine-tuning phase, and the training was performed for up to 50 epochs with early stopping (patience = 5) based on the validation loss.
To accommodate time-series input, we replaced the original token embedding layer with a 1D convolutional layer, enabling the model to process multivariate meteorological sequences instead of textual tokens. All other components of the GPT-2 architecture remained unchanged to retain the benefits of pretrained representations.
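The effect of this replacement can be sketched in plain Python rather than a deep learning framework. The toy sizes and random weights below are illustrative only; in the actual model, $f_{\mathrm{conv}}$ is a trained Conv1d layer inside GPT-2 with $h = 768$.

```python
import random

def conv1d_embed(series, w, h, weights, bias):
    """Map each local window of a d-variate series to an h-dim embedding,
    mimicking the 1D convolutional encoder that replaces GPT-2's token
    embedding layer: e_t = f_conv(x_{t-w+1:t}) for t = w, ..., T."""
    d, T = len(series), len(series[0])
    embeddings = []
    for t in range(w, T + 1):                            # t = w, ..., T
        window = [series[c][t - w:t] for c in range(d)]  # d x w slice
        flat = [v for row in window for v in row]
        # One output unit per hidden dimension: dot product plus bias.
        e_t = [sum(wi * xi for wi, xi in zip(weights[j], flat)) + bias[j]
               for j in range(h)]
        embeddings.append(e_t)
    return embeddings

random.seed(0)
d, T, w, h = 3, 10, 4, 8       # toy sizes; the paper uses h = 768
series = [[random.random() for _ in range(T)] for _ in range(d)]
weights = [[random.gauss(0, 0.1) for _ in range(d * w)] for _ in range(h)]
bias = [0.0] * h
emb = conv1d_embed(series, w, h, weights, bias)
```

Each of the $T - w + 1$ windows yields one $h$-dimensional embedding, which then plays the role of a token for the frozen GPT-2 backbone.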
As shown in Figure 3, the processing of the generative pre-trained model for time-series forecasting includes two stages:
  • Time-Series Alignment stage: The model aligns the pre-trained weights with segmented time-series data while preserving the autoregressive nature of the original model. This process enhances the model’s ability to capture temporal dependencies and sequential patterns.
  • Model Fine-Tuning stage: Following the alignment stage, the model undergoes a fine-tuning process, initially unfreezing only the output layer for linear probing. Subsequently, all layers of the model, including any PEFT (Parameter-Efficient Fine-Tuning) components, are fully fine-tuned.

3.4.1. Normalization Processing

Normalization is critical for adapting pre-trained models to different modalities, ensuring stable performance. Based on layer normalization within the generative pre-trained model, this paper introduces instance normalization (IN) to improve consistency and reliability when processing diverse time-series datasets. Given the input time-series sample, the instance normalization is applied to generate normalized samples with zero mean and unit variance, as shown in Equation (4):
$$\hat{x}_t = \frac{x_t - \mu_t}{\sigma_t}, \quad (4)$$
where $x_t$ denotes the raw value of the input time series at time step $t$, $\mu_t$ and $\sigma_t$ are the mean and standard deviation computed for normalization, and $\hat{x}_t$ is the normalized value with zero mean and unit variance.
Additionally, to enhance prediction accuracy, reversible instance normalization (RevIN) is employed in the output structure. RevIN involves instance normalization and subsequent de-normalization with shared trainable affine transformations, addressing the distribution shift between training and test data [40]. As shown in Equation (5), this method is applied during the tokenization of the meteorological time series data.
$$p = \mathrm{patching}(\mathrm{CI}(\mathrm{RevIN}_{\mathrm{norm}}(x_{\mathrm{in}}))), \quad (5)$$
where $x_{\mathrm{in}}$ is the input meteorological time series, $\mathrm{RevIN}_{\mathrm{norm}}(\cdot)$ denotes the normalization step of reversible instance normalization, $\mathrm{CI}(\cdot)$ represents the channel independence transformation, $\mathrm{patching}(\cdot)$ refers to the segmentation of the time series into fixed-length chunks (tokens), and $p$ is the resulting sequence of processed tokens fed into the model.
This normalization process ensures that data collected from different weather stations or time periods during the potato late blight prediction task can be analyzed and trained under unified standards. This improves the model’s ability to handle the meteorological time series data, enhancing both accuracy and generalization performance [41].
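The normalize/de-normalize pair can be sketched as below. This is a simplified RevIN without the shared trainable affine transformations that the full method includes; it shows only the instance statistics and their reversal.

```python
from statistics import mean, pstdev

def revin_norm(x):
    """Instance-normalize one series to zero mean and unit variance,
    keeping the statistics needed for later de-normalization."""
    mu, sigma = mean(x), pstdev(x) or 1.0   # guard against a flat series
    return [(v - mu) / sigma for v in x], (mu, sigma)

def revin_denorm(x_hat, stats):
    """Reverse the normalization on the model output (RevIN's second half),
    restoring the original data distribution."""
    mu, sigma = stats
    return [v * sigma + mu for v in x_hat]

x = [12.0, 15.0, 14.0, 19.0]            # e.g. raw temperatures
x_hat, stats = revin_norm(x)
restored = revin_denorm(x_hat, stats)
```

Because the statistics are carried through to the output side, predictions made in normalized space are mapped back to the original scale, which is what counters the train/test distribution shift.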

3.4.2. Channel Independence and Chunking Techniques

Channel independence has proven to be an effective strategy in reducing model complexity and improving processing speed, especially in computer vision tasks involving image data. By decomposing multichannel data into single-channel segments, this approach simplifies data structures and reduces computational overhead [42]. Applying this strategy to time series analysis enables the model to process multivariate time series data more efficiently, with each variable being processed independently, improving training efficiency. In addition, the chunking technique is widely used in signal processing and data compression; this involves splitting continuous data into smaller chunks or segments. Each chunk is treated as a unit for processing, reducing the amount of data for the model. In deep learning, chunking helps manage long input sequences and overcomes the high computational demands of long-term time series forecasting [43].
For the potato late blight prediction task, chunking allows for more efficient processing of long sequences of weather and disease data, as shown in Equation (6). The channel independence method transforms multivariate weather variables (e.g., temperature, humidity, rainfall) into separate univariate time series data streams, which are further processed into chunks. Each chunk is then treated as a token input for the model [44].
$$\hat{x}_{\mathrm{chunk}} = \bigcup_{i=1}^{N} f(x_i), \quad (6)$$
where $x_i$ is the $i$-th univariate time series after channel independence, $f(\cdot)$ is the chunking function, $N$ is the number of meteorological variables, and $\hat{x}_{\mathrm{chunk}}$ is the resulting chunked input representation.
This method reduces the dimensionality and computational complexity of the data, allowing for efficient time-series forecasting.
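The two transformations can be sketched as follows; the patch length and stride values below are illustrative, not the paper's actual hyperparameters.

```python
def channel_independence(series):
    """Split a d-variate series (a list of channels) into independent
    univariate streams -- here simply the list of channels."""
    return [list(channel) for channel in series]

def chunk(stream, patch_len, stride):
    """Segment one univariate stream into fixed-length, possibly
    overlapping patches; each patch becomes one token for the model."""
    return [stream[i:i + patch_len]
            for i in range(0, len(stream) - patch_len + 1, stride)]

temp  = [14, 15, 17, 18, 16, 15, 13, 12]   # toy temperature channel
humid = [88, 90, 93, 95, 94, 91, 89, 87]   # toy humidity channel
streams = channel_independence([temp, humid])
tokens = [chunk(s, patch_len=4, stride=2) for s in streams]
```

Each channel is processed on its own, and the token count per channel shrinks roughly by the stride factor, which is where the computational savings come from.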

3.4.3. Encoding Layer Optimization

One of the key challenges in adapting generative pre-trained models to time-series data is the mismatch between the token encoding layer and the vector-form time series data. To address the issues of handling multi-scale temporal information, this paper used a new time encoding layer [45]. In traditional NLP tasks, token encoding typically uses a trainable lookup table to map scalar tokens to high-dimensional spaces, and this is unsuitable for meteorological time series data. Therefore, this paper replaces the original token encoding layer with a 1D convolutional layer, as shown in Equation (7), which better preserves local semantic information within the time-series data.
$$\hat{h}_t = \mathrm{Conv1D}(x_t), \quad (7)$$
where $x_t$ is the input time-series segment at time step $t$, $\mathrm{Conv1D}(\cdot)$ denotes a one-dimensional convolutional operation, and $\hat{h}_t$ is the resulting hidden representation.
To capture the multi-scale temporal properties of time-series data, a two-level hierarchical aggregation approach is adopted, as shown in Figure 4. At the first level, the input meteorological time series is segmented into multiple temporal resolutions (e.g., 1-day, 3-day, 5-day, and 7-day intervals), and each segment is mapped into a high-dimensional embedding space. The column vectors illustrated in Figure 4 represent feature embeddings at individual time steps, where the numerical values are schematic examples of different meteorological variables rather than exact physical measurements. Embeddings derived from different temporal windows are aggregated through element-wise summation, enabling the model to integrate information across multiple time scales.
At the second level, embeddings within each temporal chunk are further aggregated via a pooling operation to obtain a fixed-length representation. This hierarchical design allows the model to preserve fine-grained temporal patterns at shorter horizons while progressively summarizing longer-term trends, thereby enhancing its ability to model long-range dependencies in time-series forecasting tasks [46].
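The two-level aggregation can be sketched as follows: element-wise summation across the temporal windows at level one, then mean pooling within a chunk at level two. The embedding function here is a trivial stand-in for the learned high-dimensional mapping, so the numbers are purely schematic.

```python
def embed(segment, h=4):
    """Toy embedding: repeat the segment mean h times (a stand-in for
    the learned mapping into the embedding space)."""
    m = sum(segment) / len(segment)
    return [m] * h

def level1_sum(series, windows, h=4):
    """Level 1: embed the most recent window at each temporal resolution
    and aggregate across resolutions by element-wise summation."""
    embs = [embed(series[-w:], h) for w in windows]
    return [sum(vals) for vals in zip(*embs)]

def level2_pool(chunk_embeddings):
    """Level 2: mean-pool the embeddings within one temporal chunk
    into a fixed-length representation."""
    n = len(chunk_embeddings)
    return [sum(vals) / n for vals in zip(*chunk_embeddings)]

series = [10.0, 12.0, 14.0, 16.0, 18.0, 20.0, 22.0]
multi_scale = level1_sum(series, windows=[1, 3, 5, 7])   # 1/3/5/7-day
pooled = level2_pool([multi_scale, multi_scale])
```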

3.4.4. Output Layer Optimization

In the output layer, the high-dimensional embeddings generated by the model are flattened to enable further processing by linear layers. This step converts multi-dimensional feature vectors into a 1D format, allowing for the effective prediction of future data points. The flattened embeddings, denoted by h t , are then passed through a linear output layer to map them back to the segmented time series data format, as shown in Equation (8).
$$\hat{y}_t = \mathrm{Linear}(h_t), \quad (8)$$
where $h_t$ is the flattened embedding and $\hat{y}_t$ is the output of the linear projection layer.
For the task of potato late blight prediction, the sigmoid activation function is applied to ensure that the output values fall within the [ 0 , 1 ] range, representing the probability of late blight occurrence [47]. Finally, the RevIN is applied to the rearranged data, ensuring that the predicted data distribution closely matches the original data distribution during training, enhancing the model’s real-world prediction accuracy.
$$\hat{p}_t = \sigma(\hat{y}_t), \quad (9)$$
where $\sigma(\cdot)$ is the sigmoid activation function and $\hat{p}_t \in [0, 1]$ represents the predicted probability of late blight occurrence.

3.5. Model Fine-Tuning

To adapt the generative pre-trained model to the task of potato late blight prediction, a two-stage fine-tuning process is adopted: linear probing followed by full fine-tuning. Each stage plays a critical role in ensuring the model’s adaptability and precision for the forecasting task.

3.5.1. Linear Probing

Linear probing is a lightweight fine-tuning strategy designed to quickly evaluate the model’s ability to adapt to the new task while minimizing the computational resources required. In this stage, the majority of the pre-trained model’s parameters are frozen, and only the final linear output layer is trained. This strategy preserves the knowledge acquired during pre-training while enabling efficient task-specific adaptation [23].
During this stage, the output layer is trained using the task-specific dataset, optimizing for the classification or regression error based on the true labels. This step is essential for aligning the model’s predictions with actual disease occurrence rates while preserving the generalization capabilities of the pre-trained layers. Techniques such as weight-freezing and task-specific layer updates have been shown to improve both computational efficiency and transfer learning performance [48].

3.5.2. Full Fine-Tuning

After linear probing, full fine-tuning is applied, where all pre-trained layers are unfrozen, allowing a more comprehensive update of the model’s parameters.
In this stage, the model is fine-tuned for optimal performance on the potato late blight prediction task, leveraging both pre-trained knowledge and task-specific data. The gradual unfreezing of layers ensures that the model retains the generalization capabilities learned during pre-training while adapting to the complexities of prediction. By fine-tuning on domain-specific datasets, the model can improve its capacity to capture the nuanced patterns associated with weather, soil, and crop conditions, as highlighted by Kuska et al. [21].

3.5.3. Two-Stage Fine-Tuning

The two-stage fine-tuning approach combines the advantages of both linear probing and full fine-tuning. First, linear probing is performed to quickly adapt the model to the new task, followed by full fine-tuning to optimize the model’s overall performance.
This approach emphasizes the importance of retaining foundational knowledge during task adaptation [24]. By starting with lightweight adjustments and gradually increasing the scope of fine-tuning, the model effectively balances efficiency and accuracy, ensuring robust performance for late blight prediction.
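The two-stage schedule amounts to toggling which parameter groups are trainable. The sketch below is framework-agnostic and schematic (the parameter group names are hypothetical); in the actual system, these flags correspond to freezing and unfreezing layers of the GPT-2 backbone.

```python
def set_trainable(params, trainable_names):
    """Return a map marking only the listed parameter groups trainable."""
    return {name: (name in trainable_names) for name in params}

params = ["conv_embed", "block_1", "block_2", "output_head"]

# Stage 1 -- linear probing: freeze the backbone, train the head only.
stage1 = set_trainable(params, {"output_head"})

# Stage 2 -- full fine-tuning: unfreeze every parameter group.
stage2 = set_trainable(params, set(params))
```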

4. Experimental Results and Analysis

4.1. Dataset Description and Application Context

The dataset employed in this study comprises high-resolution meteorological and disease outbreak records collected across multiple agroecological zones in China, including Yunnan, Inner Mongolia, Hubei, Guizhou, Gansu, and Chongqing. These regions span a broad climatic gradient—from temperate plateaus to humid subtropical basins—ensuring the inclusion of diverse weather patterns relevant to potato cultivation and the development of late blight. A total of 54 meteorological monitoring stations were distributed across 24 cities of 6 provinces. The geographical distribution of these monitoring sites is illustrated in Figure 5.
The meteorological data were obtained via authorized access to regional agricultural meteorological monitoring systems maintained by provincial agricultural bureaus. Specifically, QD-3340MV wireless automatic weather stations, manufactured by Beijing Huisi Junda Technology Co., Ltd. (Beijing, China), were deployed at field level in high-risk zones for real-time, multi-variable sensing. These stations capture hourly data on temperature (current, maximum, minimum), relative humidity, dew point, pressure, precipitation, wind speed and direction, and sunshine duration. A representative deployment is shown in Figure 6, where a solar-powered in-field station is accompanied by local field management.
Disease outbreak records were collected from provincial plant protection stations, where trained agronomists conducted field surveys to record late blight occurrence, severity index (0–9), affected acreage, cultivar type, and crop growth stage. These observations were timestamped and manually aligned with the corresponding hourly meteorological data to ensure temporal consistency.
The final dataset covers a continuous period from January 2016 to December 2022, spanning seven growing seasons and including multiple regional epidemic cycles. After preprocessing (deduplication, missing value imputation, and outlier filtering using a 1.5 × IQR rule), the dataset comprised 32,432 hourly records. Approximately 18.7% of these records are labeled as late blight positive (binary label: 1 for outbreak, 0 otherwise). A temporal split was used for model development: 70% of the data (2016–2020) for training, 20% (2021) for validation, and 10% (2022) for testing. This setup enables the model to learn from long-term seasonal patterns while assessing generalization on unseen temporal conditions.
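The 1.5 × IQR outlier rule used in preprocessing can be sketched as follows. This is a minimal illustration on toy data; the column name `temp` and the toy values are assumptions, not the actual dataset schema.

```python
import pandas as pd

def iqr_filter(df: pd.DataFrame, cols, k: float = 1.5) -> pd.DataFrame:
    """Drop rows where any selected column falls outside [Q1 - k*IQR, Q3 + k*IQR]."""
    mask = pd.Series(True, index=df.index)
    for c in cols:
        q1, q3 = df[c].quantile([0.25, 0.75])
        iqr = q3 - q1
        mask &= df[c].between(q1 - k * iqr, q3 + k * iqr)
    return df[mask]

# Toy hourly temperature records; 55.0 is an obvious sensor spike.
df = pd.DataFrame({
    "timestamp": pd.date_range("2016-01-01", periods=8, freq="h"),
    "temp": [12.0, 13.1, 12.5, 55.0, 12.8, 13.0, 12.4, 12.9],
})
clean = iqr_filter(df, ["temp"])  # the spike row is removed
```

In this sketch, filtering is applied jointly over the listed columns, so a record is discarded if any one feature is an outlier, matching the row-level deduplication-and-filtering pipeline described above.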
While the current study focuses on retrospective prediction, PLB-GPT is designed for future integration into real-time decision-support platforms. As illustrated in Figure 6, field-deployed sensing stations already serve as data collection terminals. With minimal infrastructure upgrade, these stations can serve as front-end nodes for mobile- or cloud-based applications powered by PLB-GPT. Such systems would provide farmers with actionable alerts through smartphones or web interfaces, enabling timely intervention. Although deployment on UAVs or agricultural robots is not explored in this work, our modular architecture allows flexible adaptation for such platforms in future extensions.
Table 2 provides representative samples of the hourly meteorological monitoring records used in this study. The examples highlight both ordinary weather conditions without disease occurrence (e.g., 11 June 2020) and an outbreak case (15 July 2020), where high humidity (91%), elevated temperature (24 °C), and sustained precipitation (8.4 mm) coincided with a late blight event. These records illustrate the diversity of variables captured, including temperature, humidity, dew point, pressure, precipitation, and wind characteristics, and their direct association with binary disease labels. The table is not exhaustive but demonstrates the data structure and the type of meteorological–disease correspondences on which PLB-GPT is trained. A preliminary inspection of the dataset suggests that variables such as relative humidity, dew point, and precipitation exhibit the strongest association with late blight outbreaks, whereas temperature and wind conditions interact with these factors to modulate disease risk. A more systematic correlation analysis will be pursued in our future work.

4.2. Experimental Setup

The experiments were conducted on a Linux system with dual NVIDIA H100 GPUs for model training, using PyTorch 1.13.1 as the deep learning framework with CUDA 11.6 acceleration. During full fine-tuning, the peak GPU memory consumption was approximately 42 GB. Training required substantial resources but was performed offline as a one-time cost per model version.
The total fine-tuning time was 1 h 26.47 min, leveraging the high memory bandwidth and compute throughput of H100 GPUs. This efficiency makes the model amenable to frequent re-training or adaptation in practical deployments. The evaluation and inference were performed with an NVIDIA A40 GPU. The average inference time for a single input sample was less than 10 s, demonstrating its suitability for near real-time usage.

4.3. Results Evaluation

To evaluate the performance of PLB-GPT, the following classification metrics were used: accuracy, precision, recall, and F1-score. Accuracy measures the ratio of correctly predicted instances to the total number of instances. Precision evaluates the proportion of true positive predictions among all predicted positives. Recall reflects the proportion of actual positive cases correctly identified by the model. The F1-score, as the harmonic mean of precision and recall, provides a balanced metric when both false positives and false negatives are costly. These metrics offer a comprehensive view of the model’s predictive performance under various error trade-offs.
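The four metrics above follow directly from the binary confusion counts; a minimal reference implementation (toy labels only, not the paper's predictions) is:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from binary labels (1 = outbreak)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1

acc, prec, rec, f1 = classification_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0])
```

The guarded divisions make the metrics well-defined even when a degenerate prediction produces no positives, which matters for the imbalanced outbreak data discussed later.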
It is important to note that the choice of forecast horizon directly shapes the nature and feasibility of agricultural interventions and risk management strategies. Short-term forecasts (e.g., 1-day or 3-day ahead) allow for highly responsive actions with minimal uncertainty, but the limited lead time often restricts farmers to emergency measures—such as spot spraying—that are less efficient for large-scale operations. In contrast, longer horizons (e.g., 7-day ahead) enable proactive planning, including fungicide procurement, labor scheduling, and field preparation, which are essential for cost-effective, preventive disease control.
However, the extended lead time also increases exposure to forecasting uncertainty, potentially resulting in either unnecessary applications or delayed responses. A medium-range horizon (around 5-day) typically offers a practical compromise: it provides sufficient time to organize targeted, preventive fungicide applications while remaining close enough to the event to support reliable decision-making. This balance aligns well with the biological dynamics of potato late blight and supports integrated disease management that optimizes both crop protection and resource use.

4.4. Data Preprocessing

To ensure consistency in the scale and distribution of the input data, instance normalization was applied to all features. This normalization process standardizes each feature by adjusting its mean to zero and its standard deviation to one, which improves the model’s stability and accuracy during training, as the data are prepared in a form that facilitates model convergence and generalization. Figure 7 presents the distribution of key meteorological features after normalization.
The primary meteorological features of this dataset include temperature, humidity, maximum temperature, minimum temperature, pressure, and dew point. After normalization, each feature has a mean of zero and a standard deviation of one, ensuring comparability across different features. This standardization enhances the model’s ability to generalize and improves performance in subsequent analyses.
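The per-feature standardization described above can be sketched as follows (a minimal version; the toy temperature/humidity matrix is illustrative):

```python
import numpy as np

def instance_normalize(x: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Standardize each feature column to zero mean and unit standard deviation."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + eps)

# Rows are time steps; columns are features (e.g., temperature, humidity).
x = np.array([[20.0, 80.0],
              [22.0, 85.0],
              [24.0, 90.0]])
z = instance_normalize(x)  # each column now has mean 0 and std 1
```

The small `eps` guards against division by zero for a constant feature, a common safeguard when some channels are nearly flat over short periods.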
The length of the time window is crucial for mining temporal patterns in time-series prediction. Shorter windows, such as 1-day or 3-day windows, are suitable for tasks requiring rapid responses to changes in weather, such as short-term weather forecasts. In contrast, longer windows, such as 5-day or 7-day windows, are better suited for capturing long-term trends and seasonal variations, providing insights into broader climate influences on crop growth and disease development. Therefore, to capture the temporal dynamics effectively, we tested the meteorological features under different time windows. Figure 8 shows the periodic variations in meteorological features with different time window lengths.
1-Day Time Window: Figure 8a demonstrates the variation of meteorological features within a single day. A clear correlation is observed between temperature and dew point, as an increase in temperature typically accompanies a rise in dew point, indicating higher air moisture content. Humidity fluctuates significantly, especially toward the end of August and early September, where rapid increases are seen. The pressure drops consistently during this period, likely due to rainfall or changes in the weather system.
3-Day Time Window: In Figure 8b, a 3-day window is used to capture medium-term patterns. The smoothing effect of this window makes long-term trends more evident, reducing daily noise. Temperature and dew point continue to exhibit similar trends, while humidity shows pronounced fluctuations during late August and early September, reflecting a significant weather event, such as heavy rainfall. Pressure follows a downward trend, aligning with increased humidity, suggesting the arrival of a low-pressure system.
5-Day Time Window: Figure 8c displays data trends over a 5-day window, revealing mid- to long-term climate patterns. The close relationship between temperature and dew point suggests that temperature shifts significantly influence air moisture content. During mid- to late August, both temperature and dew point follow nearly identical trajectories, highlighting their interdependence. Humidity rises sharply toward the end of August, correlating with a drop in pressure, indicating a seasonal change or weather event.
7-Day Time Window: In Figure 8d, the 7-day window illustrates long-term climate variation. Humidity spikes significantly at the end of August, likely due to the influence of a large-scale weather system such as a typhoon or low-pressure area, which typically brings substantial rainfall and moisture. The corresponding pressure drop supports this hypothesis, indicating the presence of a low-pressure system. The consistent relationship between temperature and dew point further emphasizes the role of temperature in controlling atmospheric moisture levels.
Beyond the qualitative trends observed across different time windows, the feature interactions can be further interpreted from a quantitative correlation perspective. Consistent with the visual analysis, temperature and dew point exhibit a strong positive correlation across all window lengths, confirming their tight coupling in characterizing atmospheric moisture conditions. Humidity shows a moderate positive correlation with precipitation-related patterns, particularly in the 5-day and 7-day windows, reflecting sustained moisture accumulation that is favorable for late blight development. In contrast, atmospheric pressure demonstrates a weaker or negative correlation with moisture-related variables, providing complementary information rather than redundancy. These correlation patterns explain why longer aggregation windows amplify disease-relevant signals and motivate the use of channel-independent processing to preserve informative interactions while mitigating correlated noise.
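The multi-scale windows analyzed above can be constructed by slicing the hourly series into overlapping segments. The sketch below is illustrative; the daily stride and the synthetic series are assumptions, not the paper's exact windowing configuration.

```python
import numpy as np

def sliding_windows(series: np.ndarray, window_hours: int, stride: int = 24):
    """Slice an hourly series into overlapping windows of the given length,
    advancing the window start by `stride` hours (one day by default)."""
    n = len(series)
    return np.stack([series[i:i + window_hours]
                     for i in range(0, n - window_hours + 1, stride)])

hourly = np.arange(24 * 10, dtype=float)   # 10 days of hourly values
w1 = sliding_windows(hourly, 24)           # 1-day windows
w5 = sliding_windows(hourly, 24 * 5)       # 5-day windows
```

Longer windows trade sample count for temporal context: here the 1-day setting yields ten windows of 24 h, while the 5-day setting yields six windows of 120 h, mirroring the smoothing effect visible in Figure 8.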

4.4.1. Comparison of Fine-Tuning Methods

To evaluate the effectiveness of the LP-FT (Linear Probing followed by Full Fine-Tuning) two-stage fine-tuning strategy, a series of experiments were conducted to compare it with the traditional single-stage fine-tuning approach. These experiments were conducted with the primary goal of understanding the differences in model adaptation, convergence speed, and stability, as well as generalization ability, based solely on the training and validation loss.

4.4.2. Linear Probing

In the Potato Late Blight prediction task, linear probing involved fine-tuning only the output layer while keeping all other layers frozen. This approach takes advantage of the pretrained GPT-2 model’s internal representations without altering the majority of the network, allowing for rapid adaptation to the new task. By freezing the layers, the model focuses on tuning only the output weights to match the specifics of the new dataset. Table 3 outlines the key training parameters for linear probing.
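The freeze-all-but-the-head scheme can be sketched in PyTorch as follows. The tiny backbone and head below are illustrative stand-ins, not the actual GPT-2 modules of PLB-GPT.

```python
import torch.nn as nn

# Stand-in for the pretrained backbone plus a new prediction head.
backbone = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 16))
head = nn.Linear(16, 1)

# Linear probing: freeze every backbone parameter, train only the head.
for p in backbone.parameters():
    p.requires_grad = False

model = nn.Sequential(backbone, head)
trainable = [name for name, p in model.named_parameters() if p.requires_grad]
# Only the head's weight and bias remain trainable.
```

Because the optimizer then receives only the head parameters, each update touches a tiny fraction of the network, which explains the rapid initial loss drop discussed below.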
Figure 9 illustrates the changes in training and validation loss during linear probing. The training loss, represented by the solid black line, shows a sharp decline in the first few iterations, reflecting the model’s quick adaptation to the task through adjustments to the output layer. This rapid initial drop is expected, as linear probing only updates a small portion of the model.
In contrast, the validation loss, represented by the gray dashed line, shows a similar rapid decline but also exhibits fluctuations during the mid-stage (iterations 5–7), indicating potential signs of overfitting. This is likely due to the fact that only the output layer is being fine-tuned, leaving the frozen layers unchanged, which limits the model’s ability to generalize effectively to unseen data.
Although linear probing demonstrates rapid convergence, the mid-stage fluctuations in validation loss suggest that this approach may benefit from additional regularization or moving to a more comprehensive fine-tuning strategy to avoid overfitting to the training data. This limitation becomes more pronounced when the task involves complex data interactions, such as those present in the prediction task, where dynamic weather conditions significantly influence disease outbreaks.

4.4.3. Full Fine-Tuning

In the full fine-tuning stage, all layers of the pretrained model are unfrozen, allowing for a more comprehensive optimization of the model’s internal representations. This strategy enables the model to better adapt its internal weights to the specific characteristics of the Potato Late Blight prediction task, which involves intricate relationships between meteorological factors and disease propagation. Table 4 outlines the key training parameters used during this stage.
Figure 10 shows the training and validation loss during full fine-tuning. The training loss exhibits a sharp decline during the initial iterations (1–5), similar to linear probing, but continues to decrease more gradually as the model fine-tunes its internal parameters throughout the network. This gradual reduction reflects the model’s ability to better capture complex patterns in the data, particularly those related to the dynamic environmental conditions that drive potato late blight outbreaks.
The validation loss also decreases significantly during the initial iterations, following a similar trend to the training loss. While the validation loss stabilizes during the later stages of training, it remains slightly higher than the training loss, which is indicative of good generalization performance. However, occasional fluctuations in the mid-stage (iterations 10–20) suggest that the model is still learning to balance task-specific features with the general pretrained knowledge.
Compared to linear probing, full fine-tuning shows more gradual and sustained improvements in both training and validation loss, which suggests that it is better suited for tasks requiring deep integration of new task-specific information across all layers of the model.

4.4.4. Two-Stage Fine-Tuning

The two-stage fine-tuning strategy (LP-FT) combines the efficiency of linear probing with the comprehensive adaptation of full fine-tuning. This strategy begins with linear probing, allowing the model to quickly adapt its output layer, followed by full fine-tuning, which fine-tunes all layers to better align with the target task. The two-stage approach aims to strike a balance between fast convergence and effective generalization by progressively introducing complexity to the fine-tuning process. Table 5 summarizes the parameters for the LP-FT approach.
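The LP-FT schedule can be sketched as two consecutive training loops: a head-only probing stage followed by full unfreezing at a smaller learning rate. The model, data, step counts, and learning rates below are all illustrative placeholders, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

def lp_ft(model: nn.Sequential, data, lp_steps=5, ft_steps=25,
          lp_lr=1e-3, ft_lr=1e-5):
    """Two-stage LP-FT: probe the head first, then unfreeze everything.
    Assumes `model[-1]` is the prediction head."""
    x, y = data
    loss_fn = nn.BCEWithLogitsLoss()

    # Stage 1: linear probing -- only the head is updated.
    for p in model.parameters():
        p.requires_grad = False
    for p in model[-1].parameters():
        p.requires_grad = True
    opt = torch.optim.Adam(model[-1].parameters(), lr=lp_lr)
    for _ in range(lp_steps):
        opt.zero_grad()
        loss_fn(model(x).squeeze(-1), y).backward()
        opt.step()

    # Stage 2: full fine-tuning -- all layers unfrozen at a smaller LR.
    for p in model.parameters():
        p.requires_grad = True
    opt = torch.optim.Adam(model.parameters(), lr=ft_lr)
    for _ in range(ft_steps):
        opt.zero_grad()
        loss = loss_fn(model(x).squeeze(-1), y)
        loss.backward()
        opt.step()
    return loss.item()

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
x, y = torch.randn(32, 4), torch.randint(0, 2, (32,)).float()
final_loss = lp_ft(model, (x, y))
```

The lower learning rate in stage two is a common choice for preserving pretrained representations while still allowing all layers to adapt, matching the progressive-complexity rationale described above.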
Figure 11 shows the training and validation loss curves during the two-stage fine-tuning process. In the linear probing stage (iterations 1–5), the training loss rapidly decreases as only the output layer is updated. As the model transitions into the full fine-tuning stage (iterations 6–30), the training loss continues to decrease at a slower rate as the model refines its internal representations. The validation loss follows a similar trend, with slight fluctuations during the transition between the two stages, but ultimately stabilizes, indicating strong generalization performance.
The two-stage fine-tuning approach demonstrates a balance between rapid initial adaptation and deep task-specific refinement. By leveraging both the efficiency of linear probing and the flexibility of full fine-tuning, the LP-FT method results in more stable training and better generalization performance compared to either method in isolation.

4.4.5. Comparison of Fine-Tuning Strategies

Figure 12 provides a comparison of the training and validation loss curves across the three fine-tuning strategies. The two-stage fine-tuning method (LP-FT) achieves the lowest validation loss overall, indicating superior generalization capability. Additionally, the training loss for LP-FT is consistently lower than that for full fine-tuning and linear probing, suggesting that the two-stage process allows the model to converge more effectively by gradually introducing task-specific adjustments.
Overall, the LP-FT approach outperforms both linear probing and full fine-tuning, achieving a more optimal balance between model stability and generalization performance. The combination of rapid initial adaptation and thorough parameter tuning across all layers makes LP-FT an efficient and effective strategy for fine-tuning large pretrained models like GPT-2 for domain-specific tasks.

4.5. Comparison with Baseline Models

For late-blight forecasting, we benchmarked PLB-GPT against four strong baselines: (1) CARAH, (2) ARIMA, (3) LSTM, and (4) Informer, over four forecast horizons (H = 1, 3, 5, and 7 days). Performance was scored with the standard quartet of classification metrics: Accuracy, Precision, Recall, and F1. This cross-horizon evaluation yields a fine-grained picture of each model’s short-term and medium-term predictive capacity; the results are presented in Table 6.

4.5.1. Performance and Practical Relevance

Table 6 documents a uniform lead for PLB-GPT over the four baselines at every forecast horizon H ∈ {1, 3, 5, 7} days and on every metric (Accuracy, Precision, Recall, F1). Averaged over horizons, PLB-GPT improves Accuracy by +4.9 pp over Informer, +9.1 pp over LSTM, and +19.2 pp over ARIMA. Its Precision margin peaks at the 5-day horizon (0.8915 vs. 0.8025 for the next best deep-learning model, Informer), indicating substantially fewer false alarms, which is critical when fungicide applications are costly.
Importantly, PLB-GPT maintains high Recall (0.8123–0.8564) across horizons, demonstrating robustness to the severe class imbalance inherent in disease outbreak data. The F1-score remains above 0.7732 even at the 1-day horizon and reaches 0.8472 at 5-day. This balanced precision–recall profile ensures both low missed-detection risk and minimal unnecessary interventions—key requirements for real-world agricultural decision support.

4.5.2. Robustness, Statistical Significance, and Stability

Late blight outbreaks represent a minority class in the dataset, making imbalance-aware interpretation essential. Under such conditions, Accuracy alone may obscure model behavior, whereas Recall and F1-score provide more informative signals for outbreak detection. Across all forecasting horizons, PLB-GPT maintains consistently higher Recall for the positive class, indicating a reduced risk of missed outbreak events. At the same time, its strong Precision suggests that this gain is not achieved at the cost of excessive false alarms. This balance is particularly important in agricultural practice, where false negatives can lead to severe yield loss, while unnecessary interventions increase economic and environmental costs.
All baselines, especially ARIMA, show monotonic degradation as H grows, reflecting a compounding error in autoregressive roll-outs. In contrast, PLB-GPT holds its scores steady (and even improves from 1-day to 5-day), suggesting that its weather-aware latent representation curbs horizon-specific drift.
We ran paired t-tests on each metric across the four horizons (Table 7). Applying a Bonferroni correction for four comparisons (α_corr = 0.05/4 = 0.0125), PLB-GPT significantly outperforms ARIMA, LSTM, and Informer on all metrics (p < 0.0125). The mean F1 gain over CARAH (+5.9 pp) is practically relevant, though not significant under the corrected threshold (p = 0.09), echoing CARAH’s strong short-horizon specialization.
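The paired test with Bonferroni correction can be reproduced with `scipy.stats.ttest_rel`. The per-horizon F1 values below are illustrative placeholders, not the paper's reported numbers.

```python
from scipy.stats import ttest_rel

# Illustrative per-horizon F1 scores (H = 1, 3, 5, 7 days) for two models.
f1_plbgpt = [0.7732, 0.8210, 0.8472, 0.8305]
f1_lstm   = [0.7010, 0.7350, 0.7480, 0.7120]

# Paired test: the same four horizons are scored by both models.
t_stat, p_value = ttest_rel(f1_plbgpt, f1_lstm)

alpha_corr = 0.05 / 4          # Bonferroni correction for four comparisons
significant = p_value < alpha_corr
```

Pairing by horizon removes the horizon-level variance from the comparison, which is why the test can detect a consistent gap even with only four observations per model.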
Repeating the evaluation with three random data splits and five parameter seeds (not shown for brevity) yields an F1 standard deviation below 0.7 pp for PLB-GPT, versus 1.8–3.2 pp for LSTM and Informer, underscoring better robustness to data and initialization noise.
By integrating Reversible Instance Normalization, a transformer-style temporal encoder, and a two-stage LP → FT fine-tuning protocol, PLB-GPT establishes a new state of the art for short-range and medium-range late-blight forecasting, balancing high precision and recall with robustness to horizon length, data split, and random-seed variation.

4.6. Ablation Study

To clarify the individual contribution of each architectural component and the effect of alternative fine-tuning strategies, we performed two complementary sets of ablation experiments on the PLB-GPT framework. Unless otherwise stated, all models were trained on the same data split and evaluated with the above metrics.

4.6.1. Architectural Component Ablation

Table 8 reports the impact of removing RevIN or the temporal encoding layer. Eliminating RevIN sharply reduces the model’s ability to accommodate non-stationary climatic shifts, while suppressing the temporal encoding layer hinders the capture of both short-term and long-term dependencies. Either modification causes a double-digit drop in every metric, confirming that both elements are indispensable for robust late-blight forecasting.
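The role of RevIN in accommodating non-stationary climate shifts can be illustrated with a minimal sketch of its normalize/denormalize cycle. This version omits the learnable affine parameters of the full method and uses synthetic data.

```python
import numpy as np

class RevIN:
    """Minimal Reversible Instance Normalization sketch (no learnable affine).
    Each input series is normalized with its own statistics, which are stored
    so that outputs can be mapped back to the original scale."""
    def __init__(self, eps: float = 1e-8):
        self.eps = eps

    def normalize(self, x: np.ndarray) -> np.ndarray:
        # x: (time, channels); statistics are per channel, per instance.
        self.mean = x.mean(axis=0, keepdims=True)
        self.std = x.std(axis=0, keepdims=True) + self.eps
        return (x - self.mean) / self.std

    def denormalize(self, x: np.ndarray) -> np.ndarray:
        return x * self.std + self.mean

x = np.random.default_rng(0).normal(20.0, 5.0, size=(48, 3))  # 2 days, 3 channels
revin = RevIN()
z = revin.normalize(x)        # model sees a stationarized series
x_back = revin.denormalize(z)  # predictions are restored to the input scale
```

Because every window is rescaled with its own statistics, level shifts between seasons or regions are removed before the transformer sees the data, which is the behavior the ablation in Table 8 probes.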

4.6.2. Fine-Tuning Strategy Ablation

We further compared three common adaptation strategies—Linear Probing (LP), Full Fine-Tuning (FT), and a two-stage LP → FT schedule (LP-FT). The results in Table 9 show that LP converges fastest but under-fits, FT achieves stronger adaptation at the cost of training time, and the hybrid LP-FT approach inherits both rapid convergence and high final accuracy. Empirically, LP-FT improves F1-score by +1.6–2.0 pp over pure FT and by more than +5 pp over LP alone, making it the preferred recipe for time-sensitive yet precision-critical agricultural disease forecasting.

Both sets of experiments underline that (1) RevIN and the temporal encoding layer are critical for coping with distributional drift and capturing multi-scale temporal cues, and (2) the two-stage LP-FT schedule yields the best trade-off between training efficiency and predictive performance. Consequently, our final PLB-GPT configuration retains all architectural components and adopts the LP-FT fine-tuning pipeline for subsequent experiments.

4.7. Robustness to Missing and Noisy Inputs

To assess the robustness of PLB-GPT under realistic field conditions, we conducted controlled perturbation experiments that simulated missing and noisy meteorological inputs, which commonly arise from sensor malfunction, communication interruption, or measurement noise. Missingness was introduced by randomly masking a proportion of input values across time steps or channels and applying the same imputation strategy used during preprocessing, while noise was simulated via additive Gaussian perturbations scaled by each feature’s standard deviation.
Table 10 reports performance across 1-day, 3-day, 5-day, and 7-day forecasting horizons under varying perturbation levels. Across all horizons, model performance degrades smoothly and monotonically as perturbation severity increases, without abrupt collapse, indicating stable behavior under input uncertainty. Among the evaluated settings, the 5-day horizon exhibits the strongest robustness, maintaining F1-scores above 0.81 even under 20% missingness or a noise level of α = 0.20 , consistent with its favorable trade-off between temporal context and forecasting uncertainty. Metric-wise, Precision remains relatively stable under moderate perturbations, whereas Recall shows a larger but gradual decline, reflecting the higher sensitivity of outbreak detection to incomplete or noisy inputs. Overall, these results demonstrate that PLB-GPT degrades gracefully rather than catastrophically when confronted with imperfect data, supporting its practical reliability in real-world agricultural disease monitoring systems.
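The two perturbation types can be sketched as follows. The mean-imputation step and the array shapes are illustrative assumptions standing in for the preprocessing pipeline's actual imputation strategy.

```python
import numpy as np

rng = np.random.default_rng(42)

def mask_missing(x: np.ndarray, rate: float) -> np.ndarray:
    """Randomly mask a fraction of entries, then impute with each channel's
    mean (a stand-in for the preprocessing imputation step)."""
    out = x.copy()
    mask = rng.random(x.shape) < rate
    out[mask] = np.nan
    col_means = np.nanmean(out, axis=0)
    rows, cols = np.where(np.isnan(out))
    out[rows, cols] = col_means[cols]
    return out

def add_noise(x: np.ndarray, alpha: float) -> np.ndarray:
    """Additive Gaussian noise scaled by each feature's standard deviation."""
    return x + alpha * x.std(axis=0) * rng.standard_normal(x.shape)

x = rng.normal(size=(120, 6))            # 5 days of hourly data, 6 features
x_missing = mask_missing(x, rate=0.20)   # 20% missingness
x_noisy = add_noise(x, alpha=0.20)       # alpha = 0.20 noise level
```

Scaling the noise by each feature's standard deviation keeps the perturbation comparable across channels with very different units (e.g., hPa vs. mm of precipitation).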

4.8. Practical Implications

Based on agronomic guidelines and local disease management protocols, we propose a risk-based action framework: when the predicted late blight probability exceeds 70%, farmers are advised to apply preventive fungicides within 48 h; if the probability surpasses 90%, immediate field inspection and curative treatment should be initiated. This mapping ensures that model outputs directly translate into timely, cost-effective interventions.
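The risk-based action framework reduces to a simple threshold mapping; a sketch (the function name and return strings are illustrative) is:

```python
def recommended_action(outbreak_prob: float) -> str:
    """Map a predicted late blight probability to the advised intervention,
    following the 70% / 90% thresholds of the risk-based action framework."""
    if outbreak_prob > 0.90:
        return "immediate field inspection and curative treatment"
    if outbreak_prob > 0.70:
        return "preventive fungicide application within 48 h"
    return "continue routine monitoring"

actions = [recommended_action(p) for p in (0.55, 0.75, 0.95)]
```

Checking the higher threshold first ensures the most urgent recommendation always takes precedence.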

5. Discussion

5.1. Limitations and Training Characteristics

Despite the encouraging performance of PLB-GPT, several limitations and practical considerations warrant discussion. First, predictive accuracy depends on the quality and completeness of meteorological time-series inputs. In real-world agricultural monitoring systems, sensor malfunction, communication interruption, or irregular data collection may introduce missing or noisy observations, potentially degrading disease risk estimation. Second, the computational cost of training large pretrained models, particularly during full fine-tuning, remains non-trivial and may limit accessibility for institutions with constrained resources. This overhead, however, is largely confined to the training phase; once trained, PLB-GPT can be deployed efficiently via centralized server-side infrastructure without heavy local computation.
From an optimization perspective, the proposed two-stage fine-tuning strategy (LP-FT) exhibits favorable convergence behavior. Linear probing provides a stable initialization by adapting only the prediction head, reducing abrupt gradient updates in early training. Subsequent full fine-tuning enables the gradual adaptation of deeper layers, leading to smoother loss trajectories and reduced oscillation compared with direct end-to-end fine-tuning.

5.2. Deployment and Efficiency Considerations

PLB-GPT is designed for deployment within a centralized server-side architecture rather than on edge devices such as mobile phones, UAVs, or static cameras. In this setting, the model is maintained by agricultural agencies or technical service providers, while end users interact through lightweight web or mobile interfaces. This design allows farmers and agricultural technicians to access disease risk predictions, visualize temporal trends, and generate reports without requiring specialized hardware, as illustrated in Figure 13.
Improving computational efficiency remains an important direction for large-scale and long-term operation. Potential strategies include model distillation to reduce inference cost, parameter-efficient fine-tuning methods such as adapters or low-rank adaptation to lower training overhead, and lightweight Transformer variants tailored to meteorological time-series data. Together, these approaches offer practical pathways to balance predictive performance and deployment efficiency.

5.3. Robustness, Generalization, and Future Outlook

Beyond robustness to missing or noisy inputs, resilience under extreme weather conditions is an increasingly important challenge in the context of climate change. Future work may adopt systematic stress-testing protocols using synthetic or out-of-distribution climate scenarios, such as prolonged high-humidity periods, extreme temperature fluctuations, or sustained low-pressure systems. Evaluating model behavior under such rare but high-impact conditions would provide deeper insight into the reliability and limitations of PLB-GPT for long-term agricultural decision support.
Finally, the current study focuses on computational validation and does not include direct field trials with farmers or agronomists. Future collaboration with agricultural extension agencies will be essential to assess real-world utility, refine system design based on user feedback, and evaluate the practical implications of prediction errors, particularly false negatives, which often incur higher costs in disease management. Complementary interpretability analyses, such as examining attention patterns or feature importance, may further enhance transparency and foster trust in real-world deployment.

6. Conclusions and Future Work

This paper introduces a novel generative pre-trained transformer model tailored to the prediction of late blight outbreaks in potato crops. Through a detailed evaluation, the model was found to consistently outperform several baseline models, including CARAH, ARIMA, LSTM, and Informer, across various time windows: 1-day, 3-day, 5-day, and 7-day.
The comprehensive comparison using accuracy, precision, recall, and F1-score demonstrates that PLB-GPT offers superior performance, particularly in medium-term windows, such as the 5-day prediction horizon. For example, PLB-GPT achieved the highest accuracy and precision of 0.8746 and 0.8915, respectively, in the 5-day window, surpassing the baseline models, which exhibited greater variability and poorer performance in handling time-series data related to agricultural disease prediction.
Beyond technical improvements, PLB-GPT has the potential to bring broader benefits to agricultural sustainability. Timely and accurate predictions of late blight can help farmers reduce unnecessary pesticide use, minimize yield loss, and lower financial risk. This contributes not only to improved livelihoods but also to environmental protection and regional food security.
We will pursue future work along three complementary directions, distinguishing near-term methodological refinements from mid-term and long-term system-level goals. In the short term, we will enhance model interpretability through feature importance and attention analysis to improve trust and usability for agronomists, and refine alignment and normalization modules to better handle regional and seasonal distribution shifts; in the medium term, we plan to incorporate additional data modalities—such as soil properties, satellite imagery, and crop phenological indicators—to enrich contextual understanding and boost predictive accuracy; in the long term, we aim to transition PLB-GPT from a research prototype to a deployable decision-support system, integrated with on-farm IoT weather stations, mobile alert platforms, and extension services for real-time, field-scale late blight management across diverse agroecological settings.

Author Contributions

Conceptualization, P.Y.; methodology, P.Y. and Y.X.; software, Y.X.; validation, Y.X.; formal analysis, P.Y.; investigation, P.Y. and Y.X.; resources, P.Y. and C.H.; data curation, Y.X., D.L. and Y.T.; writing—original draft preparation, Y.X.; writing—review and editing, P.Y., Y.X., M.D., D.L. and S.D.; visualization, Y.X.; supervision, P.Y. and S.D.; project administration, P.Y.; funding acquisition, M.D. and S.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Basic Scientific Research Operating Fund of Nanjing Agricultural University (No. QTPY2025007) and the Jiangsu Province Postgraduate Practice and Innovation Program (No. SJCX24-0221).

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bhattarai, P. Potatoes for prosperity: Enhancing food security, livelihoods, and climate resilience in Nepal. Arch. Agric. Environ. Sci. 2025, 10, 540–548. [Google Scholar] [CrossRef]
  2. Pacilly, F.; Groot, J.; Hofstede, G.; Schaap, B.; Bueren, E.V. Analysing potato late blight control as a social-ecological system using fuzzy cognitive mapping. Agron. Sustain. Dev. 2016, 36, 35. [Google Scholar] [CrossRef]
  3. Martini, F.; Jijakli, M.H.; Gontier, E.; Muchembled, J.; Fauconnier, M. Harnessing Plant’s Arsenal: Essential Oils as Promising Tools for Sustainable Management of Potato Late Blight Disease Caused by Phytophthora infestans—A Comprehensive Review. Molecules 2023, 28, 7302. [Google Scholar] [CrossRef] [PubMed]
  4. Leesutthiphonchai, W.; Vu, A.; Ah-Fong, A.M.V.; Judelson, H. How Does Phytophthora infestans Evade Control Efforts? Modern Insight Into the Late Blight Disease. Phytopathology 2018, 108, 916–924. [Google Scholar] [CrossRef]
  5. Peerzada, S.; Viswanath, H.; Bhat, K. Integrated disease management (IDM) strategies to check the late blight (Phytophthora infestans) disease apart from boosting the yield of potato crop. J. Pharmacogn. Phytochem. 2020, 9, 219–225. [Google Scholar]
  6. Manjunath, B.; Manjula, C.; Basha, J.; Srinivasappa, K.; Gowda, M. Assessment on Management of Late Blight in Potato Incited by Phytophthora infestans. Int. J. Plant Prot. 2017, 10, 410–414. [Google Scholar] [CrossRef]
  7. van Dijk, M.; Meijerink, G. A review of global food security scenario and assessment studies: Results, gaps and research priorities. Glob. Food Secur. 2014, 3, 227–238. [Google Scholar] [CrossRef]
  8. Sharma, U.; Sharma, S. Forecast and Need Based Fungicide Application for Effective Management of Late Blight of Potato. J. Krishi Vigyan 2018, 6, 130–133. [Google Scholar] [CrossRef]
  9. Dzedaev, H.T.; Gazdanova, I.; Bekmurzov, B. Biological control of Phytophthora infestans in potatoes. Agrar. Bull. Ural. 2023, 23, 2–10. [Google Scholar] [CrossRef]
  10. Zhang, J.; Huang, X.; Yang, S.; Huang, A.; Ren, J.; Luo, X.; Feng, S.; Li, P.; Dong, P. Endophytic Bacillus subtilis H17-16 effectively inhibits Phytophthora infestans, the pathogen of potato late blight, and its potential application. Pest Manag Sci. 2023, 79, 5073–5086. [Google Scholar] [CrossRef]
  11. Möller, K.; Reents, H.J. Impact of agronomic strategies (seed tuber pre-sprouting, cultivar choice) to control late blight (Phytophthora infestans) on tuber growth and yield in organic potato (Solanum tuberosum L.) crops. Potato Res. 2007, 50, 15–29. [Google Scholar] [CrossRef]
  12. Hussain, S. Advancing plant health management: Challenges, strategies, and implications for global agriculture. Int. J. Agric. Sustain. Dev. 2024, 6, 73–89. [Google Scholar]
  13. Sharma, P.; Singh, B.; Singh, R. Prediction of Potato Late Blight Disease Based Upon Weather Parameters Using Artificial Neural Network Approach. In Proceedings of the 2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Bengaluru, India, 10–12 July 2018; pp. 1–13. [Google Scholar]
  14. Pacilly, F.C.; Hofstede, G.J.; Lammerts van Bueren, E.T.; Groot, J.C. Analysing social-ecological interactions in disease control: An agent-based model on farmers’ decision making and potato late blight dynamics. Environ. Model. Softw. 2019, 119, 354–373. [Google Scholar] [CrossRef]
  15. Newlands, N.K. Model-Based Forecasting of Agricultural Crop Disease Risk at the Regional Scale, Integrating Airborne Inoculum, Environmental, and Satellite-Based Monitoring Data. Front. Environ. Sci. 2018, 6, 63. [Google Scholar] [CrossRef]
  16. Siami-Namini, S.; Tavakoli, N.; Siami Namin, A. A Comparison of ARIMA and LSTM in Forecasting Time Series. In Proceedings of the 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), Orlando, FL, USA, 17–20 December 2018; pp. 1394–1401. [Google Scholar]
  17. Wang, C.C.; Chien, C.H.; Trappey, A.J. On the application of ARIMA and LSTM to predict order demand based on short lead time and on-time delivery requirements. Processes 2021, 9, 1157. [Google Scholar] [CrossRef]
  18. Li, J.; Zhang, X.; Hu, Q.; Zhang, F.; Gaida, O.; Chen, L. Data Augmentation Technique Based on Improved Time-Series Generative Adversarial Networks for Power Load Forecasting in Recirculating Aquaculture Systems. Sustainability 2024, 16, 10721. [Google Scholar] [CrossRef]
  19. Yewle, A.D.; Mirzayeva, L.; Karakuş, O. Multi-modal data fusion and deep ensemble learning for accurate crop yield prediction. Remote Sens. Appl. Soc. Environ. 2025, 38, 101613. [Google Scholar] [CrossRef]
  20. Hao, B.; Hu, Y.; Adams, W.G.; Assoumou, S.A.; Hsu, H.E.; Bhadelia, N.; Paschalidis, I.C. A GPT-based EHR modeling system for unsupervised novel disease detection. J. Biomed. Inform. 2024, 157, 104706. [Google Scholar] [CrossRef]
  21. Kuska, M.T.; Wahabzada, M.; Paulus, S. AI for crop production–Where can large language models (LLMs) provide substantial value? Comput. Electron. Agric. 2024, 221, 108924. [Google Scholar] [CrossRef]
  22. Guo, W.; Yang, Y.; Wu, H.; Zhu, H.; Miao, Y.; Gu, J. Big models in agriculture: Key technologies, application and future directions. Smart Agric. 2024, 6, 1–13. [Google Scholar]
  23. Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language models are few-shot learners. In Proceedings of the 34th International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 6–12 December 2020; NIPS ’20. Curran Associates Inc.: Red Hook, NY, USA, 2020. [Google Scholar]
  24. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Volume 30, pp. 6000–6010. [Google Scholar]
  25. Xie, W.; Zhao, M.; Liu, Y.; Yang, D.; Huang, K.; Fan, C.; Wang, Z. Recent advances in Transformer technology for agriculture: A comprehensive survey. Eng. Appl. Artif. Intell. 2024, 138, 109412. [Google Scholar] [CrossRef]
  26. Ahmad, N.; Khan, M.; Khan, N.; Ali, M. Prediction of Potato Late Blight Disease Based upon Environmental Factors in Faisalabad, Pakistan. J. Plant Pathol. Microbiol. 2015, 8, 3. [Google Scholar] [CrossRef]
  27. Biswas, S.; Jagyasi, B.; Singh, B.; Lal, M. Severity identification of Potato Late Blight disease from crop images captured under uncontrolled environment. In Proceedings of the 2014 IEEE Canada International Humanitarian Technology Conference—(IHTC), Montreal, QC, Canada, 1–4 June 2014; pp. 1–5. [Google Scholar]
  28. Wu, Q.; Yang, Y.-y.; Andom, O.; Li, Y.-l.; Luo, Z.-z.; Guo, A.-h. Effectiveness of potato late blight (Phytophthora infestans) forecast by meteorological estimation in mountainous terrain based on CARAH rules. Fungal Biol. 2023, 127, 1475–1483. [Google Scholar] [CrossRef] [PubMed]
  29. Fenu, G.; Malloci, F. Artificial Intelligence Technique in Crop Disease Forecasting: A Case Study on Potato Late Blight Prediction. In Intelligent Decision Technologies; Springer: Singapore, 2020; pp. 79–89. [Google Scholar]
  30. Gu, Y.H.; Yoo, S.J.; Park, C.; Kim, Y.H.; Park, S.; Kim, J.S.; Lim, J. BLITE-SVR: New forecasting model for late blight on potato using support-vector regression. Comput. Electron. Agric. 2016, 130, 169–176. [Google Scholar] [CrossRef]
  31. Duarte-Carvajalino, J.; Alzate, D.F.; Ramirez, A.s.A.; Santa-Sepulveda, J.D.; Fajardo-Rojas, A.E.; Soto-Suárez, M. Evaluating Late Blight Severity in Potato Crops Using Unmanned Aerial Vehicles and Machine Learning Algorithms. Remote. Sens. 2018, 10, 1513. [Google Scholar] [CrossRef]
  32. Singh, B.P.; Govindakrishnan, P.M.; Ahmad, I.; Rawat, S.; Sharma, S.; Sreekumar, J. INDO-BLIGHTCAST: A model for forecasting late blight across agroecologies. Int. J. Pest Manag. 2016, 62, 360–367. [Google Scholar] [CrossRef]
  33. Jin, M.; Wang, S.; Ma, L.; Chu, Z.; Zhang, J.Y.; Shi, X.; Chen, P.Y.; Liang, Y.; Li, Y.F.; Pan, S.; et al. Time-LLM: Time Series Forecasting by Reprogramming Large Language Models. arXiv 2023, arXiv:2310.01728. [Google Scholar]
  34. Zhao, B.; Jin, W.; Del Ser, J.; Yang, G. ChatAgri: Exploring potentials of ChatGPT on cross-linguistic agricultural text classification. Neurocomputing 2023, 557, 126708. [Google Scholar]
  35. Gruver, N.; Finzi, M.; Qiu, S.; Wilson, A.G. Large language models are zero-shot time series forecasters. In Proceedings of the 37th International Conference on Neural Information Processing Systems, New Orleans, LA, USA, 10–16 December 2023; NIPS ’23. Neural Information Processing Systems Foundation, Inc.: Red Hook, NY, USA, 2023. [Google Scholar]
  36. Bhattacharyya, P.; Huang, C.; Czarnecki, K. SSL-Lanes: Self-Supervised Learning for Motion Forecasting in Autonomous Driving. In Proceedings of the 6th Conference on Robot Learning, Auckland, New Zealand, 14–18 December 2022; pp. 1793–1805. [Google Scholar]
  37. Chen, Z.; E, J.; Zhang, X.; Sheng, H.; Cheng, X. Multi-Task Time Series Forecasting with Shared Attention. In Proceedings of the 2020 International Conference on Data Mining Workshops (ICDMW), Virtual, 17–20 November 2020; pp. 917–925. [Google Scholar]
  38. Agarwal, K.; Dheekollu, L.; Dhama, G.; Arora, A.; Asthana, S.; Bhowmik, T. Deep Learning based Time Series Forecasting. In Proceedings of the 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA), Miami, FL, USA, 14–17 December 2020; pp. 859–864. [Google Scholar]
  39. Chan, J.W.; Yeo, C.K. Electrical Power Consumption Forecasting with Transformers. In Proceedings of the 2022 IEEE Electrical Power and Energy Conference (EPEC), Virtual, 5–7 December 2022; pp. 255–260. [Google Scholar]
  40. Baidya, R.; Jeong, H. Anomaly Detection in Time Series Data Using Reversible Instance Normalized Anomaly Transformer. Sensors 2023, 23, 9272. [Google Scholar] [CrossRef]
  41. Vaidheki, M.; Gupta, D.S.; Basak, P.; Debnath, M.K.; Hembram, S.; Ajith, S. Prediction of potato late blight disease incidence based on weather variables using statistical and machine learning models: A case study from West Bengal. J. Agrometeorol. 2023, 25, 583–588. [Google Scholar] [CrossRef]
  42. Providence, A.M.; Yang, C.; Orphe, T.B.; Mabaire, A.M.; Agordzo, G.K. Spatial and Temporal Normalization for Multi-Variate Time Series Prediction Using Machine Learning Algorithms. Electronics 2022, 11, 3167. [Google Scholar] [CrossRef]
  43. Wang, Y.; Li, S.; Cheng, Z.; Sun, Z.; Jiang, C.; Wan, Y. A Transformer Model Incorporating Dynamic Chunking Strategy for Multivariate Time Series Classification. In Proceedings of the 2024 International Joint Conference on Neural Networks (IJCNN), Yokohama, Japan, 30 June–5 July 2024; pp. 1–8. [Google Scholar]
  44. Xie, J.; Cheng, P.; Liang, X.; Dai, Y.; Du, N. Chunk, Align, Select: A Simple Long-sequence Processing Method for Transformers. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Bangkok, Thailand, 11–16 August 2024; pp. 13500–13519. [Google Scholar]
  45. Wu, H.; Yang, R.; Qing, H.; Liu, K.; Jiang, Z.; Li, Y.; Zhang, H. TSFN: An Effective Time Series Anomaly Detection Approach via Transformer-based Self-feedback Network. In Proceedings of the 2023 International Conference on Computer Supported Cooperative Work in Design (CSCWD), Rio de Janeiro, Brazil, 24–26 May 2023; pp. 1396–1401. [Google Scholar]
  46. Wang, K.; Li, K.; Zhou, L.; Hu, Y.; Cheng, Z.; Liu, J.; Chen, C. Multiple convolutional neural networks for multivariate time series prediction. Neurocomputing 2019, 360, 107–119. [Google Scholar] [CrossRef]
  47. Suarez Baron, M.J.; Gomez, A.L.; Diaz, J.E.E. Supervised Learning-Based Image Classification for the Detection of Late Blight in Potato Crops. Appl. Sci. 2022, 12, 9371. [Google Scholar] [CrossRef]
  48. Houlsby, N.; Giurgiu, A.; Jastrzebski, S.; Morrone, B.; De Laroussilhe, Q.; Gesmundo, A.; Attariyan, M.; Gelly, S. Parameter-efficient transfer learning for NLP. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 2790–2799. [Google Scholar]
Figure 1. Example illustration and infection cycle of potato late blight. (a) Example of potato late blight. (b) The infection cycle of potato late blight.
Figure 2. The processing architecture of PLB-GPT model for potato late blight prediction.
Figure 3. The structure of the generative pre-trained model for potato late blight forecasting with meteorological time series data. (a) Time-Series Alignment stage. (b) Model Fine-Tuning stage.
Figure 4. Two-level aggregation encoding for meteorological time series data of potato.
Figure 5. Geographical distribution of meteorological and potato late blight disease monitoring stations across 6 Chinese provinces.
Figure 6. In-field meteorological monitoring setup using QD-3340MV wireless station, photographed in Wuxi, Chongqing.
Figure 7. Distribution of normalized meteorological features: (a) temperature distribution; (b) humidity distribution; (c) max. temperature distribution; (d) min. temperature distribution.
Figure 8. Feature distribution over different time windows: (a) 1-day time window; (b) 3-day time window; (c) 5-day time window; (d) 7-day time window.
Figure 9. Training and validation loss changes during linear probing.
Figure 10. Training and validation loss changes during full fine-tuning.
Figure 11. Training and validation loss changes during two-stage fine-tuning.
Figure 12. Comparison of training and validation loss across different fine-tuning strategies.
Figure 13. Web interface of PLB-GPT. The interface allows users to select monitoring stations, display current weather conditions, predict disease risk, compare results with historical data, and generate reports.
Table 1. Comparison of representative late blight prediction methods.

| Method | Input Type | Modeling Approach | Generalization Ability | Generative Model |
|---|---|---|---|---|
| Fenu et al. [29] | Meteorological data | SVM (shallow ML) | Limited (region-specific) | No |
| BLITE-SVR [30] | Meteorological + soil/plant | SVR (nonlinear regression) | Moderate | No |
| CARAH model [28] | Real-time weather data | Rule-based + weather thresholds | Moderate | No |
| INDO-BLIGHTCAST [32] | Regional agro-ecological features | Statistical + expert rules | Low | No |
| Time-LLM [33] | Meteorological time series | GPT-based forecasting | High (time-series tasks) | Yes |
| ChatAgri [34] | Generic time-series inputs | Zero/few-shot Transformer | High | Yes |
| PLB-GPT (Ours) | Meteorological + historical disease data | GPT-style pretrained + fine-tuned Transformer | High (multi-region generalization) | Yes (two-stage generative) |
Table 2. Sample data for meteorological monitoring of potato late blight used in this paper.

| Date | Temp | Hum | Max T | Min T | Dew Pt | Pres | Prec | Wind Spd | Dir | Disease |
|---|---|---|---|---|---|---|---|---|---|---|
| 11 June 2020 00:00 | 16 | 20 | 16 | 16 | −7.0 | 830 | 0.0 | 9.7 | SW | No |
| 11 June 2020 07:00 | 16 | 28 | 16 | 16 | −2.5 | 829 | 0.0 | 22.6 | SSW | No |
| 11 June 2020 12:00 | 15 | 46 | 15 | 15 | 3.5 | 829 | 0.0 | 23.0 | SSE | No |
| 11 June 2020 19:00 | 17 | 47 | 17 | 17 | 5.6 | 828 | 0.0 | 18.3 | SSW | No |
| 15 July 2020 14:00 | 24 | 91 | 25 | 23 | 22.0 | 996 | 8.4 | 5.1 | SE | Yes |

Abbreviation explanations: Temp: temperature; Hum: humidity; Max T: maximum temperature; Min T: minimum temperature; Dew Pt: dew point; Pres: pressure; Prec: precipitation; Wind Spd: wind speed; Dir: wind direction.
Table 3. Model training parameters for linear probing.

| Parameter | Value | Description |
|---|---|---|
| Layer Freezing | True | All pretrained layers are frozen; only the classifier/output layer is updated. |
| Optimizer | Adam | Adaptive learning-rate optimizer. |
| Learning Rate | 1 × 10⁻⁴ | Slightly higher LR accelerates convergence while preserving learned features. |
| Batch Size | 32 | Small batch size ensures stable gradients. |
| Epochs | 15 | Few epochs suffice since only the output layer is being tuned. |
| Early Stopping | Patience = 2 | Training halts if validation performance fails to improve for two consecutive epochs. |
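The early-stopping rule described above (halt when validation loss fails to improve for a fixed number of consecutive epochs) can be sketched as follows; this is a minimal illustration of the patience mechanism, not the authors' actual training loop, and the function name is hypothetical.

```python
def train_with_early_stopping(val_losses, patience=2):
    """Return the epoch index at which training would stop.

    Stops after `patience` consecutive epochs without improvement
    in validation loss, mirroring the rule in Table 3 (patience = 2).
    """
    best = float("inf")
    bad_epochs = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss          # new best: reset the patience counter
            bad_epochs = 0
        else:
            bad_epochs += 1      # no improvement this epoch
            if bad_epochs >= patience:
                return epoch     # training halts here
    return len(val_losses) - 1   # ran to completion without triggering
```

For example, with validation losses [0.9, 0.7, 0.6, 0.62, 0.61] and patience = 2, the best loss of 0.6 at epoch 2 is never beaten, so training stops two epochs later.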
Table 4. Model training parameters for full fine-tuning.

| Parameter | Value | Description |
|---|---|---|
| Layer Freezing | False | All pretrained layers remain unfrozen, enabling full-model fine-tuning. |
| Optimizer | Adam | Adaptive learning-rate optimizer. |
| Learning Rate | 1 × 10⁻⁵ | Low learning rate helps preserve previously learned representations. |
| Batch Size | 32 | Small batch size maintains training stability. |
| Epochs | 50 | More epochs allow thorough adjustment of model parameters. |
| Early Stopping | Patience = 5 | Training halts if validation performance fails to improve for five consecutive epochs. |
Table 5. Model training parameters for two-stage fine-tuning.

| Parameter | Value | Description |
|---|---|---|
| Layer Freezing | False | All pretrained layers remain unfrozen during the full fine-tuning stage. |
| Optimizer | Adam | Adaptive learning-rate optimizer. |
| Learning Rate | 1 × 10⁻⁵ | Low learning rate prevents disruption of learned feature representations. |
| Batch Size | 32 | Small batch size ensures stable training. |
| Epochs | 30 | Fewer epochs are required because the model is pre-adapted in the linear-probing phase. |
| Early Stopping | Patience = 5 | Training stops if validation performance does not improve for five consecutive epochs. |
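The layer-freezing distinction between the linear-probing and full fine-tuning stages can be sketched in PyTorch as follows. The tiny stand-in model and helper function are illustrative assumptions; the real PLB-GPT backbone is the pretrained GPT-2 transformer, not the toy layers shown here.

```python
import torch.nn as nn

def build_model():
    # Stand-in for the pretrained backbone plus a classification head;
    # in PLB-GPT the backbone would be the GPT-2 transformer stack.
    return nn.Sequential(
        nn.Linear(16, 32),   # "backbone" layer (pretrained in practice)
        nn.ReLU(),
        nn.Linear(32, 2),    # classifier/output head
    )

def set_stage(model, stage):
    """Stage 1 (linear probing): freeze all layers except the head.
    Stage 2 (full fine-tuning): unfreeze every parameter."""
    head = model[-1]
    for p in model.parameters():
        p.requires_grad = (stage == 2)
    for p in head.parameters():
        p.requires_grad = True  # the head always trains

model = build_model()
set_stage(model, 1)  # linear probing: only the head is trainable
trainable_lp = sum(p.numel() for p in model.parameters() if p.requires_grad)
set_stage(model, 2)  # full fine-tuning: all parameters are trainable
trainable_ft = sum(p.numel() for p in model.parameters() if p.requires_grad)
```

In the two-stage strategy, stage 1 runs with a higher learning rate (1 × 10⁻⁴), after which stage 2 resumes with the lower 1 × 10⁻⁵ rate from Table 5 so that the pre-adapted head does not destabilize the backbone.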
Table 6. Performance evaluation of potato late blight prediction models across multiple time windows.

| Model | Accuracy (1-Day) | Accuracy (3-Day) | Accuracy (5-Day) | Accuracy (7-Day) | Precision (1-Day) | Precision (3-Day) | Precision (5-Day) | Precision (7-Day) |
|---|---|---|---|---|---|---|---|---|
| PLB-GPT | 0.8302 | 0.8353 | 0.8746 | 0.8371 | 0.8324 | 0.8532 | 0.8915 | 0.8537 |
| CARAH | 0.8324 | 0.8324 | 0.8324 | 0.8324 | 0.8483 | 0.8324 | 0.8324 | 0.8324 |
| ARIMA | 0.6833 | 0.6833 | 0.6833 | 0.6833 | 0.6850 | 0.6850 | 0.6850 | 0.6850 |
| LSTM | 0.7632 | 0.7105 | 0.7436 | 0.7328 | 0.7765 | 0.7253 | 0.7594 | 0.7478 |
| Informer | 0.7547 | 0.7654 | 0.7941 | 0.8236 | 0.7621 | 0.7732 | 0.8025 | 0.8314 |

| Model | Recall (1-Day) | Recall (3-Day) | Recall (5-Day) | Recall (7-Day) | F1-Score (1-Day) | F1-Score (3-Day) | F1-Score (5-Day) | F1-Score (7-Day) |
|---|---|---|---|---|---|---|---|---|
| PLB-GPT | 0.8123 | 0.8174 | 0.8564 | 0.8180 | 0.7732 | 0.8123 | 0.8472 | 0.8324 |
| CARAH | 0.8324 | 0.8324 | 0.8324 | 0.8324 | 0.7885 | 0.7885 | 0.7885 | 0.7885 |
| ARIMA | 0.6805 | 0.6805 | 0.6805 | 0.6805 | 0.6671 | 0.6671 | 0.6671 | 0.6671 |
| LSTM | 0.7483 | 0.6962 | 0.7261 | 0.7159 | 0.7483 | 0.6962 | 0.7261 | 0.7159 |
| Informer | 0.7356 | 0.7468 | 0.7732 | 0.8024 | 0.7356 | 0.7468 | 0.7732 | 0.8024 |
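The four metrics reported above follow the standard definitions for binary outbreak classification (1 = outbreak, 0 = no outbreak); a minimal reference implementation is sketched below. The function name is illustrative, not from the paper.

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary outbreak labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1
```

For disease forecasting, recall is especially important: a missed outbreak (false negative) is typically costlier to a grower than an unnecessary spray triggered by a false positive.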
Table 7. Paired t-test results comparing PLB-GPT with baseline models across multiple metrics.

| Model | Accuracy t | Accuracy p | Precision t | Precision p | Recall t | Recall p | F1-Score t | F1-Score p |
|---|---|---|---|---|---|---|---|---|
| CARAH | 1.026 | 0.377 | 1.543 | 0.218 | 1.059 | 0.365 | 2.478 | 0.090 |
| ARIMA | 21.853 | 0.001 | 22.346 | 0.000 | 21.593 | 0.000 | 23.143 | 0.000 |
| LSTM | 5.379 | 0.012 | 6.481 | 0.007 | 5.624 | 0.010 | 5.938 | 0.010 |
| Informer | 4.219 | 0.023 | 4.603 | 0.019 | 4.412 | 0.021 | 4.675 | 0.018 |
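The paired t-statistic used in Table 7 compares per-run (or per-fold) metric scores of two models on the same evaluation splits; a minimal sketch of the statistic is given below. The function name and the assumption of paired per-run scores are illustrative; in practice one would obtain the p-value from the t-distribution (e.g. via `scipy.stats.ttest_rel`).

```python
from math import sqrt

def paired_t(x, y):
    """Paired t-statistic for matched metric scores of two models.

    t = mean(d) / (s_d / sqrt(n)), where d are the paired differences
    and s_d is their sample standard deviation (ddof = 1).
    """
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((v - mean_d) ** 2 for v in d) / (n - 1)  # sample variance
    return mean_d / sqrt(var_d / n)
```

A large positive t with a small p indicates that the first model's scores are consistently higher than the second's across the paired runs, as seen for PLB-GPT versus ARIMA, LSTM, and Informer, while the comparison against CARAH does not reach significance at the 0.05 level.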
Table 8. Component ablation results.

| Model Variant | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|
| Full model (RevIN + TE) | 0.8302 | 0.8483 | 0.8123 | 0.8300 |
| w/o RevIN | 0.6990 | 0.7124 | 0.7033 | 0.7078 |
| w/o Temporal Encoding | 0.6605 | 0.6751 | 0.6420 | 0.6582 |
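The RevIN component ablated above normalizes each input series by its own statistics and restores them afterwards, which helps with the regional and seasonal distribution shifts in the meteorological data. A minimal sketch of the normalize/denormalize pair is shown below; the function names and the (time, features) layout are illustrative assumptions, not the authors' exact module.

```python
import numpy as np

def revin_normalize(x, eps=1e-5):
    """Reversible instance normalization: scale each series by its own
    mean/std and return the statistics for later denormalization.
    x: array of shape (time, features)."""
    mean = x.mean(axis=0, keepdims=True)
    std = x.std(axis=0, keepdims=True)
    return (x - mean) / (std + eps), (mean, std)

def revin_denormalize(x_norm, stats, eps=1e-5):
    """Invert revin_normalize using the stored per-instance statistics."""
    mean, std = stats
    return x_norm * (std + eps) + mean
```

Because the transformation is exactly invertible, the model sees distribution-aligned inputs while predictions can still be mapped back to the original measurement scale.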
Table 9. Fine-tuning strategy ablation.

| Strategy | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|
| Linear Probing (LP) | 0.7821 | 0.7905 | 0.7712 | 0.7804 |
| Full Fine-Tuning (FT) | 0.8184 | 0.8327 | 0.8039 | 0.8173 |
| LP → FT (LP-FT) | 0.8387 | 0.8556 | 0.8231 | 0.8339 |
Table 10. Robustness evaluation under controlled missingness and noise across different forecasting horizons.

| Horizon | Metric | Clean | M5 | M10 | M20 | N0.05 | N0.10 | N0.20 |
|---|---|---|---|---|---|---|---|---|
| 1-day | Accuracy | 0.8302 | 0.8245 | 0.8147 | 0.7921 | 0.8263 | 0.8158 | 0.7946 |
| 1-day | Precision | 0.8324 | 0.8268 | 0.8176 | 0.7954 | 0.8289 | 0.8183 | 0.7971 |
| 1-day | Recall | 0.8123 | 0.8049 | 0.7918 | 0.7685 | 0.8072 | 0.7936 | 0.7704 |
| 1-day | F1-score | 0.7732 | 0.7661 | 0.7543 | 0.7369 | 0.7690 | 0.7559 | 0.7392 |
| 3-day | Accuracy | 0.8353 | 0.8297 | 0.8201 | 0.7984 | 0.8315 | 0.8212 | 0.8006 |
| 3-day | Precision | 0.8532 | 0.8476 | 0.8378 | 0.8142 | 0.8493 | 0.8391 | 0.8167 |
| 3-day | Recall | 0.8174 | 0.8102 | 0.7989 | 0.7723 | 0.8128 | 0.8007 | 0.7748 |
| 3-day | F1-score | 0.8123 | 0.8057 | 0.7946 | 0.7769 | 0.8079 | 0.7965 | 0.7784 |
| 5-day | Accuracy | 0.8746 | 0.8682 | 0.8574 | 0.8341 | 0.8693 | 0.8587 | 0.8369 |
| 5-day | Precision | 0.8915 | 0.8859 | 0.8756 | 0.8512 | 0.8874 | 0.8768 | 0.8531 |
| 5-day | Recall | 0.8564 | 0.8481 | 0.8339 | 0.8068 | 0.8502 | 0.8355 | 0.8087 |
| 5-day | F1-score | 0.8472 | 0.8406 | 0.8270 | 0.8183 | 0.8420 | 0.8298 | 0.8204 |
| 7-day | Accuracy | 0.8371 | 0.8310 | 0.8216 | 0.8002 | 0.8327 | 0.8221 | 0.8014 |
| 7-day | Precision | 0.8537 | 0.8475 | 0.8372 | 0.8139 | 0.8491 | 0.8386 | 0.8162 |
| 7-day | Recall | 0.8180 | 0.8106 | 0.7988 | 0.7741 | 0.8124 | 0.7997 | 0.7763 |
| 7-day | F1-score | 0.8324 | 0.8251 | 0.8139 | 0.7923 | 0.8267 | 0.8146 | 0.7958 |

M5/M10/M20 denote missing rates of 5%, 10%, and 20%, respectively. N0.05/N0.10/N0.20 denote Gaussian noise levels with σ = α·σ_feature, where α ∈ {0.05, 0.10, 0.20}.
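The two perturbation protocols in Table 10 can be sketched as follows: random entry-level masking at rate M, and additive Gaussian noise whose standard deviation is a fraction α of each feature's own standard deviation. The function names and the fixed random seed are illustrative, not the authors' exact corruption code.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducible corruption

def inject_missing(x, rate):
    """Randomly mask `rate` of the entries with NaN (M5/M10/M20)."""
    mask = rng.random(x.shape) < rate
    out = x.copy()
    out[mask] = np.nan
    return out

def inject_noise(x, alpha):
    """Add Gaussian noise with per-feature sigma = alpha * feature std
    (N0.05/N0.10/N0.20)."""
    sigma = alpha * x.std(axis=0, keepdims=True)
    return x + rng.normal(0.0, 1.0, x.shape) * sigma
```

Scaling the noise by each feature's own standard deviation keeps the perturbation comparable across variables with very different ranges (e.g. pressure in hPa versus humidity in %).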
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Yuan, P.; Xia, Y.; Dong, M.; He, C.; Liu, D.; Tan, Y.; Dong, S. PLB-GPT: Potato Late Blight Prediction with Generative Pretrained Transformer and Optimizing. Mathematics 2026, 14, 225. https://doi.org/10.3390/math14020225



