IPM-AgriGPT: A Large Language Model for Pest and Disease Management with a G-EA Framework and Agricultural Contextual Reasoning
Abstract
1. Introduction
- Proposing a Generation-Evaluation Adversarial (G-EA) framework to improve data processing efficiency and ensure the quality of the question-answering corpus.
- Applying the Agricultural Contextual Reasoning Chain-of-Thought Distillation (ACR-CoTD), which transfers the reasoning process from the teacher model to the student model, enhancing the student model’s reasoning ability and overall performance in complex tasks.
- Using low-rank adaptation (LoRA) for supervised fine-tuning to improve the base model’s understanding of agricultural pest and disease knowledge.
- Constructing an evaluation benchmark for the agricultural pest and disease domain to comprehensively assess the capabilities of LLMs in this field.
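As a rough illustration of the G-EA idea in the first bullet, the generation and evaluation roles can be sketched as a regenerate-until-accepted loop. Everything below is a hypothetical stub: the function names, the scoring rule, and the 0.8 acceptance threshold are assumptions for illustration, not the paper's actual implementation.

```python
def generate_qa(chunk):
    # Stand-in for the generator LLM: turns a corpus chunk into a Q-A pair.
    # In the real framework this would be a prompted language model.
    return {"question": f"What does the text say about {chunk}?",
            "answer": f"The text states: {chunk}."}

def evaluate_qa(pair):
    # Stand-in for the evaluator LLM: scores a Q-A pair in [0, 1].
    # A real evaluator would judge factuality, relevance, and fluency.
    return 1.0 if pair["answer"].endswith(".") else 0.3

def g_ea_filter(chunks, threshold=0.8, max_rounds=3):
    """Generation-Evaluation Adversarial loop: regenerate each pair until
    the evaluator accepts it or the round budget is exhausted."""
    corpus = []
    for chunk in chunks:
        for _ in range(max_rounds):
            pair = generate_qa(chunk)
            if evaluate_qa(pair) >= threshold:
                corpus.append(pair)
                break  # accepted; move on to the next chunk
    return corpus

corpus = g_ea_filter(["papaya rust symptoms", "aphid control measures"])
print(len(corpus))  # number of accepted Q-A pairs
```

In the actual framework both roles are LLMs, so `evaluate_qa` would prompt an evaluator model rather than apply a fixed rule, and rejected pairs would be regenerated with the evaluator's feedback.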
2. Related Work
3. Materials and Methods
3.1. Dataset Collection
3.1.1. Web
3.1.2. Database
3.1.3. Text
3.2. Dataset Preprocessing
3.2.1. Corpus Construction
3.2.2. Generation-Evaluation Adversarial Framework
3.3. Proposed Method
3.3.1. Agricultural Contextual Reasoning CoTD
3.3.2. SFT Based on Pest and Disease Data
3.3.3. LoRA
3.4. Experimental Setup
3.4.1. Hardware Platform
3.4.2. Base Model
3.4.3. Pre-Training
3.4.4. Fine-Tuning
3.5. Evaluation Metrics
3.5.1. Baseline
3.5.2. Objective Task
3.5.3. Subjective Task
4. Results
4.1. Evaluation Results
4.2. Ablation Experiment Results
5. Discussion
5.1. Results Analysis
5.2. Limitations
5.3. Future Work
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Wang, H.; Wu, H.; Zhu, H.; Miao, Y.; Wang, Q.; Qiao, S.; Zhao, H.; Chen, C.; Zhang, J. A residual LSTM and Seq2Seq neural network based on GPT for Chinese rice-related question and answer system. Agriculture 2022, 12, 813. [Google Scholar] [CrossRef]
- Zhao, X.; Chen, B.; Ji, M.; Wang, X.; Yan, Y.; Zhang, J.; Liu, S.; Ye, M.; Lv, C. Implementation of Large Language Models and Agricultural Knowledge Graphs for Efficient Plant Disease Detection. Agriculture 2024, 14, 1359. [Google Scholar] [CrossRef]
- Gao, R.; Dong, Z.; Wang, Y.; Cui, Z.; Ye, M.; Dong, B.; Lu, Y.; Wang, X.; Song, Y.; Yan, S. Intelligent cotton Pest and disease detection: Edge computing solutions with transformer technology and knowledge graphs. Agriculture 2024, 14, 247. [Google Scholar] [CrossRef]
- Xie, B.; Su, Q.; Tang, B.; Li, Y.; Yang, Z.; Wang, J.; Wang, C.; Lin, J.; Li, L. Combining Neural Architecture Search with Knowledge Graphs in Transformer: Advancing Chili Disease Detection. Agriculture 2023, 13, 2025. [Google Scholar] [CrossRef]
- Tang, W.; Wen, X.; Hu, Z. Named Entity Recognition for Crop Diseases and Pests Based on Gated Fusion Unit and Manhattan Attention. Agriculture 2024, 14, 1565. [Google Scholar] [CrossRef]
- Pan, S.; Luo, L.; Wang, Y.; Chen, C.; Wang, J.; Wu, X. Unifying Large Language Models and Knowledge Graphs: A Roadmap. arXiv 2023, arXiv:2306.08302. [Google Scholar] [CrossRef]
- Brown, T.B. Language models are few-shot learners. arXiv 2020, arXiv:2005.14165. [Google Scholar]
- Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F. Llama: Open and efficient foundation language models. arXiv 2023, arXiv:2302.13971. [Google Scholar]
- Le Scao, T.; Fan, A.; Akiki, C.; Pavlick, E.; Ilić, S.; Hesslow, D.; Castagné, R.; Luccioni, A.S.; Yvon, F.; Gallé, M. Bloom: A 176b-parameter open-access multilingual language model. arXiv 2023, arXiv:2211.05100. [Google Scholar]
- Wei, J.; Tay, Y.; Bommasani, R.; Raffel, C.; Zoph, B.; Borgeaud, S.; Yogatama, D.; Bosma, M.; Zhou, D.; Metzler, D. Emergent abilities of large language models. arXiv 2022, arXiv:2206.07682. [Google Scholar]
- Hendy, A.; Abdelrehim, M.; Sharaf, A.; Raunak, V.; Gabr, M.; Matsushita, H.; Kim, Y.J.; Afify, M.; Awadalla, H.H. How good are gpt models at machine translation? a comprehensive evaluation. arXiv 2023, arXiv:2302.09210. [Google Scholar]
- Park, C.; Koo, S.; Kim, G.; Lim, H. Towards Harnessing the Most of ChatGPT for Korean Grammatical Error Correction. Appl. Sci. 2024, 14, 3195. [Google Scholar] [CrossRef]
- Wang, Y.; Zhang, Z.; Wang, R. Element-aware summarization with large language models: Expert-aligned evaluation and chain-of-thought method. arXiv 2023, arXiv:2305.13412. [Google Scholar]
- Snæbjarnarson, V.; Símonarson, H.B.; Ragnarsson, P.O.; Ingólfsdóttir, S.L.; Jónsson, H.P.; Þorsteinsson, V.; Einarsson, H. A Warm Start and a Clean Crawled Corpus—A Recipe for Good Language Models. arXiv 2022, arXiv:2201.05601. [Google Scholar]
- BT, B.; Chen, J.-M. Performance Assessment of ChatGPT versus Bard in Detecting Alzheimer’s Dementia. Diagnostics 2024, 14, 817. [Google Scholar] [CrossRef] [PubMed]
- Guo, T.; Nan, B.; Liang, Z.; Guo, Z.; Chawla, N.; Wiest, O.; Zhang, X. What can large language models do in chemistry? a comprehensive benchmark on eight tasks. Adv. Neural Inf. Process. Syst. 2023, 36, 59662–59688. [Google Scholar]
- Plevris, V.; Papazafeiropoulos, G.; Jiménez Rios, A. Chatbots put to the test in math and logic problems: A comparison and assessment of ChatGPT-3.5, ChatGPT-4, and Google bard. AI 2023, 4, 949–969. [Google Scholar] [CrossRef]
- Shen, J.; Tenenholtz, N.; Hall, J.B.; Alvarez-Melis, D.; Fusi, N. Tag-LLM: Repurposing General-Purpose LLMs for Specialized Domains. arXiv 2024, arXiv:2402.05140. [Google Scholar]
- Shutske, J.M. Harnessing the Power of Large Language Models in Agricultural Safety & Health. J. Agric. Saf. Health 2023, 29, 205–224. [Google Scholar]
- Houlsby, N.; Giurgiu, A.; Jastrzebski, S.; Morrone, B.; De Laroussilhe, Q.; Gesmundo, A.; Attariyan, M.; Gelly, S. Parameter-efficient transfer learning for NLP. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 2790–2799. [Google Scholar]
- Hu, E.J.; Shen, Y.; Wallis, P.; Allen-Zhu, Z.; Li, Y.; Wang, S.; Wang, L.; Chen, W. Lora: Low-rank adaptation of large language models. arXiv 2021, arXiv:2106.09685. [Google Scholar]
- Zhang, Q.; Chen, M.; Bukharin, A.; Karampatziakis, N.; He, P.; Cheng, Y.; Chen, W.; Zhao, T. AdaLoRA: Adaptive budget allocation for parameter-efficient fine-tuning. arXiv 2023, arXiv:2303.10512. [Google Scholar]
- Dettmers, T.; Pagnoni, A.; Holtzman, A.; Zettlemoyer, L. QLoRA: Efficient finetuning of quantized LLMs. arXiv 2023, arXiv:2305.14314. [Google Scholar]
- Zhang, Y.; Wang, Z.; He, Z.; Li, J.; Mai, G.; Lin, J.; Wei, C.; Yu, W. BB-GeoGPT: A framework for learning a large language model for geographic information science. Inf. Process. Manag. 2024, 61, 103808. [Google Scholar] [CrossRef]
- Wang, H.; Liu, C.; Xi, N.; Qiang, Z.; Zhao, S.; Qin, B.; Liu, T. Huatuo: Tuning llama model with chinese medical knowledge. arXiv 2023, arXiv:2304.06975. [Google Scholar]
- Yang, H.; Li, S.; Gonçalves, T. Enhancing Biomedical Question Answering with Large Language Models. Information 2024, 15, 494. [Google Scholar] [CrossRef]
- Huang, Q.; Tao, M.; Zhang, C.; An, Z.; Jiang, C.; Chen, Z.; Wu, Z.; Feng, Y. Lawyer llama technical report. arXiv 2023, arXiv:2305.15062. [Google Scholar]
- Alghamdi, H.M.; Mostafa, A. Towards Reliable Healthcare LLM Agents: A Case Study for Pilgrims during Hajj. Information 2024, 15, 371. [Google Scholar] [CrossRef]
- Silva, B.; Nunes, L.; Estevão, R.; Aski, V.; Chandra, R. GPT-4 as an agronomist assistant? Answering agriculture exams using large language models. arXiv 2023, arXiv:2310.06225. [Google Scholar]
- Li, J.; Xu, M.; Xiang, L.; Chen, D.; Zhuang, W.; Yin, X.; Li, Z. Foundation models in smart agriculture: Basics, opportunities, and challenges. Comput. Electron. Agric. 2024, 222, 109032. [Google Scholar] [CrossRef]
- De Clercq, D.; Nehring, E.; Mayne, H.; Mahdi, A. Large language models can help boost food production, but be mindful of their risks. arXiv 2024, arXiv:2403.15475. [Google Scholar] [CrossRef]
- Zhao, B.; Jin, W.; Del Ser, J.; Yang, G. ChatAgri: Exploring potentials of ChatGPT on cross-linguistic agricultural text classification. Neurocomputing 2023, 557, 126708. [Google Scholar] [CrossRef]
- Yang, S.; Liu, Z.; Mayer, W. ShizishanGPT: An Agricultural Large Language Model Integrating Tools and Resources. arXiv 2024, arXiv:2409.13537. [Google Scholar]
- Zhang, X.; Yang, Q. Self-qa: Unsupervised knowledge guided language model alignment. arXiv 2023, arXiv:2305.11952. [Google Scholar]
- Chung, J.J.Y.; Kamar, E.; Amershi, S. Increasing diversity while maintaining accuracy: Text data generation with large language models and human interventions. arXiv 2023, arXiv:2306.04140. [Google Scholar]
- Honovich, O.; Scialom, T.; Levy, O.; Schick, T. Unnatural instructions: Tuning language models with (almost) no human labor. arXiv 2022, arXiv:2212.09689. [Google Scholar]
- Li, S.; Yang, C.; Yin, Y.; Zhu, X.; Cheng, Z.; Shang, L.; Jiang, X.; Liu, Q.; Yang, Y. Autoconv: Automatically generating information-seeking conversations with large language models. arXiv 2023, arXiv:2308.06507. [Google Scholar]
- Du, Z.; Qian, Y.; Liu, X.; Ding, M.; Qiu, J.; Yang, Z.; Tang, J. Glm: General language model pretraining with autoregressive blank infilling. arXiv 2021, arXiv:2103.10360. [Google Scholar]
- Hsieh, C.-Y.; Li, C.-L.; Yeh, C.-K.; Nakhost, H.; Fujii, Y.; Ratner, A.; Krishna, R.; Lee, C.-Y.; Pfister, T. Distilling step-by-step! outperforming larger language models with less training data and smaller model sizes. arXiv 2023, arXiv:2305.02301. [Google Scholar]
- Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Xia, F.; Chi, E.; Le, Q.V.; Zhou, D. Chain-of-thought prompting elicits reasoning in large language models. Adv. Neural Inf. Process. Syst. 2022, 35, 24824–24837. [Google Scholar]
- Gu, Y.; Dong, L.; Wei, F.; Huang, M. Knowledge distillation of large language models. arXiv 2023, arXiv:2306.08543. [Google Scholar]
- Agarwal, R.; Vieillard, N.; Stanczyk, P.; Ramos, S.; Geist, M.; Bachem, O. Gkd: Generalized knowledge distillation for auto-regressive sequence models. arXiv 2023, arXiv:2306.13649. [Google Scholar]
- Huang, Y.; Chen, Y.; Yu, Z.; McKeown, K. In-context learning distillation: Transferring few-shot learning ability of pre-trained language models. arXiv 2022, arXiv:2212.10670. [Google Scholar]
- Jiang, Y.; Chan, C.; Chen, M.; Wang, W. Lion: Adversarial distillation of proprietary large language models. arXiv 2023, arXiv:2305.12870. [Google Scholar]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. arXiv 2023, arXiv:1706.03762. [Google Scholar]
- Lester, B.; Al-Rfou, R.; Constant, N. The power of scale for parameter-efficient prompt tuning. arXiv 2021, arXiv:2104.08691. [Google Scholar]
- GLM, T.; Zeng, A.; Xu, B.; Wang, B.; Zhang, C.; Yin, D.; Rojas, D.; Feng, G.; Zhao, H.; Lai, H. ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools. arXiv 2024, arXiv:2406.12793. [Google Scholar]
- Cui, Y.; Yang, Z.; Yao, X. Efficient and effective text encoding for chinese llama and alpaca. arXiv 2023, arXiv:2304.08177. [Google Scholar]
- Li, H.; Zhang, Y.; Koto, F.; Yang, Y.; Zhao, H.; Gong, Y.; Duan, N.; Baldwin, T. Cmmlu: Measuring massive multitask language understanding in chinese. arXiv 2023, arXiv:2306.09212. [Google Scholar]
- Yang, S.; Yuan, Z.; Li, S.; Peng, R.; Liu, K.; Yang, P. GPT-4 as Evaluator: Evaluating Large Language Models on Pest Management in Agriculture. arXiv 2024, arXiv:2403.11858. [Google Scholar]
- Liu, Y.; Iter, D.; Xu, Y.; Wang, S.; Xu, R.; Zhu, C. G-eval: Nlg evaluation using gpt-4 with better human alignment. arXiv 2023, arXiv:2303.16634. [Google Scholar]
- Chen, X.; Li, L.; Chang, L.; Huang, Y.; Zhao, Y.; Zhang, Y.; Li, D. Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs). arXiv 2023, arXiv:2310.13343. [Google Scholar]
- Huang, L.; Yu, W.; Ma, W.; Zhong, W.; Feng, Z.; Wang, H.; Chen, Q.; Peng, W.; Feng, X.; Qin, B. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. arXiv 2023, arXiv:2311.05232. [Google Scholar] [CrossRef]
- Feng, M.-s.; Jian, Z. On leaf nutrition DRIS diagnosis of Eucalyptus grandis. Sichuan Nongye Daxue Xuebao 2003, 21, 303–307. [Google Scholar]
- Lewis, P.; Perez, E.; Piktus, A.; Petroni, F.; Karpukhin, V.; Goyal, N.; Küttler, H.; Lewis, M.; Yih, W.-T.; Rocktäschel, T. Retrieval-augmented generation for knowledge-intensive nlp tasks. Adv. Neural Inf. Process. Syst. 2020, 33, 9459–9474. [Google Scholar]
- Verma, G.; Choi, M.; Sharma, K.; Watson-Daniels, J.; Oh, S.; Kumar, S. Mysterious Projections: Multimodal LLMs Gain Domain-Specific Visual Capabilities Without Richer Cross-Modal Projections. arXiv 2024, arXiv:2402.16832. [Google Scholar]
- Wei, X.; Wei, H.; Lin, H.; Li, T.; Zhang, P.; Ren, X.; Li, M.; Wan, Y.; Cao, Z.; Xie, B. Polylm: An open source polyglot large language model. arXiv 2023, arXiv:2307.06018. [Google Scholar]
| Data Source | Quantity |
|---|---|
| Web | 2498 |
| Database | 2833 |
| Text | 373 |
| Category | ChatGLM3-6B | Baichuan2-7B | Chinese-llama-7B | GPT-3.5-Turbo | IPM-AgriGPT |
|---|---|---|---|---|---|
| Professionalism ↑ | 0.4940 | 0.5920 | 0.3740 | 0.6100 | 0.5870 |
| Safety ↑ | 0.5800 | 0.6490 | 0.4420 | 0.7230 | 0.6360 |
| Effectiveness ↑ | 0.3150 | 0.4110 | 0.2560 | 0.4990 | 0.3920 |
| Average | 0.4630 | 0.5510 | 0.3570 | 0.6110 | 0.5380 |
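A quick consistency check on the Average row (values hard-coded from the table above): each per-model mean of the three category scores matches the reported Average to within the table's rounding.

```python
# Objective-task scores per model: [Professionalism, Safety, Effectiveness].
scores = {
    "ChatGLM3-6B":      [0.4940, 0.5800, 0.3150],
    "Baichuan2-7B":     [0.5920, 0.6490, 0.4110],
    "Chinese-llama-7B": [0.3740, 0.4420, 0.2560],
    "GPT-3.5-Turbo":    [0.6100, 0.7230, 0.4990],
    "IPM-AgriGPT":      [0.5870, 0.6360, 0.3920],
}
reported = {"ChatGLM3-6B": 0.4630, "Baichuan2-7B": 0.5510,
            "Chinese-llama-7B": 0.3570, "GPT-3.5-Turbo": 0.6110,
            "IPM-AgriGPT": 0.5380}

for model, vals in scores.items():
    mean = sum(vals) / len(vals)
    # Each mean agrees with the reported Average to within 3-decimal rounding.
    assert abs(mean - reported[model]) < 5e-4, model
```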
| Index | Category | ChatGLM3-6B | Baichuan2-7B | Chinese-llama-7B | GPT-3.5-Turbo | IPM-AgriGPT |
|---|---|---|---|---|---|---|
| Coherence ↑ | Professionalism | 4.4060 | 4.4475 | 4.2510 | 4.4625 | 4.4120 |
| | Safety | 4.3400 | 4.4695 | 4.3250 | 4.4810 | 4.4115 |
| | Effectiveness | 4.4225 | 4.4640 | 4.0130 | 4.4675 | 4.4595 |
| Consistency ↑ | Professionalism | 4.4245 | 4.4655 | 4.3352 | 4.4755 | 4.5330 |
| | Safety | 4.4420 | 4.4745 | 4.2016 | 4.4915 | 4.4450 |
| | Effectiveness | 4.4195 | 4.4855 | 4.1235 | 4.5715 | 4.5295 |
| Fluency ↑ | Professionalism | 4.4295 | 4.5565 | 4.3520 | 4.5745 | 4.4520 |
| | Safety | 4.3870 | 4.4360 | 4.3947 | 4.5955 | 4.3975 |
| | Effectiveness | 4.4645 | 4.5825 | 4.4462 | 4.6090 | 4.4680 |
| Relevance ↑ | Professionalism | 4.7620 | 4.6250 | 4.8452 | 4.9320 | 4.8595 |
| | Safety | 4.7010 | 4.8350 | 4.7741 | 4.9245 | 4.7860 |
| | Effectiveness | 4.7755 | 4.8625 | 4.6530 | 4.7815 | 4.8850 |
| Comprehensibility ↑ | Professionalism | 4.6515 | 4.6650 | 4.4211 | 4.6285 | 4.6540 |
| | Safety | 4.5575 | 4.5975 | 4.3901 | 4.6300 | 4.5735 |
| | Effectiveness | 4.5620 | 4.6490 | 4.4355 | 4.6695 | 4.5700 |
| Exhaustiveness ↑ | Professionalism | 4.4360 | 4.5510 | 4.1058 | 4.6497 | 4.4130 |
| | Safety | 4.5275 | 4.5459 | 4.0064 | 4.5640 | 4.5105 |
| | Effectiveness | 4.5105 | 4.5335 | 3.9528 | 4.5535 | 4.5155 |
| Category | ChatGLM3-6B | Baichuan2-7B | Chinese-llama-7B | GPT-3.5-Turbo | IPM-AgriGPT |
|---|---|---|---|---|---|
| Professionalism ↑ | 4.1287 | 4.1705 | 3.4297 | 4.3120 | 4.3105 |
| Safety ↑ | 4.2025 | 4.2525 | 3.6750 | 4.3995 | 4.3505 |
| Effectiveness ↑ | 4.1620 | 4.1920 | 3.0125 | 4.3380 | 4.2395 |
| Category | ChatGLM3-6B | Baichuan2-7B | Chinese-llama-7B | GPT-3.5-Turbo | IPM-AgriGPT |
|---|---|---|---|---|---|
| Professionalism ↑ | 4.3624 | 4.3993 | 3.7674 | 4.4971 | 4.4566 |
| Safety ↑ | 4.3765 | 4.4368 | 4.0792 | 4.5285 | 4.4526 |
| Effectiveness ↑ | 4.3803 | 4.4345 | 4.0029 | 4.5005 | 4.4386 |
| Average | 4.3731 | 4.4235 | 3.9498 | 4.5087 | 4.4492 |
| Task | Category | ChatGLM3-6B-Tuned | ChatGLM3-6B-G-EA | IPM-AgriGPT |
|---|---|---|---|---|
| Objective Task ↑ | Professionalism | 0.4420 | 0.5340 | 0.5870 |
| | Safety | 0.4530 | 0.6040 | 0.6360 |
| | Effectiveness | 0.2920 | 0.3240 | 0.3920 |
| | Average | 0.3960 | 0.4870 | 0.5380 |
| Subjective Task ↑ | Professionalism | 4.2105 | 4.3215 | 4.4566 |
| | Safety | 4.0354 | 4.2995 | 4.4526 |
| | Effectiveness | 4.2274 | 4.3134 | 4.4386 |
| | Average | 4.1577 | 4.3115 | 4.4492 |
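The ablation columns suggest two incremental gains on the objective-task average: adding the G-EA corpus to the tuned baseline, then adding ACR-CoTD on top (assuming, as the column names suggest, that IPM-AgriGPT is the G-EA variant plus ACR-CoTD). A small computation from the table values:

```python
# Objective-task averages from the ablation table above.
tuned, g_ea, full = 0.3960, 0.4870, 0.5380

gain_g_ea = g_ea - tuned   # contribution of the G-EA corpus
gain_cotd = full - g_ea    # further contribution of ACR-CoTD

print(f"G-EA: +{gain_g_ea:.4f}, ACR-CoTD: +{gain_cotd:.4f}")
# prints "G-EA: +0.0910, ACR-CoTD: +0.0510"
```

By this reading, the G-EA data accounts for the larger share of the improvement, with ACR-CoTD adding a smaller but consistent gain on top.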
| | Chinese | English |
|---|---|---|
Question | 木瓜锈病主要危害哪些部位,其病原体的形态特征有哪些? | What specific parts of the papaya plant are primarily affected by Gymnosporangium asiaticum [54], and what are the morphological characteristics of its causative pathogen? |
Standard Answer | 木瓜锈病主要危害叶片、叶柄、嫩枝及幼果。病原体是梨胶孢,属于担子菌亚门真菌。此病菌无夏孢子阶段,其冬孢子为双细胞并带有生长柄。性孢子器位于叶面表皮下,呈扁球形,性孢子为椭圆形或纺锤形。锈孢子器生长在叶背面,形成丛状毛状物,锈孢子呈近圆形、单细胞,颜色为黄色或浅褐色… | Gymnosporangium asiaticum primarily impacts the leaves, petioles, tender branches, and fruits of the papaya plant. The causative agent is Gymnosporangium asiaticum Miyabe ex Yamada, a fungus classified within the Basidiomycota phylum. This pathogen lacks a summer spore stage; its overwintering spores are bicellular with an accompanying growth stalk. Teliospores reside beneath the leaf epidermis as flattened spheres, while basidiospores appear elliptical or fusiform in shape. Spores proliferate on the undersides of leaves, forming tufted hair-like structures; they are nearly round and unicellular with coloration ranging from yellow to light brown… |
ChatGLM3-6B | 木瓜锈病主要危害木瓜的叶片、果实和枝条。其病原体的形态特征包括:菌丝呈粉红色,分支繁殖,分隔泡明显,孢子囊呈椭圆形,内含四个隔膜,孢子囊壁厚,孢子呈椭圆形… | Gymnosporangium asiaticum primarily affects the leaves, fruits, and branches of the papaya plant. The morphological characteristics of this pathogen are as follows: pink hyphae that reproduce through branching; well-defined septa; oval-shaped spore sacs containing four septa; thick walls of the spore sacs; and elliptical spores… |
Baichuan2-7B | 木瓜锈病主要危害木瓜属的叶片、嫩翘和果实。病原体的形态特征主要包括菌丝体、孢子囊和担孢子,菌丝体呈白色,无隔膜,具有吸收功能,孢子囊成串生长,呈球形或椭圆形,包含多个担孢子,担孢子成熟后从孢子囊逸出,遇到适宜条件可萌发成菌丝体,进一步浸染植物… | Gymnosporangium asiaticum primarily targets the leaves, tender branches, and fruits of the papaya genus. The morphological characteristics of this pathogen predominantly include mycelium, spore sacs, and basidiospores. The mycelium is characterized by its white color, non-septate structure, and absorptive functions. Spore sacs develop in clusters and are typically spherical or oval-shaped, containing multiple basidiospores. Upon maturation, the basidiospores are released from the spore sacs and can germinate into mycelium under favorable conditions, thereby facilitating further infection of the host plant… |
Chinese-llama-7B | 木瓜锈病是一种由木瓜锈菌引起的真菌病害,可危害木瓜的叶片和果实,导致锈斑和腐烂。其孢子体呈橙黄色,通过风雨传播,对木瓜产量和品质影响严重… | Gymnosporangium asiaticum is a fungal disease caused by the pathogen Puccinia caricae. It affects papaya leaves and fruits, causing rust spots and rot. The orange-yellow spores spread through wind and rain, severely impacting papaya yield and quality… |
ChatGLM3-6B-Tuned | 木瓜锈病主要危害木瓜的叶片、嫩枝、嫩叶及嫩枝等部位。病原体为真菌性锈菌,简称锈菌。锈菌为黑褐色,分割隔膜,分割隔膜有明显突起,呈网状,其上覆有厚而粗的粉粒。锈菌的锈点为黑褐色,形状为近圆形,每个锈点大小为2毫米,表面有细小的鳞片,每种锈菌锈点仅占就锈菌长度的10%~20%。锈菌的锈点为黑褐色,形状为近圆形,每个锈点大小为2毫米,表面有细小的鳞片,每种锈菌锈点仅占就锈菌长度的10%~20%。锈菌的锈点为黑褐色,形状为近圆形,每个锈点大小… | Gymnosporangium asiaticum primarily targets the leaves, tender branches, young foliage, and tender branches of the papaya plant. This pathogen is classified as a Urediniomycetes, commonly known as Puccinia caricae. The appearance is characterized by a dark brown coloration with segmented septa that are prominently raised, forming a net-like structure covered in thick, coarse powder. The resulting rust spots exhibit a black-brown hue and are nearly round in shape; each spot measures approximately 2 mm in diameter and features tiny scales on its surface. Typically, each type of rust fungus occupies only 10% to 20% of its length with these distinct rust spots. The appearance is characterized by a dark brown coloration with segmented septa that are prominently raised, forming a net-like structure covered in thick, coarse powder. The resulting rust spots exhibit a black-brown hue and are nearly round in shape; each spot measures approximately 2 mm in diameter and features tiny scales on its surface. Typically, each type of rust fungus occupies only 10% to 20% of its length with these distinct rust spots. The appearance is characterized by a dark brown coloration with segmented septa that are prominently raised, forming a net-like structure covered in thick, coarse powder. The resulting rust spots exhibit a black-brown hue and are nearly round in shape; each spot measures approximately… |
ChatGLM3-6B-G-EA | 木瓜锈病主要危害的叶片、叶柄、嫩枝及果实。其病原体的形态特征是叶柄和嫩枝上的锈斑为圆形或椭圆形,颜色为褐色,病斑后期颜色变深,表面覆盖黑色粉末状锈孢子堆… | Gymnosporangium asiaticum primarily targets the leaves, petioles, tender branches, and fruits of host plants. The morphological characteristics of this pathogen are marked by the presence of rust spots on the petioles and tender branches, which exhibit a circular or oval shape and a brown coloration. In the later stages of infection, the lesions darken in color, and their surfaces become covered with black powdery masses of rust spores… |
IPM-AgriGPT | 木瓜锈病的病原体是梨胶孢,叶子、叶柄、嫩枝和未成熟的果实由于其柔软的性质而特别脆弱,使它们更容易受到感染。因此,这种疾病主要针对木瓜植物的茎、枝、叶和果实。这种病原体形态为双细胞并带有生长柄,锈孢子近乎圆形或椭圆形,颜色为黄色或浅棕色… | Gymnosporangium asiaticum is caused by the pathogen Gymnosporangium asiaticum Miyabe ex Yamada. The leaves, petioles, young branches, and immature fruits being particularly vulnerable due to their tender nature, making them more susceptible to infection. Therefore, this disease primarily targets the stems, branches, leaves, and fruits of the papaya plant. This pathogen is characterized by a two-celled structure with growth stalks. The aecidiospore of this pathogen are nearly round or oval-shaped, yellow or light brown in color… |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, Y.; Fan, Q.; Chen, X.; Li, M.; Zhao, Z.; Li, F.; Guo, L. IPM-AgriGPT: A Large Language Model for Pest and Disease Management with a G-EA Framework and Agricultural Contextual Reasoning. Mathematics 2025, 13, 566. https://doi.org/10.3390/math13040566