Article

Empowering Rural Livestock Health: AI-Powered Early Detection of Cattle Diseases

by
Dammavalam Srinivasa Rao
1,
P. Chandra Sekhar Reddy
2,
Annam Revathi
1,
Vangipuram Sravan Kiran
1,
Nuvvusetty Rajasekhar
3,*,
Nadella Sandhya
4,
Pulipati Venkateswara Rao
5,
Adla Sai Karthik
1 and
Puvvala Jogeeswara Venkata Naga Sai
1
1
Department of Information Technology, VNR Vignana Jyothi Institute of Engineering and Technology, Hyderabad 500118, Telangana, India
2
Department of Computer Science and Engineering, Gokaraju Rangaraju Institute of Engineering and Technology, Hyderabad 500118, Telangana, India
3
Department of Computer Science and Engineering, Narsimha Reddy Engineering College, Hyderabad 500100, Telangana, India
4
Department of Computer Science and Engineering—AIML & IoT, VNR Vignana Jyothi Institute of Engineering and Technology, Hyderabad 500118, Telangana, India
5
Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Hyderabad 500043, Telangana, India
*
Author to whom correspondence should be addressed.
AI 2026, 7(4), 137; https://doi.org/10.3390/ai7040137
Submission received: 12 January 2026 / Revised: 9 March 2026 / Accepted: 13 March 2026 / Published: 9 April 2026

Abstract

This paper presents a novel approach for the early detection of cattle diseases. We present a uniquely integrated, image classification-based system for real-time cattle disease diagnosis that combines image classification models to identify diseases accurately; a seamless, user-friendly dashboard for real-time monitoring with data visualization and instant predictions; and a mobile application that acts as a data source. The mobile application enables real-time collection of farmer and cattle-related data, including age, number of cattle, vaccination cycles, cattle images, and location metadata. Our AI-based cattle health monitoring system enables the early, efficient, scalable, and timely detection of Lumpy Skin Disease (LSD) and Foot and Mouth Disease (FMD) in cattle with high accuracy. A dataset of approximately 1600 LSD/non-LSD images and 840 FMD images was used to train multiple classification networks, namely EfficientNetB0, ResNet50, VGG16, EfficientNetV2B0, and EfficientNetV2S, along with a soft-voting ensemble at inference. The proposed framework achieved a maximum testing accuracy of 98.36% for LSD classification and 99.84% for FMD classification under internal validation. These results indicate strong disease recognition capability, with ensemble-based prediction improving robustness, particularly for FMD classification. The proposed system enables practical, early, efficient, and scalable applications of AI research to improve livestock health monitoring and support the early prevention of widespread disease outbreaks.

Graphical Abstract

1. Introduction

Livestock is one of the most crucial and reliable economic assets of any country. India, one of the most populated countries in the world, is an agrarian economy that is heavily dependent on livestock. Cattle play a pivotal role in rural livelihoods and dairy production. In regions such as Telangana, where agriculture and animal husbandry are interlinked and directly provide income stability and food security, the maintenance and management of cattle are often neglected, which in turn leads to a decline in overall animal care. Cattle often forage together in close proximity; this becomes a concern when some cattle are infected, as the risk of the infectious disease spreading rises significantly. These issues not only affect the well-being and productivity of the cattle but also pose a significant risk to human health, particularly if infectious agents are transmitted through milk and other animal products.
Here, the focus is on two major threats to cattle health, Lumpy Skin Disease (LSD) and Foot and Mouth Disease (FMD), which are highly contagious and can devastate both cattle health and the livelihoods of the farmers who depend on them. What is concerning is that the early detection and prevention of such widespread disease outbreaks are not receiving sufficient attention in the domain of livestock health monitoring. In response to these threats, we introduce an AI-powered early detection system for cattle diseases: an integrated image classification-based system for real-time cattle disease diagnosis. The system combines image classification models to identify diseases accurately; a seamless, user-friendly dashboard for real-time monitoring with data visualization and instant predictions; and a mobile application that serves as a data source for both model prediction and dashboard display.
The mobile application, developed using React Native version 0.76.7, captures farmers’ data in real time; these data include age, number of cattle, vaccination cycles of each cow, and the location and metadata of each cow. Designated SPOCs (Special Points of Contact) interact with the farmers and enter the data into the mobile application; they are given authenticated access so that only authorized personnel can enter and view the data, as discussed further below. These images and data are then uploaded to the centralized Azure Cosmos DB database. The deep learning base is constructed using a soft-voting ensembling technique comprising EfficientNetB0 [1,2,3], EfficientNetV2B0 [4,5], EfficientNetV2S [6], ResNet50 [7,8,9,10,11], and VGG16 [11,12,13,14]. To perform classification, soft-voting ensembling combines the predictions of the individual models to produce the final labels. The results are displayed on the web dashboard, which is also authenticated such that only veterinary officers have full access to it. This research provides a unique end-to-end platform for early disease detection to restrict the wide spread of disease without human intervention, enabling surveillance through AI-based solutions that are field-deployable and capable of continuous data collection, testing, and improvement over an extended period.

2. Literature Review

We explored integrated digital technologies, including IoT, AI, and cloud computing, as used in [15], to enhance cattle health monitoring. A further improvement was seen in IoT-based research in [16]; these researchers utilized sensors for heart rate, activity, and temperature tracking and cloud computing with machine learning algorithms for cattle health status prediction, along with data visualization using a mobile application. Ref. [17] introduces a unique concept of a digital twin model for cattle health with AI and deep learning principles using the data received from IoT systems and predicts cattle behavior and physiological cycles in real time, along with forecasting behaviors, with high accuracy. Ref. [18] presented the innovation of integrating AI with the STM32BL475IOT1A microcontroller for efficient and compatible cattle health monitoring, along with the use of an Artificial Neural Network, which helps with predicting crucial variables such as temperature and pulse rate. A distinctive approach leveraging advanced image processing and machine learning was initiated by [19] to detect lameness in dairy cattle; video of the cattle’s motion was used to predict health issues from patterns in their gait. Ref. [20] assessed heat stress on cattle in real time and focused on various other parameters crucial to deciphering an overall cattle behavioral pattern in order to develop efficient heat-mitigation measures. Ref. [21] establishes a relationship between dairy cattle diseases and non-invasive sensors for health monitoring, mainly focusing on sensor technology to monitor and track behavioral changes and integrate disease data through ontology mapping. Refs. [22,23] showcase cattle-health-monitoring solutions that utilize IoT, ThingSpeak, and a mobile app to track physiological parameters like body temperature, heart rate, and activity levels; data are analyzed using MATLAB algorithms on the cloud, which provide farmers with real-time alerts and livestock analysis. Ref. [24] reviews the image processing technologies used for non-invasive cattle monitoring, mainly focusing on weight estimation and individual cattle identification, with particular attention paid to the growing use of deep learning approaches that minimize stress and enhance monitoring methods in cattle to promote their proper health and growth. Ref. [25] introduces a novel video-based mechanism that improves the cattle identification system using an RGB camera, YOLOv8 for detection, VGG for feature extraction, and SVM for final identification; this method is used to identify individual cattle and assign them unique IDs, enhancing the role of AI in cattle monitoring and management. An online cattle-health-monitoring device using Arduino UNO, Arduino NANO, an Xbee module, and LabVIEW has been established to track parameters such as heart rate, temperature, rumination, and body humidity to monitor cattle health in real time [26].

3. Materials and Methods

3.1. CHMI Mobile Application

The initial phase of this project involves building the mobile application, which connects and interacts seamlessly with farmers so that data, metadata, and images can be collected, stored, and analyzed at further stages.
The Cattle Health Monitoring Intelligence (CHMI) mobile application is built using the React Native Framework (v0.72) along with the Flask-based REST API (Python 3.10) to connect to the Azure Cosmos Database. A detailed workflow of this procedure is depicted in Figure 1. The mobile application is designed to capture farmer-related information, including Farmer ID, age, and location coordinates. Personal details such as names and contact information were collected during data acquisition; however, all personal identifiers were removed during preprocessing, and the data were anonymized prior to analysis and publication. Cattle details include Cow Ear Tag ID, age, gender, vaccination details, behavior details, and images of the cow from cardinal viewpoints, i.e., left, right, front, and back view images. Images are captured at a minimum resolution of 224 × 224 pixels under natural lighting conditions using the device camera. All collected data are validated client-side to prevent missing fields from being transmitted to the backend. This mobile application has initially been installed on Special Point of Contact (SPOC) devices, and the data collection process has begun in local villages/towns.
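As noted above, all collected data are validated client-side before transmission. A minimal sketch of such a completeness check is shown below in Python, mirroring what the Flask backend might re-validate server-side; the field names and the sample record are illustrative, not the actual CHMI schema.

```python
# Required fields an upload must carry before it is sent to the backend.
# These names are illustrative stand-ins for the CHMI schema.
REQUIRED_FIELDS = {
    "farmer_id", "age", "location",          # farmer details
    "ear_tag_id", "cattle_age", "gender",    # cattle details
    "vaccination", "images",                 # vaccination cycle + view images
}

def validate_record(record: dict) -> list[str]:
    """Return the names of required fields that are missing or empty."""
    return [f for f in sorted(REQUIRED_FIELDS)
            if f not in record or record[f] in (None, "", [])]

# A record is only transmitted when nothing is missing.
record = {"farmer_id": "F001", "age": 45, "location": (18.60, 79.66),
          "ear_tag_id": "Tp9", "cattle_age": 3, "gender": "female",
          "vaccination": ["FMD-2025"], "images": ["left.jpg", "right.jpg",
                                                  "front.jpg", "back.jpg"]}
assert validate_record(record) == []
```

In practice the same check would run in the React Native client before the POST request, so that incomplete submissions never reach the Flask API.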

3.2. Database Integration

All structured data are stored in Azure Cosmos DB (Core SQL API) using a JSON-based schema, where each document corresponds to a unique cattle instance indexed by Farmer ID and Ear Tag ID. Image files are stored separately in Azure Blob Storage, with their corresponding Blob URLs referenced in Cosmos DB documents to maintain linkage between metadata and image data; all these are retrieved from the mobile application. Azure Cosmos DB is a globally distributed and fully managed NoSQL database. It is designed for building high-performance, scalable applications with low latency and high availability. Records are stored in the database as key-value pairs in JSON format. Azure Blob Storage is a scalable cloud-based object storage service that can store massive amounts of unstructured data such as images, binary data, and text. Together, these services efficiently store our data in the cloud so that data retrieval and processing are easy. Images uploaded through the mobile application are stored in Blob Storage, and the entered text and numerical data are stored in Cosmos DB.
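The metadata–image linkage can be sketched as the construction of one Cosmos DB document per cattle instance. The field names and Blob URL below are illustrative assumptions; the actual upload would go through the azure-cosmos and azure-storage-blob SDKs (e.g., `upsert_item` on the container and `upload_blob` on the blob client).

```python
from datetime import datetime, timezone

def build_cattle_document(farmer_id: str, ear_tag_id: str,
                          metadata: dict, blob_urls: list[str]) -> dict:
    """Assemble the JSON document stored in Cosmos DB; images live in
    Blob Storage and are referenced here by URL only."""
    return {
        "id": f"{farmer_id}-{ear_tag_id}",   # unique per cattle instance
        "farmerId": farmer_id,               # partition/index key
        "earTagId": ear_tag_id,              # second index key
        "metadata": metadata,                # age, gender, vaccination, ...
        "imageUrls": blob_urls,              # linkage to Blob Storage
        "uploadedAt": datetime.now(timezone.utc).isoformat(),
    }

doc = build_cattle_document(
    "F001", "Tp9",
    {"age": 3, "gender": "female", "vaccinations": ["FMD-2025"]},
    ["https://example.blob.core.windows.net/images/Tp9-left.jpg"])
```

Keeping only URLs in the document keeps Cosmos DB items small while the heavy binary data stays in object storage.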

3.3. Overview of Disease Prediction Dataset

3.3.1. Lumpy Skin Disease (LSD)

The LSD classification task is formulated as a binary image classification problem (lumpy vs. normal). The training dataset consists of images collected via the CHMI mobile application combined with two publicly available open-source datasets [27,28]. All datasets were merged and manually cleaned to remove blurred, occluded, and incorrectly labeled images.
To mitigate class imbalance, data augmentation was applied, including brightness adjustment (+20%/−20%), contrast enhancement (+15%/−15%), horizontal flipping, and random rotation (+10°/−10°). After preprocessing and augmentation, the final dataset contained 1677 images per class. To prevent data leakage, as multiple images of each cow were captured from different angles, all images belonging to a single cow were assigned exclusively to one data split. Representative samples are shown in Figure 2.
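The brightness, contrast, and flip augmentations described above can be sketched directly in NumPy; rotation is omitted here because it requires interpolation, which libraries such as Pillow or the Keras preprocessing layers provide. The ranges match those stated above; everything else is an illustrative assumption.

```python
import numpy as np

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply one random draw of the augmentations to an image in [0, 1]
    of shape (H, W, 3)."""
    out = img.copy()
    # brightness adjustment in [-20%, +20%]
    out = out * (1.0 + rng.uniform(-0.20, 0.20))
    # contrast enhancement in [-15%, +15%], scaled around the mean
    mean = out.mean()
    out = (out - mean) * (1.0 + rng.uniform(-0.15, 0.15)) + mean
    # horizontal flip with probability 0.5
    if rng.random() < 0.5:
        out = out[:, ::-1, :]
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((224, 224, 3))
aug = augment(img, rng)
assert aug.shape == img.shape
```

Each training image would be passed through this function one or more times until both classes reach the same size.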

3.3.2. Foot and Mouth Disease (FMD)

The FMD task was formulated as a four-class classification problem distinguishing healthy and unhealthy conditions across the muzzle and leg regions. Images collected through the CHMI platform were combined with an open-source dataset [29], and after data cleaning, each class contained 210 images. To prevent information leakage, the dataset was split at the animal level, ensuring that multiple images of the same cow were assigned exclusively to a single subset. Data augmentation was performed similarly to the LSD pipeline, and sample images are shown in Figure 3.

3.3.3. Model Selection

Five models were used in this study, namely EfficientNetB0 [1,2,3], EfficientNetV2B0 [4,5], EfficientNetV2S [6], ResNet50 [7,8,9,10,11], and VGG16 [11,12,13,14], along with the soft-voting ensembling technique [30,31], which combines the probabilities or confidence levels of each model rather than the final predicted class. Each model in the ensembling stack predicts a probability distribution over the binary classes for LSD and the four classes for FMD in our case. All models were initialized with ImageNet pretrained weights and fine-tuned on the disease-specific dataset. Input images were resized to 224 × 224 × 3 and normalized to the range [0, 1]. Training was performed using the following parameters: optimizer: Adam; initial learning rate: 1 × 10−4; batch size: 32; loss function: binary cross-entropy (LSD) and categorical cross-entropy (FMD); epochs: 50 with early stopping.
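The input pipeline (resize to 224 × 224 × 3, normalize to [0, 1]) can be sketched as follows. A nearest-neighbour resize stands in for the interpolated resize that a framework such as Keras would perform; the function name and the sample image are illustrative.

```python
import numpy as np

def preprocess(img: np.ndarray, size: int = 224) -> np.ndarray:
    """Resize an HxWx3 uint8 image to size x size x 3 (nearest neighbour)
    and normalize pixel values to [0, 1]."""
    h, w, _ = img.shape
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    resized = img[rows][:, cols]
    return resized.astype(np.float32) / 255.0

img = np.random.default_rng(1).integers(0, 256, (480, 640, 3), dtype=np.uint8)
x = preprocess(img)
assert x.shape == (224, 224, 3)
```

The normalized tensor is what each of the five fine-tuned networks receives at both training and inference time.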
To further improve performance, we applied custom preprocessing steps, which involve image augmentation (saturation, brightness, and rotational shifts) followed by class balancing to address dataset imbalance.

3.3.4. Soft-Voting Ensembling Strategy

To improve robustness and generalization, a soft-voting ensembling approach was employed. Instead of selecting the final class label from each model, the predicted class probability distributions from all five models were averaged to produce the final prediction.
Formally, for a given input image x, the ensemble probability is computed as follows:

P_ensemble(x) = (1/N) · Σ_{i=1}^{N} P_i(x)

where P_i(x) represents the probability output of the i-th model and N = 5.
Soft voting was chosen over hard voting due to its ability to preserve model confidence information and reduce misclassification in borderline cases [30,31,32,33]. The complete inference flow is illustrated in Figure 4. The soft-voting ensemble was evaluated only at inference time; therefore, training accuracy is not reported for the ensemble model. Final predictions are stored in Azure Cosmos DB and reflected in real time on the web dashboard.
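The averaging step can be sketched in NumPy. The toy example below uses three models over the four FMD classes (the deployed system averages five); the probability vectors are invented for illustration.

```python
import numpy as np

def soft_vote(prob_outputs: list[np.ndarray]) -> np.ndarray:
    """Average class-probability distributions from N models,
    i.e. P_ensemble(x) = (1/N) * sum_i P_i(x), then return the
    argmax label for each sample."""
    avg = np.mean(np.stack(prob_outputs), axis=0)
    return avg.argmax(axis=1)

# One sample, four classes: the second model is overconfident in the
# wrong class, but averaging the distributions recovers class 2.
p1 = np.array([[0.10, 0.15, 0.60, 0.15]])
p2 = np.array([[0.55, 0.05, 0.35, 0.05]])
p3 = np.array([[0.05, 0.10, 0.70, 0.15]])
label = soft_vote([p1, p2, p3])
assert label[0] == 2
```

Because the confidences are preserved before the argmax, a single confidently wrong model is outvoted by two moderately confident correct ones, which is exactly the borderline-case behavior that motivates soft voting over hard voting.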

3.4. Web Dashboard

The next phase of the project involves building a real-time geospatial dashboard for visualizing the collected data using the real-time coordinates captured while uploading the data through the mobile application. For this, we employed Streamlit, an open-source Python framework for quickly building interactive web applications, and Folium, a Python library that creates interactive maps on top of Leaflet.js, a powerful JavaScript mapping library. The web dashboard contains the following components: (a) home page; (b) point map; (c) pin map; (d) vaccination heat map; and (e) cattle count. An example of the web dashboard is illustrated in Figure 5.
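A sketch of how the dashboard data might be shaped before rendering is shown below. The records, coordinates, and farmer IDs are invented for illustration (only the mandal names come from the text), and the actual Folium rendering (folium.Map, folium.Marker) is omitted.

```python
from collections import Counter

# Illustrative anonymized records, as they might be retrieved from
# Cosmos DB; only the mandal names mirror those on the dashboard.
records = [
    {"farmer": "F001", "mandal": "Mantheni",    "lat": 18.65, "lon": 79.66},
    {"farmer": "F002", "mandal": "Ramagiri",    "lat": 18.71, "lon": 79.45},
    {"farmer": "F003", "mandal": "Mantheni",    "lat": 18.66, "lon": 79.64},
    {"farmer": "F004", "mandal": "Thadicherla", "lat": 18.58, "lon": 79.52},
]

# Mandal-wise counts feed the cattle-count chart.
counts = Counter(r["mandal"] for r in records)

# One (lat, lon, popup) tuple per farmer feeds the pin map markers.
pins = [(r["lat"], r["lon"], r["farmer"]) for r in records]

assert counts["Mantheni"] == 2
```

In the Streamlit app, `counts` would back a bar or donut chart and each tuple in `pins` would become a `folium.Marker` with the farmer details in its popup.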

3.5. Home Page

This is the landing page of the web dashboard; only authenticated users can access the dashboard from here. Registered users can log in using their valid credentials. Figure 6 shows the home page of the dashboard.

3.5.1. Point Map

This displays a point-based map that includes constituency-wise information such as district, mandal, and village. Each point corresponds to an individual cow’s images uploaded through the mobile application. Clicking on a point opens up an image pop-up showing the exact cattle photo associated with that location. This map helps to visualize the distribution of disease-related images across regions. For example, Peddapalle is the district, Mantheni is the mandal, and the village name is Chinna Odala. Figure 7 shows an illustration of the point map. Users can access these maps after successfully logging in. An anonymized list of farmers along with counts of female and male cattle is displayed in the sidebar of the web dashboard.

3.5.2. Pin Map

The pin map displays farmer-level locations. These data points are also retrieved from the input given by the farmers in the mobile application. Each pin corresponds to a unique farmer (anonymized), not each image. When clicked, the pin shows details such as Ear Tag ID, breed type, age, and gender, as shown in Figure 8. The term “Tp9” refers to the Ear Tag ID assigned to each animal. This map is designed for field-level monitoring of farmer profiles.

3.5.3. Vaccination Heat Map

The vaccination heat map is an interactive map that, like the pin map, allows users to select and filter cattle locations based on specific vaccination types and displays the same information as the pin map. It also includes a donut chart that displays the percentage of cattle vaccinated with the user-selected vaccination type, broken down by mandal, where Ramagiri, Mantheni, and Thadicherla represent different mandals. These are illustrated in Figure 9.

3.5.4. Cattle Count

The cattle count is a visualization chart showing the mandal-wise cattle distribution for better regional understanding. The counts are derived from data entered in the mobile application and the mandal names stored in the database, where Ramagiri, Mantheni, and Thadicherla represent different mandals, as illustrated in Figure 10.

4. Results

Each model was trained using an 80–20 train–test split, with a batch size of 32, an image size of 224 × 224 pixels, and up to 50 epochs. Early stopping was applied with a patience of five epochs: training was halted if validation performance did not improve for five consecutive epochs, and the best-performing weights were restored to prevent overfitting. Model performance was evaluated using accuracy, precision, recall, and F1-score.
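The reported metrics follow their standard definitions; as a worked sketch with hypothetical confusion counts (not the study's actual confusion matrix):

```python
def binary_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Accuracy, precision, recall, and F1-score from binary
    confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
    }

# Hypothetical counts for an LSD (positive) vs. non-LSD test split.
m = binary_metrics(tp=160, fp=3, fn=2, tn=163)
```

For the four-class FMD task the same formulas apply per class, with the per-class values then averaged.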
In addition to individual model evaluation, a weighted soft-voting ensemble was constructed by aggregating class probability outputs from multiple pretrained architectures. The ensemble predictions were obtained by selecting the class with the highest weighted probability. The results of the LSD and FMD classification are described in Table 1 and Table 2, respectively. Of the five models trained for binary classification of LSD and non-LSD, EfficientNetB0 outperformed the other pretrained models with the highest validation accuracy of 98.36%. The FMD-evaluated architectures achieved high testing accuracies ranging from 90.48% to 94.05%. EfficientNetB0 and EfficientNetV2S demonstrated the strongest generalization, achieving the highest testing accuracy of 94.05%. These models are further evaluated by integrating them with the soft-voting ensembling module, and the final predictions are then displayed on the web dashboard to enable real-time classification. The use cases can be divided into (a) batch processing from the database and (b) instant prediction.

4.1. Classification Performance

Table 1 and Table 2 present the performance of the models for LSD and FMD classification. For LSD (binary: LSD vs. non-LSD), EfficientNetB0 achieved the highest test accuracy of 98.36% and precision, recall, and F1-scores in the range of 0.95–0.98. A weighted soft-voting ensemble combining five trained models achieved a test accuracy of 96.45%, which is comparable to the strongest individual model. While the ensemble did not substantially exceed the best single architecture in terms of accuracy, it provides more stable predictions across samples by reducing model-specific variability.
It can be observed that the FMD-evaluated architectures achieved high training accuracy (95.82–99.53%) and competitive testing accuracy ranging from 90.48% to 94.05%. EfficientNetB0 and EfficientNetV2S demonstrated the strongest generalization, achieving the highest testing accuracy of 94.05%. The observed gap between training and testing performance indicates memorization risk, which is expected given the limited dataset size and controlled image acquisition conditions. The soft-voting ensemble achieved a test accuracy of 99.84% for FMD classification. This improvement reflects the strong agreement among individual models on visually distinctive lesion patterns. However, given the dataset size and internal validation setting, the ensemble results are interpreted as indicative of prediction stability rather than definitive generalization performance.

4.2. Contextual Comparison with Related Works

To contextualize the performance of the proposed classification models, this section provides a qualitative comparison with recent studies on automated cattle disease detection and classification, particularly focusing on Lumpy Skin Disease (LSD) [34] and external disease classification tasks. It is important to note that direct numerical comparison across studies should be interpreted with caution, as prior works employ different datasets, disease categories, and evaluation protocols.
Alam et al. (2025) [35] developed an automated LSD classification system using Inception-V3 for feature extraction and Support Vector Machines (SVMs) for classification. Their approach achieved an overall accuracy of 84%, with precision, recall, and F1-scores of 80%, 83%, and 82%, respectively. In contrast, the proposed EfficientNetB0 model achieved a significantly higher test accuracy of 98.36% for binary LSD classification, with precision, recall, and F1-scores in the range of 0.95–0.98. This demonstrates a substantial performance improvement over Alam et al.’s machine learning-based approach.
Rony et al. (2023) [36] presented a deep learning-based system that employed conventional CNN architectures (e.g., Inception-V3, VGG-16) to detect multiple external cattle diseases, reporting an overall accuracy of 95%. Although this result is strong, the two tasks are not directly comparable. For multiclass Foot and Mouth Disease (FMD) classification, the proposed models (EfficientNetB0 and EfficientNetV2S) achieved a testing accuracy of 94.05%, which is competitive even under the more challenging multiclass evaluation setting. Moreover, the proposed system demonstrates better generalization, evidenced by a smaller gap between training and testing performance compared with the widely varying accuracies reported by the authors.
Dommeti et al. (2023) [37] evaluated multiple deep CNN architectures for LSD detection and observed validation accuracies ranging from 74% to 90%, despite high training accuracies (up to 98%). In comparison, the present study shows a consistently high test performance with EfficientNet-based models. While these results cannot be directly benchmarked against prior studies due to dataset and protocol differences, they indicate that modern lightweight CNN architectures are well-suited for cattle disease image analysis under practical constraints.
Importantly, this work emphasizes system-level integration, including mobile data acquisition, cloud-based inference, and dashboard visualization, which has not consistently been addressed in prior studies. The reported results, therefore, reflect internal validation performance and demonstrate the feasibility of deploying such a model in real-world livestock monitoring pipelines rather than establishing a new benchmark.
In summary, the proposed deep learning framework demonstrates strong performance for both binary and multiclass cattle disease classification under internal validation settings. While the underlying models are based on standard ImageNet fine-tuning, the contribution of this work lies in systematic evaluation across multiple architectures and its integration into a practical, non-invasive livestock disease detection pipeline.

4.3. Error Analysis and Learning Behavior

To further analyze model behavior beyond aggregate performance metrics, confusion matrices were generated for the best-performing models on both LSD and FMD classification tasks, as shown in Figure 11a and Figure 11b, respectively. They indicate that minor misclassifications occur primarily between visually similar regions such as the knuckles and mouth, while healthy classes, particularly healthy foot, are identified with high reliability, highlighting the model’s robustness for disease detection. The matrices indicate broadly balanced performance across classes, without a consistent tendency towards false-negative disease predictions. In addition, learning curves were examined to assess training dynamics and potential overfitting, as shown in Figure 12a,b for LSD classification and Figure 12c,d for FMD classification. The close alignment between training and validation accuracy, together with stable validation loss trends, suggests controlled learning behavior rather than memorization. Nevertheless, given the limited dataset size and internal validation setting, some degree of memorization cannot be ruled out. These analyses provide additional confidence in the reliability of the proposed models for animal health applications.

4.4. Batch Processing from the Database

In this use case, multiple images are retrieved from the database, and predicted labels are stored back in the database. The system visualizes the distribution of healthy and unhealthy cattle, and also the distribution of cattle infected with LSD and FMD and healthy cattle using bar charts. Farmer details, along with cattle labels, are displayed in a table and can be downloaded as a CSV file. This approach is particularly useful for large-scale herd monitoring, where manually analyzing each image might be time-consuming.

4.5. Instant Prediction

In this use case, external images can be tested for LSD and FMD by entering the farmer’s details. The external images will be stored in Azure Blob storage, and labels will be stored in Azure Cosmos DB.

5. Discussion

Our research on the early detection of cattle diseases provides a practical step toward digitizing rural veterinary healthcare through AI-powered image classification that is used in real-time scenarios with actionable insights, along with one-time analysis and visualizations.
For LSD, models achieved accuracy values between 93% and 98%, with corresponding precision, recall, and F1-score values ranging from 0.95 to 0.98. These results suggest that the visual characteristics of LSD are sufficiently separable and distinctive for reliable classification; the soft-voting ensemble accuracy of 96.45% may not exceed the best single-model performance, but it can provide stable predictions. The variation in accuracy across some models indicates that these models were sensitive to the limited variability in the dataset, and further stability can be achieved by including additional data from diverse environmental and camera conditions.
For FMD, all models achieved precision, recall, and F1-score values in the range of 0.81–1.00, and the soft-voting ensemble reported a test accuracy of 99.84%, reflecting strong agreement among the individual models. These results were obtained because the dataset contains relatively few images per class, many of which share similar background, lighting, and orientation conditions, which may make the task easier than expected. A gap was observed between training and testing performance, and this should be addressed in future studies.
However, most existing studies acknowledge the problems caused by a lack of datasets and high-quality images. The soft-voting ensemble combines the probability outputs of multiple models, improving robustness and reducing misclassification within the system-level deployment design. The ensemble was not evaluated as a standalone model against individual architectures; therefore, no quantitative performance gain is claimed over the best-performing single model. Instead, the ensemble was used to mitigate model-specific biases and improve prediction consistency under varying image conditions. While a direct comparison with single models was not performed, prior studies show that soft-voting ensembling generally enhances predictive performance [31,32]. In our system, this ensembling technique helped to handle a variety of images irrespective of lighting and camera conditions. This observation aligns with previous findings and reinforces the need for larger, more diverse datasets. Another important aspect is the integration of the classification models into a real-time system. The practical dashboards and cloud pipeline confirm the feasibility of deployment on a much larger scale, but their effectiveness ultimately depends on the robustness and performance of the models.
Our future work will focus on addressing these limitations. Expanding the dataset with high-quality images of varied breeds, more camera types, and mixed orientations will help to reduce overfitting and improve generalization. Explainability techniques such as Grad-CAM and SHAP will also be integrated to visually interpret model decisions. This will help veterinary practitioners understand why a model makes certain predictions, creating a feedback loop that aids training and continuous improvement to ensure long-term effectiveness. Overall, our research stands as a promising innovation at the intersection of agriculture, artificial intelligence, and rural development. It demonstrates how thoughtfully designed AI systems can bring practical, real-world benefits to underserved communities by improving livestock health, minimizing economic loss, and promoting sustainable farming practices that support farmers and improve their livelihoods.

Author Contributions

Conceptualization, D.S.R. and P.C.S.R.; methodology, A.R., A.S.K. and P.J.V.N.S.; software, A.S.K., N.S. and V.S.K.; validation, N.R., P.V.R. and P.J.V.N.S.; formal analysis, A.R. and N.S.; investigation, D.S.R., V.S.K. and A.S.K.; resources, P.C.S.R. and P.V.R.; data curation, N.S. and P.J.V.N.S.; writing—original draft, A.S.K. and A.R.; writing—review and editing, D.S.R., P.C.S.R. and N.R.; visualization, A.S.K. and N.S.; supervision, D.S.R. and P.C.S.R.; project administration, D.S.R.; funding acquisition, none. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study as it involved non-interventional field data collection of cattle images and associated information. No medical or invasive procedures were conducted, and all data were anonymized prior to analysis and publication.

Informed Consent Statement

Verbal informed consent was obtained from all farmers involved in the study prior to their participation. Verbal consent was used instead of written consent due to the participants’ familiarity with oral agreements in field conditions.

Data Availability Statement

New data from the mobile device were directly uploaded by the farmers for testing; the rest of the dataset presented in this study is included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv 2020, arXiv:1905.11946. [Google Scholar]
  2. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. arXiv 2014, arXiv:1409.4842. [Google Scholar]
  3. Marques, G.; Agarwal, D.; De la Torre Díez, I. Automated Medical Diagnosis of COVID-19 through EfficientNet Convolutional Neural Network. Appl. Soft Comput. 2020, 96, 106691. [Google Scholar] [CrossRef]
  4. Tan, M.; Le, Q. EfficientNetV2: Smaller Models and Faster Training. In Proceedings of the International Conference on Machine Learning; PMLR: New York, NY, USA, 2021; pp. 10096–10106. [Google Scholar]
  5. Pacal, I.; Celik, O.; Bayram, B.; Cunha, A. Enhancing EfficientNetV2 with Global and Efficient Channel Attention Mechanisms for Accurate MRI-Based Brain Tumor Classification. Clust. Comput. 2024, 27, 11187–11212. [Google Scholar] [CrossRef]
  6. Nethravathi, P.; Vaikunta, P. Early Diagnosis of Lung Diseases with Deep Learning Using EfficientNetV2-S Architecture. In Proceedings of the 2025 International Conference on Inventive Computation Technologies (ICICT); IEEE: Piscataway, NJ, USA, 2025; pp. 436–441. [Google Scholar]
  7. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); IEEE: Piscataway, NJ, USA, 2016; pp. 770–778. [Google Scholar]
  8. Rajpurkar, P.; Irvin, J.; Zhu, K.; Yang, B.; Mehta, H.; Duan, T.; Ding, D.; Bagul, A.; Langlotz, C.; Shpanskaya, K.; et al. CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning. arXiv 2017, arXiv:1711.05225. [Google Scholar]
  9. Targ, S.; Almeida, D.; Lyman, K. ResNet in ResNet: Generalizing Residual Architectures. arXiv 2016, arXiv:1603.08029. [Google Scholar] [CrossRef]
  10. Wu, Z.; Shen, C.; Van Den Hengel, A. Wider or Deeper: Revisiting the ResNet Model for Visual Recognition. Pattern Recognit. 2019, 90, 119–133. [Google Scholar] [CrossRef]
  11. Theckedath, D.; Sedamkar, R.R. Detecting Affect States Using VGG16, ResNet50 and SE-ResNet50 Networks. SN Comput. Sci. 2020, 1, 79. [Google Scholar] [CrossRef]
  12. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  13. Apostolopoulos, I.D.; Mpesiana, T.A. COVID-19: Automatic Detection from X-ray Images Utilizing Transfer Learning with Convolutional Neural Networks. Phys. Eng. Sci. Med. 2020, 43, 635–640. [Google Scholar] [CrossRef]
  14. Albashish, D.; Al-Sayyed, R.; Abdullah, A.; Ryalat, M.H.; Almansour, N.A. Deep CNN Model Based on VGG16 for Breast Cancer Classification. In Proceedings of the 2021 International Conference on Information Technology (ICIT); IEEE: Piscataway, NJ, USA, 2021; pp. 805–810. [Google Scholar]
  15. Singh, D.; Singh, R.; Gehlot, A.; Akram, S.V.; Priyadarshi, N.; Twala, B. An Imperative Role of Digitalization in Monitoring Cattle Health for Sustainability. Electronics 2022, 11, 2702. [Google Scholar] [CrossRef]
  16. Darvesh, K.; Khande, N.; Avhad, S.; Khemchandani, M. IoT and AI Based Smart Cattle Health Monitoring. J. Livest. Sci. 2023, 14, 211–218. [Google Scholar] [CrossRef]
  17. Han, X.; Lin, Z.; Clark, C.; Vucetic, B.; Lomax, S. AI-Based Digital Twin Model for Cattle Caring. Sensors 2022, 22, 7118. [Google Scholar] [CrossRef]
  18. Vanga, S.R.; Reddy, T.N.; Venkatesh, C.; Chandu, B.H.S. Efficient Cattle Health Monitoring Using a Compact AI Model. In Proceedings of the 2024 5th IEEE Global Conference for Advancement in Technology (GCAT); IEEE: Piscataway, NJ, USA, 2024; pp. 1–6. [Google Scholar]
  19. Roopaei, M.; Bergmann, C.; Azemi, A.; Hardyman, K.; Hampton, J. Advancing Cattle Health: AI-Driven Innovations in Lameness Detection and Management. In Proceedings of the 2024 IEEE 14th Annual Computing and Communication Workshop and Conference (CCWC); IEEE: Piscataway, NJ, USA, 2024; pp. 98–104. [Google Scholar]
  20. Herbut, P.; Angrecka, S.; Godyń, D.; Hoffmann, G. The Physiological and Productivity Effects of Heat Stress in Cattle—A Review. Ann. Anim. Sci. 2019, 19, 579–593. [Google Scholar] [CrossRef]
  21. Awasthi, A.; Riordan, D.; Walsh, J. Non-Invasive Sensor Technology for the Development of a Dairy Cattle Health Monitoring System. Computers 2016, 5, 23. [Google Scholar] [CrossRef]
  22. Batla, A.; Kikani, Y.; Joshi, D.; Jain, R.; Patel, K. Real Time Cattle Health Monitoring Using IoT, ThingSpeak, and a Mobile Application. J. Ethol. Anim. Sci. 2023, 5. Available online: https://ssrn.com/abstract=4547333 (accessed on 3 October 2023).
  23. Kumar, A.; Vardhan, V.H.; Swetha, J.; Shanmuga, P.R.; Mishra, P. Internet-Based Cattle Health Monitoring System Using Raspberry Pi. Int. J. Health Sci. 2022, 6, 1112–1120. [Google Scholar] [CrossRef]
  24. Zurnawita, Z.; Prabowo, C.; Kurnia, R.; Elfitri, I. A Review of Image Processing Technique for Monitoring the Growth and Health of Cows. J. Inf. Technol. Comput. Eng. 2023, 7, 8–18. [Google Scholar] [CrossRef]
  25. Mon, S.L.; Onizuka, T.; Tin, P.; Aikawa, M.; Kobayashi, I.; Zin, T.T. AI-Enhanced Real-Time Cattle Identification System through Tracking across Various Environments. Sci. Rep. 2024, 14, 17779. [Google Scholar] [CrossRef]
  26. Swain, K.B.; Mahato, S.; Patro, M.; Pattnayak, S.K. Cattle Health Monitoring System Using Arduino and LabVIEW for Early Detection of Diseases. In Proceedings of the 2017 Third International Conference on Sensing, Signal Processing and Security (ICSSS); IEEE: Piscataway, NJ, USA, 2017; pp. 79–82. [Google Scholar]
  27. Warcoder. Lumpy Skin Images Dataset. Available online: https://www.kaggle.com/datasets/warcoder/lumpy-skin-images-dataset (accessed on 3 October 2023).
  28. Agarwal, S. Cow Lumpy Disease Dataset. Available online: https://www.kaggle.com/datasets/shivamagarwal29/cow-lumpy-disease-dataset (accessed on 3 October 2023).
  29. Grubman, M.J.; Baxt, B. Foot-and-Mouth Disease. Clin. Microbiol. Rev. 2004, 17, 465–493. [Google Scholar] [CrossRef]
  30. Kumari, S.; Kumar, D.; Mittal, M. An Ensemble Approach for Classification and Prediction of Diabetes Mellitus Using Soft Voting Classifier. Int. J. Cogn. Comput. Eng. 2021, 2, 40–46. [Google Scholar] [CrossRef]
  31. Salur, M.U.; Aydın, İ. A Soft Voting Ensemble Learning-Based Approach for Multimodal Sentiment Analysis. Neural Comput. Appl. 2022, 34, 18391–18406. [Google Scholar] [CrossRef]
  32. Jabbar, H.G. Advanced Threat Detection Using Soft and Hard Voting Techniques in Ensemble Learning. J. Robot. Control 2024, 5, 1104–1116. [Google Scholar]
  33. Taha, A. Intelligent Ensemble Learning Approach for Phishing Website Detection Based on Weighted Soft Voting. Mathematics 2021, 9, 2799. [Google Scholar] [CrossRef]
  34. Coetzer, J.; Tuppurainen, E. Lumpy Skin Disease. Infect. Dis. Livest. 2004, 2, 1268–1276. [Google Scholar]
  35. Alam, F.; Ullah, A.; Rohaim, M.A.; Munir, M.; Hussain, A. An Automatic Approach for the Classification of Lumpy Skin Disease in Cattle. Trop. Anim. Health Prod. 2025, 57, 230. [Google Scholar] [CrossRef]
  36. Rony, M.; Barai, D.; Riad; Hasan, Z. Cattle External Disease Classification Using Deep Learning Techniques. In Proceedings of the 12th International Conference on Computing Communication and Networking Technologies (ICCCNT), Kharagpur, India, 6–8 July 2021; pp. 1–7. [Google Scholar] [CrossRef]
  37. Dommeti, D.; Nallapati, S.R.; Lokesh, C.; Bhuvanesh, S.P.; Padyala, V.V.P.; Srinivas, P.V.V.S. Deep Learning Based Lumpy Skin Disease (LSD) Detection. In Proceedings of the 3rd International Conference on Smart Data Intelligence (ICSMDI), Trichy, India, 30–31 March 2023; pp. 457–465. [Google Scholar] [CrossRef]
Figure 1. Mobile application workflow.
Figure 2. Sample images of cows with lumpy skin and normal skin.
Figure 3. Sample images of cattle infected with FMD and normal cattle.
Figure 4. Model prediction flow graph.
Figure 5. Example of the web dashboard.
Figure 6. Home page of the web dashboard along with authentication screen.
Figure 7. Illustration of point map.
Figure 8. Illustration of pin map.
Figure 9. Illustration of donut chart in vaccination heat map.
Figure 10. Illustration of cattle count distribution chart.
Figure 11. Confusion matrix illustrating the classification performance of EfficientNetB0 model for (a) LSD classification and (b) FMD classification.
Figure 12. Learning curves showing training and validation accuracy and loss during model training for (a,b) LSD classification and (c,d) FMD classification using EfficientNetB0.
Table 1. LSD classification results of each model.

Model                  Training Accuracy (%)   Testing Accuracy (%)
EfficientNetB0         98.14                   98.36
ResNet50               98.81                   97.62
EfficientNetV2B0       96.46                   97.47
VGG16                  96.57                   93.44
EfficientNetV2S        94.11                   95.38
Soft-Voting Ensemble   -                       96.45
Table 2. FMD classification results of each model.

Model                  Training Accuracy (%)   Testing Accuracy (%)
EfficientNetB0         99.47                   94.05
ResNet50               99.53                   90.48
EfficientNetV2B0       99.23                   91.07
VGG16                  95.82                   90.48
EfficientNetV2S        96.66                   94.05
Soft-Voting Ensemble   -                       99.84
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Rao, D.S.; Reddy, P.C.S.; Revathi, A.; Kiran, V.S.; Rajasekhar, N.; Sandhya, N.; Rao, P.V.; Karthik, A.S.; Sai, P.J.V.N. Empowering Rural Livestock Health: AI-Powered Early Detection of Cattle Diseases. AI 2026, 7, 137. https://doi.org/10.3390/ai7040137