Early Detection of Avian Diseases Based on Thermography and Artificial Intelligence

Simple Summary
From an economic point of view, timely information about the flock state is crucial for poultry farmers. When a flock is infected with a disease, if quick and necessary measures are not taken, the disease will spread and affect the whole flock. Artificial intelligence is one of the popular methods in precision livestock farming and is effective in various fields such as weight measurement, feed intake estimation, and disease diagnosis. So far, chicken disease has been diagnosed using sound signal processing and video recordings. This study attempted to develop a new and rapid method of poultry disease diagnosis based on thermography for data collection and artificial intelligence for data analytics. With the proposed method, Avian Influenza and Newcastle Disease can be detected within 24 h after virus infection.

Abstract
Non-invasive measures have a critical role in precision livestock and poultry farming as they can reduce animal stress and provide continuous monitoring. Animal activity can reflect physical and mental states as well as health conditions. If any problems are detected, an early warning will be provided for necessary actions. The objective of this study was to identify avian diseases by using thermal-image processing and machine learning. Four groups of 14-day-old Ross 308 Broilers (20 birds per group) were used. Two groups were infected with one of the following diseases: Newcastle Disease (ND) and Avian Influenza (AI), and the other two were considered control groups. Thermal images were captured every 8 h and processed with MATLAB. After de-noising and removing the background, 23 statistical features were extracted, and the best features were selected using the improved distance evaluation method. Support vector machine (SVM) and artificial neural networks (ANN) were developed as classifiers. Results indicated that the former classifier outperformed the latter for disease classification.
The Dempster–Shafer evidence theory was used as the data fusion stage if neither ANN nor SVM detected the diseases with acceptable accuracy. The final SVM-based framework achieved 97.2% and 100% accuracy for classifying AI and ND, respectively, within 24 h after virus infection. The proposed method is an innovative procedure for the timely identification of avian diseases to support early intervention.


Introduction
A modern broiler house accommodates tens of thousands of animals, and layer houses hold even more. The high rearing stocking density is favorable for economic profit but increases the risk of bacterial/viral transmission and disease outbreaks [1]. For instance, the USDA Animal and Plant Health Inspection Service reported that by 18 May 2023, …

Materials and Methods
The virus infection was verified based on clinical signs, an RT-PCR test, and virus isolation from infected tissues. The RT-PCR test was conducted using protocol 2, as described in [26]. Birds need feed and water intake during the first 8 h after the lights are turned on, so infection was conducted after that period to avoid stressing the birds. A total of seven sections were planned: Section 1 indicates the 8th hour after disease infection, Section 2 the 16th hour, …, and Section 7 the 56th hour after disease infection.
Thermal images were captured using a FLIR A65 thermal camera with a resolution of 640 × 512 pixels. The camera performs well within an ambient temperature range of −25 to 135 °C (Figure 2a). The captured images were loaded into FLIR Tools software version 4.1 to remove the background (Figure 2b), and the preprocessed images were further enhanced in MATLAB 2020 (MathWorks Inc., Natick, MA, USA). No existing packages were used in the following sections; all steps were coded with the listed formulas. The total number of chickens was 80 (40 birds for the ND and control samples and 40 birds for the AI and control samples), and 240 thermal images were collected in each section (three images were taken of each bird). The experiment was performed within three consecutive days after virus infection. The emissivity setting and the distance between the camera and the birds were 0.95 and 50 cm, respectively. Close-distance data collection could be achieved by movable robotic systems once they become available. As ambient temperature and relative humidity could influence the accuracy of thermography, they were measured for each image using a digital temperature-humidity meter (Figure 3).
The proposed algorithm framework to identify avian diseases (ND and AI) based on thermal images and machine learning is shown in Figure 4. The thermal images of chickens were captured by the FLIR camera and preprocessed in FLIR Tools and MATLAB 2020. The preprocessing steps included de-noising, background removal, and image enhancement (erosion and dilation) using the Image Region Analyzer and Image Segmenter toolboxes. In the data mining step, 23 statistical features were extracted from each image, and the best features were selected.
The best features were used as inputs to the machine learning classifiers (SVM and ANN). The outputs of the classifiers were improved by the Dempster-Shafer (D-S) evidence theory, after which the diseases were detected.
Deep learning-based techniques, such as convolutional neural networks, can process thermal images directly without feature extraction but require a large number of images to obtain robust performance, which was not available in this study. Instead, extracting features from thermal images and applying classical machine learning models helps reveal which features are important for accurate performance on small datasets. The machine learning-based methods were also GPU-free (no graphics processing units required), which could be economical for poultry producers as well.

Feature Extraction and Selection
The raw data contained relatively little information for classification and were not used directly as input for the classifiers. Therefore, statistical features were extracted based on the intensity information of the thermal images. Table 1 shows the formulas for calculating the 23 statistical features, in which x(n) is the intensity of data point n on a thermal image (n = 1, 2, …, N) [27].
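As an illustration, a few of these intensity-based features can be computed as follows. This is a minimal Python sketch (the authors worked in MATLAB, and `extract_features` is a hypothetical helper); only a representative subset of the 23 features is shown, using standard statistical definitions.

```python
import numpy as np

def extract_features(intensities):
    """Compute a representative subset of the 23 statistical features
    from the intensity values x(n), n = 1..N, of one thermal image."""
    x = np.asarray(intensities, dtype=float).ravel()
    mean = x.mean()
    std = x.std()                          # population standard deviation
    rms = np.sqrt(np.mean(x ** 2))         # root mean square
    m3 = np.mean((x - mean) ** 3)          # third central moment
    m4 = np.mean((x - mean) ** 4)          # fourth central moment
    return {
        "maximum": x.max(),
        "mean": mean,
        "std": std,
        "rms": rms,
        "skewness": m3 / std ** 3,
        "kurtosis": m4 / x.var() ** 2,     # 4th central moment / variance^2
        "crest_factor": x.max() / rms,
        "rms_over_mean": rms / mean,
    }
```

In practice each thermal image, after background removal, would yield one such feature vector.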

Table 1 includes features such as the maximum, harmonic mean, root mean square, quadratic mean square, average deviation from the mean, the third through sixth central moments, crest factor, impulse factor, skewness, kurtosis, the sum of squares, and several ratios of these quantities (e.g., the root mean square divided by the mean and the fourth central moment divided by the square of the variance).
If all 23 features were fed into the classifiers, model complexity would increase. Therefore, the improved distance evaluation (IDE) procedure was used for feature dimensionality reduction. The seven steps shown in Figure 5 are executed as follows. First, the mean distance and variance are computed within each class (intra-class) and between classes (inter-class). In the fifth step, the reward factor is computed. The best features have the smallest intra-class differences and the largest inter-class differences. Based on that, the sixth step calculates the intra-class and inter-class difference scores, which are normalized in the seventh step. Finally, the best features are selected based on a threshold [28]. The threshold is determined in Section 3.1.
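The scoring idea behind IDE can be sketched in Python as follows. This is a simplified illustration, not the exact seven-step procedure of [28]: each feature is scored by its inter-class distance relative to its intra-class scatter, the scores are normalized to [0, 1], and features above a threshold are kept. The helper name `ide_scores` is an assumption.

```python
import numpy as np

def ide_scores(X, y, threshold=0.7):
    """Simplified distance-evaluation scoring: a feature scores high when
    its class means are far apart (inter-class distance) relative to the
    spread within each class (intra-class distance)."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    classes = np.unique(y)
    raw = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        means = [X[y == c, j].mean() for c in classes]
        intra = np.mean([X[y == c, j].std() for c in classes]) + 1e-12
        inter = np.mean(np.abs(np.subtract.outer(means, means)))
        raw[j] = inter / intra
    scores = raw / raw.max()            # normalize so the best feature scores 1
    selected = np.flatnonzero(scores >= threshold)
    return scores, selected
```

A feature that separates the infected and control classes cleanly receives a score near one, mirroring the "closer-to-one is better" interpretation used in Section 3.1.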

Artificial Neural Network
ANNs are biologically inspired computer programs designed to simulate the way the human brain processes information. The ANN used here included an input layer, a hidden layer, and an output layer (Figure 6). The layers were connected through neurons: the number of neurons in the input layer was determined by the number of best features selected by IDE, the number of classes determined the number of neurons in the output layer, and the number of neurons in the hidden layer was decided by trial and error during model training. Various ANNs have been applied in classification, regression, and modeling [29]. The optimal ANN was used to detect avian diseases in this study.

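The 5 × 8 × 2 architecture described above can be sketched as a small feed-forward network. This is a hypothetical Python/NumPy illustration trained on synthetic data (the authors trained their ANN in MATLAB); the data distribution, weight initialization, and learning rate are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the five IDE-selected features of 120 images:
# healthy birds centred at 0, infected birds centred at 1 (invented data).
X = np.vstack([rng.normal(0.0, 0.3, (60, 5)), rng.normal(1.0, 0.3, (60, 5))])
y = np.array([0] * 60 + [1] * 60)
T = np.eye(2)[y]                          # one-hot targets, 2 output neurons

# 5 x 8 x 2 network, matching the AI-detection architecture in the text.
W1 = rng.normal(0.0, 0.5, (5, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 2)); b2 = np.zeros(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(3000):                     # plain batch gradient descent
    H = sigmoid(X @ W1 + b1)              # hidden-layer activations
    O = sigmoid(H @ W2 + b2)              # output-layer activations
    dO = (O - T) * O * (1 - O)            # squared-error delta at the output
    dH = (dO @ W2.T) * H * (1 - H)        # delta back-propagated to hidden layer
    W2 -= lr * H.T @ dO / len(X); b2 -= lr * dO.mean(axis=0)
    W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(axis=0)

train_accuracy = (O.argmax(axis=1) == y).mean()
```

For the ND classifier, the hidden layer would hold seven neurons (5 × 7 × 2) instead of eight.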
Figure 6. Architecture of the Artificial Neural Network.

Support Vector Machine
The SVM is a robust classifier first introduced by Cortes and Vapnik in 1995, based on statistical learning theory [30]. The SVM maximizes the margin between two classes; separating the classes using a hypothetical hyperplane is the main idea. Kernel options include the linear, quadratic, and Gaussian Radial Basis Function (RBF) kernels [31]. Figure 7a shows the optimal margin for a linear kernel [32], and Figure 7b shows the RBF kernel [33]. The SVM was initially introduced for binary classification (two classes) and was later extended to multiclass problems. The RBF kernel was selected in this study due to its optimal performance [34].

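A hedged sketch of how such an RBF-SVM could be configured with scikit-learn (the authors used MATLAB). The synthetic data are invented, and the sigma-to-gamma conversion is an illustrative assumption; the box-constraint values follow Table 8 in the text.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Synthetic stand-in for the five selected features (invented data):
# healthy birds centred at 0, infected birds centred at 1.
X = np.vstack([rng.normal(0.0, 0.3, (60, 5)), rng.normal(1.0, 0.3, (60, 5))])
y = np.array([0] * 60 + [1] * 60)

# An RBF kernel with sigma = 1 corresponds to gamma = 1 / (2 * sigma**2) = 0.5;
# the "box constraint" maps to scikit-learn's C (5 for AI, 10 for ND in the text).
svm = SVC(kernel="rbf", gamma=0.5, C=5.0)
svm.fit(X, y)
train_accuracy = svm.score(X, y)
```

Switching `kernel` to `"linear"` would reproduce the linear-hyperplane case of Figure 7a.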


Dempster-Shafer Evidence Theory
The D-S evidence theory was first proposed by Dempster and further developed and refined by Shafer [35]. It has been used in fault diagnosis [36] and disease diagnosis [27]. The theory addresses aspects connected with uncertainty and lack of knowledge and is favorable for solving real-life problems [37]. In this study, whenever the ANN and SVM could not detect the diseases with acceptable accuracy, the D-S theory was used in a data fusion stage.
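Dempster's rule of combination, the core of this fusion stage, can be sketched in Python as follows; the two mass functions at the end are invented numbers purely for illustration, not values from this study.

```python
def ds_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments
    over the same frame of discernment. Hypotheses are frozensets; mass
    assigned to conflicting (disjoint) pairs is renormalised away."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    k = 1.0 - conflict
    return {h: w / k for h, w in combined.items()}

# Two classifiers' hypothetical beliefs over {healthy, sick}:
H, S = frozenset({"healthy"}), frozenset({"sick"})
m_svm = {S: 0.8, H: 0.2}
m_ann = {S: 0.6, H: 0.4}
fused = ds_combine(m_svm, m_ann)
```

With these invented masses the fused belief in "sick" rises to about 0.86, illustrating how agreement between two moderately confident sources strengthens the final verdict.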

Classifier Evaluation Metrics
The metrics used to evaluate the developed models included sensitivity, specificity, and training and testing accuracy for healthy ("acc_healthy") and unhealthy ("acc_unhealthy") birds. True Positives, False Positives, True Negatives, and False Negatives are defined in Table 2 based on a confusion matrix and were used to calculate the evaluation metrics. The calculation procedures for sensitivity, specificity, and accuracy are presented in Equations (1)-(6).
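These metrics reduce to the standard confusion-matrix definitions, which can be sketched as follows; the counts in the usage line are hypothetical, and `classification_metrics` is an illustrative helper name.

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts,
    following the usual definitions behind Equations (1)-(6)."""
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for one test split of 20 birds:
sens, spec, acc = classification_metrics(tp=9, fp=1, tn=8, fn=2)
```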

Data-Mining Results
In this stage, 23 statistical features were extracted from all raw data collected. The AI infection group and its control group were defined as Group A; the ND infection group and its control group were defined as Group B. Each feature was scored by IDE, and the best features, i.e., those with the highest scores, were selected. The outputs of IDE were used as inputs of the classifiers. Tables 3 and 4 show the feature scores for Groups A and B, respectively. The best threshold for Group A was 0.7 because there was a significant difference between 0.7 and the lower threshold limits. A score closer to one indicates better feature quality for classification, so all features with scores of 0.7 or higher were favorable for classifying Group A. Therefore, F2 (maximum), F4 (quadratic mean square), F9 (root mean square divided by the mean), F21 (the sixth central moment), and F22 (the fourth central moment divided by the square of the variance), which had feature scores over 0.7 in all seven sections, were selected to identify AI. Based on Table 4, the best threshold for Group B was 0.8 because there was a significant difference between 0.8 and the lower threshold limits. F2 (maximum), F3 (standard deviation), F9 (root mean square divided by the mean), F16 (skewness), and F17 (kurtosis), which had feature scores over 0.7 in all seven sections, were selected to identify ND.
The thresholds (0.7 for Group A and 0.8 for Group B) were selected manually based on the analysis of the feature scores. For Group A, the difference between the scores for Section 3 and Section 5 does not look significant (same for Group B, Section 4). But most of the feature scores fall into the set thresholds. We wanted to keep consistent thresholds for all groups.

Classifier Performance
This study included 120 thermal images for each group. Table 5 shows the number of thermal images and data splitting for the ANN and SVM development. The data splitting for ANN was 70% for training, 15% for cross-validation, and 15% for testing. The splitting for SVM was 70% for training and 30% for testing. The classifiers were cross-validated, and the average performance was reported. The performance was also used to determine the model parameters, such as Sigma and C for SVM and the number of neurons in the hidden layer for ANN.
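The data splitting above can be sketched with a small index-shuffling helper; the helper name and the random seed are assumptions for illustration.

```python
import numpy as np

def split_indices(n, fractions=(0.70, 0.15, 0.15), seed=0):
    """Shuffle n sample indices and split them into train /
    cross-validation / test subsets (70/15/15 for the ANN; for the SVM
    the last two fractions would be merged into a single 30% test set)."""
    idx = np.random.default_rng(seed).permutation(n)
    n_train = round(fractions[0] * n)
    n_val = round(fractions[1] * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_indices(120)   # 120 thermal images per group
```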

ANN Performance
As the outputs of IDE were five features for both the AI and ND groups, the number of neurons in the input layer was five, the number of neurons in the output layer was two (because each group had two classes), and the number of neurons in the hidden layer was determined as eight for AI and seven for ND based on the evaluation metrics. Table 6 shows the ANN performance in detecting AI at all data collection times (sections) with the 5 × 8 × 2 structure. The testing accuracy of the ANN for detecting AI in Section 1 (the 8th hour after virus infection) was 70.37%, while its validation accuracy was 75.93%. The accuracy reached 100% in Section 7 (the 56th hour after virus infection), and an acceptable accuracy of 92.59% was obtained in Section 4. The specificity in Section 3 and later was 100%, meaning the ANN produced no False Positives (no healthy bird was misclassified as sick). The sensitivity in Section 4 was 85.71%, indicating that the classifier still produced some False Negatives (sick birds misclassified as healthy). Table 7 shows the ANN performance in detecting ND at all data collection times (sections) with the 5 × 7 × 2 structure. The testing accuracy of the ANN for detecting ND was 68.52% in Section 1 and 100% in Section 5. In all sections, the specificity was higher than the sensitivity, meaning the main problem of the ANN was misdiagnosing unhealthy birds as healthy (False Negatives).

SVM Performance
The outputs of IDE were used as inputs of the SVM. The RBF kernel function, the Sequential Minimal Optimization solver, and a sigma (σ) of 1 were selected as key parameters for the SVM (Table 8). The box constraint was 5 for AI prediction and 10 for ND classification. Table 9 shows the SVM performance in detecting AI in all sections. The classifier obtained 97.22% testing accuracy in Section 3 (the 24th hour after virus infection). The testing accuracy dropped to 77.78% in Section 4 but rebounded to over 94% from Section 5 onward. In comparison, the ANN achieved 100% testing accuracy in Section 4 for classifying AI. The discrepancy may be due to the SVM's lack of classification ability in Section 4, where the D-S evidence theory was introduced to improve AI classification accuracy. The D-S evidence theory is applied in Section 3.3. Table 10 shows the SVM performance in detecting ND in all sections. In Section 1, the overall testing accuracy was 80.56%, related to the high sensitivity (100.00%). In Section 2, the overall testing accuracy dropped further to 78.77%. Similarly, the ANN had a low testing accuracy of 74.07% in identifying ND in Section 2 (Table 7). These results all indicate the difficulty of accurate ND diagnosis within the first 16 h of infection. Part of the reason can be found in the feature scores calculated by IDE in Tables 3 and 4, where the scores in Section 2 were relatively lower than those in other stages. Therefore, Section 2 was determined as the time point for introducing the D-S evidence theory to improve ND classification accuracy. The D-S evidence theory is applied in Section 3.3.
In Section 3, the SVM reached 100% testing accuracy. In other sections, the specificity and sensitivity fluctuated. This could be attributed to the natural regulation of bird body temperature at different hours of the day: the body temperature was lowest at 1 a.m., gradually increased until 8 a.m., dropped until 12 p.m., reached its maximum from 12 p.m. to 6 p.m., and then dropped again until 1 a.m. The data collection started at 8 a.m. in Section 1, at 4 p.m. in Section 2, and at 12 p.m. in Section 3. In Section 2, both healthy and unhealthy birds had elevated body temperatures, causing temperature similarities in the thermal images and subsequently poor classification performance. Meanwhile, the virus may not have fully spread inside the birds' bodies, making the unhealthy birds indistinguishable from healthy ones. Since the SVM outperformed the ANN, it was selected for further analysis, and the confusion matrices for classifying the two diseases in all sections are presented in Figure 8. The figure contains information such as the sensitivity, specificity, and overall accuracy of the SVM in both the training and testing stages. In Section 4, the sensitivity and specificity were 85.71% and 72.73%, respectively, and the SVM wrongly classified 6 of 18 AI-infected birds as healthy and 2 of 18 healthy birds as AI-infected. On the other hand, in Section 2, the sensitivity and specificity were 72.73% and 85.71%, respectively, and the SVM wrongly classified 2 of 18 ND-infected birds as healthy and 6 of 18 healthy birds as ND-infected.


Data Fusion Results
As mentioned earlier, whenever the ANN and SVM could not detect a disease with acceptable accuracy, the D-S theory was used as the data fusion stage. According to the references reviewed in this study, performance was considered acceptable when its value exceeded 80%; the acceptable performance level varies among studies. In Section 3.2, the classifiers had problems detecting AI in Section 4 and ND in Section 2, where the D-S theory was therefore introduced. Tables 11 and 12 show the results of data fusion to identify AI in Section 4 and ND in Section 2 with the Dempster-Shafer evidence theory, respectively. Based on Table 11, the specificity increased from 72.73% (Table 9) to 100% and the sensitivity increased from 85.71% (Table 9) to 98.15%; the performance of AI classification via SVM in Section 4 was thus improved by the D-S evidence theory. Based on Table 12, the sensitivity increased from 72.73% (Table 10) to 82.90% and the specificity increased from 85.71% (Table 10) to 96.35%; the performance of ND classification via SVM in Section 2 was thus improved by the D-S evidence theory.

Discussion
In general, the SVM outperformed the ANN in identifying chickens infected with AI with higher sensitivity and testing accuracy. For example, the testing accuracy of AI in Section 1 was 86.11% for SVM (Table 9) and 70.37% for ANN (Table 6). This indicates that the SVM may do a better job in alerting producers if birds were infected with AI within the first 8 h of infection, which helps producers take early intervention and reduce economic loss. SVM did better when the input was the features extracted from raw data. But that does not mean SVM outperforms ANN in any classification tasks. Model comparison and tuning are still required to determine the optimal model for specific tasks.
According to Table 7, the performance of the ANN in detecting ND decreased after Section 5, but this problem was solved by the SVM, as indicated in Table 10. Furthermore, in Section 4, the accuracy of the ANN was very low (Table 7), while the SVM obtained over 90% testing accuracy in this section (Table 10). The low accuracy of the ANN was related to its inability to distinguish ND in Section 4. Other researchers have also confirmed that the SVM performs better than the ANN in poultry disease classification. Okinda et al. (2019) compared SVM and ANN for classifying Newcastle disease; the RBF-SVM, Cubic-SVM, and ANN achieved accuracies of 97.8%, 97.1%, and 96.9%, respectively [18]. Similar findings have been reported in other fields, such as mechanical fault diagnosis: Kankar et al. (2011) compared SVM and ANN for classifying ball bearing faults, and the SVM achieved 73.97% accuracy versus 71.23% for the ANN [38].
Based on Tables 11 and 12, the classification performance of AI and ND in Sections 4 and 2 was improved by D-S evidence theory. These results agreed with previous research [39,40]. Banakar et al. (2016) developed an intelligent device for diagnosing avian disease based on vocalization and signal processing. In their study, the accuracy of SVM increased from 83.33% to 91.15% after using the D-S evidence theory [27]. Khazaee et al. (2012) presented data fusion methodology by using ANN and SVM classifiers to distinguish between vibration conditions of planetary gears. They showed that the accuracy of the classifiers increases by more than 14% when using D-S evidence [41].
A summary of the SVM performance in diagnosing AI and ND is shown in Table 13, which is based on predicting AI in Section 4 and ND in Section 2 with the D-S evidence theory. With the proposed method, based on thermal images and machine learning, AI and ND can be detected within 24 h after virus infection (Section 3). The SVM may provide an alarm about the presence of the virus even within the first 8 h (Section 1), but the reporting accuracy was below 90%, which may produce False Positives and False Negatives and thereby degrade producers' confidence in the system. A better approach is to continuously collect the reported results and gain more confidence before sending a valid alarm within the first 24 h.
Previous research has also investigated avian disease detection via machine learning. Sadeghi et al. (2015) identified and classified chickens infected with Clostridium perfringens based on vocalization signals and ANN [42]. Banakar et al. (2016) diagnosed avian diseases using signal processing and SVM, and their system achieved an accuracy of 91.15% in classifying disease within 48 h after virus infection [27]. Okinda et al. (2019) used machine vision systems to diagnose avian diseases; they infected chickens with ND and classified the disease with 97% accuracy on the 4th day after virus infection using an RBF-SVM, with parameters optimized based on the appearance and physical characteristics of the bird body [21]. Overall, our study provides earlier alarms of disease infection with decent accuracy compared with these previous studies, indicating the great potential of the combined technique (thermography for data collection and machine learning for data analytics) in this space.
Our dataset was relatively small, with 1680 images in seven sections (240 thermal images per section). However, the disease challenge experiments are typically expensive, and this study serves as the first trial for verifying the possibility of the combined techniques for disease diagnosis. More data should be collected to consolidate the results. We did not expect to develop a viable system that can be directly applied in commercial farms. It is unrealistic to infect birds with avian influenza and Newcastle disease in commercial farms to collect the data for system development, as that can damage producers' profits and create the risks of disease spreading. Therefore, most of the disease challenge experiments are run cautiously in experimental labs with strict biosecurity control, which could result in small datasets, separated bird housing, and close-distance data collection. These are all future directions for system improvement.

Conclusions
This study investigated the performance of using thermography and machine learning to classify Avian Influenza and Newcastle Disease for 14-day-old broilers. After a series of optimizations (e.g., parameter tuning and model comparison), the Support Vector Machine with Dempster-Shafer Evidence Theory outperformed the Artificial Neural Networks and successfully classified the two diseases within 24 h after virus infection, with 100% sensitivity, over 94% specificity, and over 97% testing accuracy. It is concluded that thermography combined with machine learning is a useful tool for timely disease prediction, which can be properly utilized to set early alarms and reduce producer economic losses.