Article

Lightweight Pig Face Feature Learning Evaluation and Application Based on Attention Mechanism and Two-Stage Transfer Learning

1 College of Software, Shanxi Agricultural University, Jinzhong 030801, China
2 College of Agricultural Engineering, Shanxi Agricultural University, Jinzhong 030801, China
3 College of Animal Science, Shanxi Agricultural University, Jinzhong 030801, China
* Author to whom correspondence should be addressed.
Agriculture 2024, 14(1), 156; https://doi.org/10.3390/agriculture14010156
Submission received: 13 December 2023 / Revised: 16 January 2024 / Accepted: 18 January 2024 / Published: 21 January 2024
(This article belongs to the Section Digital Agriculture)

Abstract

With the advancement of machine vision technology, pig face recognition has garnered significant attention as a key component in the establishment of precision breeding models. In order to explore non-contact individual pig recognition, this study proposes a lightweight pig face feature learning method based on an attention mechanism and two-stage transfer learning. Using a combined approach of online and offline data augmentation, both the self-collected dataset from the Shanxi Agricultural University pig breeding farm and a public dataset were enhanced in terms of quantity and quality. YOLOv8 was employed for feature extraction and fusion of pig face images. The Coordinate Attention (CA) module was integrated into the YOLOv8 model to enhance the extraction of critical pig face features. Fine-tuning of the feature network was conducted to establish a pig face feature learning model based on two-stage transfer learning. The YOLOv8 model achieved a mean average precision (mAP) of 97.73% for pig face feature learning, surpassing lightweight models such as EfficientDet, SSD, YOLOv5, YOLOv7-tiny, and swin_transformer by 0.32, 1.23, 1.56, 0.43 and 0.14 percentage points, respectively. The YOLOv8-CA model's mAP reached 98.03%, a 0.3 percentage point improvement over the base YOLOv8 model. Furthermore, the mAP of the two-stage transfer learning-based pig face feature learning model was 95.73%, exceeding the backbone network and pre-trained weight models by 10.92 and 3.13 percentage points, respectively. The lightweight pig face feature learning method, based on an attention mechanism and two-stage transfer learning, effectively captures unique pig face features. This approach serves as a valuable reference for achieving non-contact individual pig recognition in precision breeding.

1. Introduction

Pork constitutes the primary segment of China’s meat consumption market. In 2018, China faced its inaugural outbreak of African swine fever virus in pigs [1]. This highly contagious virus is characterized by an alarming mortality rate approaching 100%, and currently, there is no vaccine available for its prevention and treatment. The individual monitoring of pigs enables swift isolation of those infected, safeguarding the overall herd welfare. The advent of the intelligentization era has propelled the popularity of machine vision technology, particularly in image recognition, with widespread applications in medicine, the military, education, and beyond [2,3,4]. Nevertheless, technological research in the animal breeding industry remains somewhat limited. Facial recognition technology, as a pivotal research avenue in the realm of computer vision, has garnered substantial attention in recent years [5,6]. As animal husbandry advances and modern agriculture undergoes transformation, there is an escalating need for the effective management and monitoring of pig herds. To establish a scalable, standardized, and precise pig farming model, the integration of computer technology for the automatic and accurate identification of individual pigs is indispensable [7,8,9]. Consequently, addressing issues pertaining to the accuracy, robustness and real-time performance of pig facial recognition calls for urgent and in-depth research and innovation.
Moreover, the utilization of pig facial recognition technology in agriculture spans various dimensions. This technology enables the monitoring of individual pig facial features, facilitating the swift detection of collective health issues within pig herds. This, in turn, allows for timely intervention measures and reduces the risk of disease transmission. Pig management systems, leveraging facial recognition, can precisely identify each pig and tailor personalized feeding plans based on individual features and historical feeding data. This optimization leads to improved feed utilization, enhanced growth efficiency, and minimized feed wastage. Pig facial recognition technology is also instrumental in monitoring pig behavior, encompassing activity levels and social interactions. This insight aids in assessing the health and welfare of each pig. The analysis of facial expressions further offers the potential to identify the emotional states of pigs, providing valuable insights into their emotional well-being and responses to environmental changes [10,11,12,13,14]. In conclusion, the incorporation of pig facial recognition technology in agriculture has the potential to elevate production efficiency, enhance animal welfare, and offer comprehensive, real-time information for farm management.
Traditional animal breeding commonly relies on two identification methods for individual animals [15,16,17,18]. One approach involves using RFID, an advanced technique that requires implanting chips into animals. However, this method poses potential health risks to the animals, and RFID systems struggle to collect information simultaneously from multiple animals. Another method employs physical tags, such as ear tags and branding, but these can cause stress and have a notable impact on animal health. Moreover, interactions such as biting and rubbing among animals can result in the loss of these physical tags.
To overcome the limitations associated with traditional contact-based identification methods, the realm of individual animal recognition has witnessed the emergence of various non-contact identification approaches grounded in deep learning and computer vision [19,20,21,22]. These approaches often involve models incorporating convolutional neural networks (CNNs) and attention mechanisms, enabling swift identification of individual animals through the use of cameras and computing devices. This, in turn, enhances the accuracy and robustness of animal recognition to a certain extent. Wada et al. [23] proposed a feature space method for pig facial recognition, achieving a recognition rate of 97.9% for 16 categories by manually cropping facial images to isolate critical areas. However, this method falls short of achieving automated pig facial feature extraction. In a study by Kashiha et al. [24], Fourier descriptors with rotation and translation invariance were utilized to preserve pattern features. By drawing unique patterns on 10 pigs, they achieved a recognition accuracy of 88.7%. Si Yongsheng and colleagues [25] enhanced the YOLOv4 model by integrating the RFB-s structure into its backbone network and refining the NMS algorithm, thereby improving the recognition accuracy for cows with similar body patterns. Wang Rong and team [26] developed an open-set pig face recognition method with an integrated attention mechanism capable of identifying pigs not present in the training dataset. Xie Qiuju and associates [27] created an enhanced Densenet-CBAM model for pig face recognition, combining DenseNet with CBAM to better integrate pig facial features. Huang Luwen’s group [28] proposed the DWT-GoatNet, a sheep face recognition model merging frequency and spatial domain characteristics, which is effective in varying lighting conditions, such as daytime and nighttime. Mark F. Hansen and colleagues [29] employed a convolutional neural network (CNN) for non-invasive individual pig identification, achieving high recognition accuracy. Marsot and team [30] introduced an adaptive pig face recognition method, demonstrating that the neural network can classify pigs by extracting their facial features. Xiao Jianxing and colleagues [31] utilized an enhanced Mask R-CNN and SVM classifier for image segmentation and back feature extraction of cows, enabling non-contact and stress-free individual recognition. Wang Zhenyao et al. [32] introduced a two-stage method based on triplet margin loss for pig face recognition. In the first stage, the efficient det-d0 model is employed for pig face detection. The second stage focuses on pig face classification, utilizing six backbone classification models (ResNet-18, ResNet-34, DenseNet-121, SENSE-3, AlexNet, and VGGNet-19) and proposes an enhanced approach based on triplet margin loss. The established two-stage model achieves an average precision of 91.35% for the recognition of 28 pig faces. Xu Beibei et al. [33] combined the lightweight RetinaFace-mobilenet with ArcFace in their proposed CattleFaceNet, achieving a recognition accuracy of 91.3% and a processing time of 24 frames per second (FPS), surpassing other methods in terms of efficiency.
In summary, despite advancements in pig face recognition research, several challenges persist: (1) Complex feature extraction algorithms limit their applicability to specific tasks, hindering generalizability. (2) The data environment is straightforward, allowing for the recognition of only individual pigs in uncomplicated settings. The data often originate from the same shooting scenes, overlooking the impact of factors such as shooting perspectives, lighting, and pose variations. This limitation restricts its practical application in real-world scenarios. (3) The developed models are often large and fail to meet lightweight criteria, restricting their practical use in pig face recognition.
Therefore, the pig face recognition model presented in this study, employing an attention mechanism and two-stage transfer learning, focuses on crucial feature learning and extracts multidimensional feature information from images. Moreover, it updates the weight parameters based on the variance in the objectives, establishing a connection between feature information and task objectives. The trained two-stage model offers several advantages, including high accuracy, a lightweight design, and easy deployment. Capable of identifying individual pigs in complex cross-domain scenarios, this model serves as a reference for implementing non-contact individual pig identification in precision breeding.

2. Materials and Methods

2.1. Data Collection and Filtering

This study employs two distinct datasets. The first dataset was gathered from the pig breeding farm at Shanxi Agricultural University, situated in Taigu County, Jinzhong City, Shanxi Province. Figure 1 illustrates the natural environment of the pigsty. The breeding facility comprises three breeds of breeding pigs: Jin-Fen White, Horse-Body, and Hybrid pigs (Jin-Fen White × Horse-Body). We selected the facial features of fattening pigs from these three breeds (70–180 days old) as the subjects of our study. At three specific time intervals: 8:00–9:00 in the morning, 12:00–1:00 at noon, and 4:00–5:00 in the afternoon, we conducted multi-angle, multi-scenario image and video captures of the fattening pigs within the pig breeding farm. To ensure the practicality and authenticity of the research, no deliberate cleaning treatment was applied during the collection of facial data from the pigs.
Another portion of the experimental data is derived from the public dataset of the JD Finance Pig Face Recognition Competition. This dataset consists of 30 videos corresponding to 30 pigs, with the duration of each pig’s video set at 1 min. Using OpenCV, we performed frame-by-frame processing on the captured video stream, extracting one image every 3 frames. In total, 41,313 photos were collected. Figure 2 illustrates the pig facial sample data collected in a complex environment.
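The frame-extraction step described above can be illustrated with a short OpenCV sketch; the directory layout and file-naming scheme below are illustrative assumptions rather than details from the paper.

```python
# Sketch of the frame-extraction step: read a competition video with OpenCV
# and keep one frame out of every three. Paths and naming are assumptions.
import os
import cv2

def extract_frames(video_path, out_dir, step=3):
    """Save every `step`-th frame of `video_path` into `out_dir`; return count saved."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:                      # end of the video stream
            break
        if index % step == 0:           # keep one image every `step` frames
            stem = os.path.splitext(os.path.basename(video_path))[0]
            cv2.imwrite(os.path.join(out_dir, f"{stem}_{index:06d}.jpg"), frame)
            saved += 1
        index += 1
    cap.release()
    return saved
```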
To address the challenge of excessive similarity among adjacent images in the frame-extracted dataset, we employed the perceptual hashing algorithm (p-hash) to filter out highly similar images. The p-hash algorithm generates a compact "fingerprint" for each image; these fingerprints can then be compared to measure image similarity, with smaller differences indicating more similar images. The algorithm transforms each image using the Discrete Cosine Transform (DCT), derives the fingerprint from the mean of the DCT coefficients, and calculates image similarity from these transform-domain features. Even after the removal of highly similar images, the dataset still contained unusable data, such as images where the pig face was obscured, the image quality was poor, or the pig face was absent; these data were removed by manual filtering.
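A minimal sketch of this filtering step is shown below, using the third-party imagehash package, which implements the DCT-based perceptual hash described above. The Hamming-distance threshold of 5 is an illustrative assumption; the paper does not state the value used.

```python
# p-hash similarity filtering sketch: keep an image only if its fingerprint
# differs sufficiently from every image kept so far.
from pathlib import Path
from PIL import Image
import imagehash

def filter_similar(image_dir, max_distance=5):
    """Return paths of images whose p-hash differs from all previously kept images."""
    kept, kept_hashes = [], []
    for path in sorted(Path(image_dir).glob("*.jpg")):
        h = imagehash.phash(Image.open(path))        # 64-bit DCT fingerprint
        if all(h - other > max_distance for other in kept_hashes):
            kept.append(path)
            kept_hashes.append(h)
    return kept
```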

2.2. Division of the Dataset

Following data organization, 59 pigs were identified as suitable for pig face feature learning (29 from Shanxi Agricultural University’s breeding farm and 30 from the “JD Finance” Global Data Explorer Competition public dataset). Dataset A, used to train the model, consisted of 1273 images of pigs numbered 1–21 from Shanxi Agricultural University and 1418 images of 30 pigs from the JD dataset. Dataset B, used for model testing, included 457 images of pigs numbered 22–29 from Shanxi Agricultural University.
Datasets A and B were each divided into training, validation, and test sets in a 7:1:2 ratio. The training and validation sets were utilized for model training and parameter adjustment, while the test set evaluated the model’s performance. The training set data were labeled using LabelImg (1.8.1) software, and the annotated data were stored in the VOC dataset format.
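For reference, a minimal sketch of the 7:1:2 partition described above is given below. The glob pattern and the fixed random seed are illustrative assumptions, and the handling of LabelImg/VOC annotation files is omitted.

```python
# Sketch of a reproducible 7:1:2 train/validation/test split of an image folder.
import random
from pathlib import Path

def split_dataset(image_dir, seed=0):
    files = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(files)               # fixed seed for reproducibility
    n = len(files)
    n_train, n_val = int(0.7 * n), int(0.1 * n)
    train = files[:n_train]
    val = files[n_train:n_train + n_val]
    test = files[n_train + n_val:]                   # remaining ~20%
    return train, val, test
```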

2.3. Data Augmentation

Data augmentation plays a pivotal role in improving model generalization, alleviating overfitting, expanding data volume, and bolstering model robustness. Common data augmentation techniques encompass rotation, flipping, and brightness adjustment. Rotation enables the model to learn features that remain effective at various angles, thereby enhancing invariance to rotational transformations and diminishing the risk of overfitting to training data. Flipping assists the model in comprehending and learning the symmetry of images, allowing it to adapt to different perspectives and orientations encountered in real-world scenarios. Brightness adjustment, achieved by modifying image brightness, simulates diverse lighting conditions, aiding the model in better adapting to and managing dynamic lighting changes in real-world situations. Moreover, augmenting training data through data augmentation effectively elevates model performance. Particularly in scenarios with relatively small datasets, these positive effects render data augmentation a crucial strategy for bolstering the robustness and generalization performance of deep learning models.
Data augmentation can be categorized into online and offline methods. To ensure effective model training, this study employs a combination of both online and offline data augmentation techniques. Online data augmentation involves augmenting data concurrently with model training. The advantage of this method lies in not requiring synthesized data, thereby saving storage space and offering high flexibility. Offline data augmentation involves enhancing and generating images prior to training the model. In this study, the training set of dataset A, featuring pig faces numbered 1–21 from Shanxi Agricultural University, was enriched using offline data augmentation techniques such as rotation, flipping, and brightness enhancement (Figure 3 and Figure 4).
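The offline augmentations named above (rotation, flipping, and brightness adjustment) can be sketched with OpenCV and NumPy as follows; the rotation angle and brightness gain are illustrative assumptions rather than the exact settings used in the study.

```python
# Offline augmentation sketch: each function returns one augmented variant of
# an input BGR image loaded with cv2.imread.
import cv2
import numpy as np

def rotate(img, angle=15):
    h, w = img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)   # rotate about the center
    return cv2.warpAffine(img, m, (w, h))

def flip_horizontal(img):
    return cv2.flip(img, 1)                                   # 1 = flip around the vertical axis

def adjust_brightness(img, gain=1.3):
    return np.clip(img.astype(np.float32) * gain, 0, 255).astype(np.uint8)

def augment(img):
    """Return the augmented variants generated offline for one image."""
    return [rotate(img), flip_horizontal(img), adjust_brightness(img)]
```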

3. Design and Training of the Identification Model

3.1. YOLOv8 Network

YOLOv8 [34,35] stands as the latest innovation in the YOLO series of object detection algorithms. It was developed by Ultralytics (Figure 5). In comparison to its predecessors, YOLOv8 exhibits significant enhancements in both speed and accuracy. Therefore, it is well-suited for real-time object detection applications.
In its backbone architecture, YOLOv8 replaces YOLOv5’s C3 with the C2f module, thereby enhancing the capture of gradient flow information while maintaining a lightweight design. It dynamically adjusts channel numbers for different model scales, resulting in a significant improvement in model performance. In the Neck section, YOLOv8 eliminates the two convolutional connection layers in the PAN-FPN upsampling stage, choosing instead to directly input feature outputs from various backbone stages into the upsampling process. Simultaneously, the C3 module is replaced with the C2f module. Figure 6 provides a structural comparison diagram of C3 and C2f.
In YOLOv8, a significant modification is observed in the Head component, which transitions from a coupled head to a decoupled head structure. The decoupled head comprises two distinct branches, classification and regression, each with its own feature representation, so the number of channels differs between the classification (Cls) branch and the box regression branch. Because anchor boxes depend on the dataset, an insufficient dataset cannot accurately represent the data’s feature distribution; YOLOv8 therefore adopts an anchor-free design and employs the TaskAlignedAssigner as its matching strategy for positive and negative samples.
YOLOv8’s loss function consists of two components: classification loss (VFL Loss) and regression loss (CIOU Loss + DFL). The VFL, aiming to emphasize positive samples, uses Binary Cross-Entropy (BCE) for positive samples and Focal Loss (FL) for negative samples to modulate the loss. CIOU Loss calculates the Intersection over Union (IOU) between the predicted and target boxes, while DFL encourages the network to concentrate rapidly on the distribution of values around the target location. The VFL is given in Formula (1).
$VFL(p, q) = \begin{cases} -q\,\left(q\log p + (1 - q)\log(1 - p)\right), & q > 0 \\ -\alpha p^{\gamma}\log(1 - p), & q = 0 \end{cases}$ (1)
In the aforementioned equation, ‘q’ represents the label. For positive samples, ‘q’ denotes the Intersection over Union (IOU) between the bounding box (bbox) and ground truth (GT), whereas for negative samples, ‘q’ is assigned a value of 0.
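The piecewise definition in Formula (1) can be sketched directly in NumPy, as below. The defaults for alpha and gamma (0.75 and 2.0) follow values commonly used for Varifocal Loss and are assumptions, not values reported in this paper.

```python
# NumPy sketch of the Varifocal Loss in Formula (1), evaluated elementwise.
import numpy as np

def varifocal_loss(p, q, alpha=0.75, gamma=2.0):
    """p: predicted score in (0, 1); q: IoU-aware target (0 for negative samples)."""
    p = np.clip(p, 1e-7, 1 - 1e-7)                      # numerical stability
    positive = -q * (q * np.log(p) + (1 - q) * np.log(1 - p))   # branch for q > 0
    negative = -alpha * np.power(p, gamma) * np.log(1 - p)      # branch for q = 0
    return np.where(q > 0, positive, negative)
```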

3.2. Coordinate Attention

CA represents an efficient location-based attention mechanism [36]. Addressing the limitation of SE, which models channel relationships but overlooks location information, CA enhances this aspect. CA considers not just the inter-channel relationships but also incorporates direction-related location information. Additionally, CA enhances model accuracy with minimal computational overhead. Furthermore, the CA module’s flexibility and lightweight design allow for easy integration into the core modules of lightweight networks. Integrating CA into the YOLOv8 model enables the acquisition of cross-channel pig face features and long-range spatial dependencies. This approach also captures direction-aware and precise pig face positioning, thereby enhancing the focus on pig face feature learning and the extraction of multi-dimensional feature information within images. Figure 7 illustrates the structure of the CA module.
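As a concrete reference, the sketch below is a PyTorch implementation of a CA block following the original Coordinate Attention formulation; the reduction ratio of 32 is a common default and an assumption here, and the position at which the module is inserted into YOLOv8 follows the description above rather than this snippet.

```python
# Coordinate Attention sketch: pool along H and W separately, encode jointly,
# then re-weight the input with direction-aware attention maps.
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.size()
        x_h = self.pool_h(x)                             # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)         # (B, C, W, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                       # height attention (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))   # width attention (B, C, 1, W)
        return x * a_h * a_w
```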

3.3. Transfer Learning

Transfer learning, a machine learning technology [37,38,39], involves transferring existing knowledge from a source domain to a target domain for acquiring new knowledge. The essence of transfer learning lies in identifying commonalities between the source and target domains. This approach can mitigate issues such as data scarcity, low training efficiency, and accuracy reduction caused by overfitting. The effectiveness of transfer learning is contingent upon the similarity among the source domain, target domain, and the specific task at hand. In this study, Dataset A serves as the source domain, while Dataset B is utilized as the target domain. The workflow diagram of this study is presented in Figure 8.
The study employs a two-stage transfer learning approach. The first stage involves training the model using the PASCAL VOC2012 dataset [40], an advanced version of the VOC2007 dataset. The VOC2012 dataset comprises 20 classes, with 11,530 images for training and validation and 27,450 annotated objects in regions of interest. The PASCAL VOC is renowned for its standardized datasets in image recognition and classification. The model was initialized using pre-trained weights derived from training on the VOC2012 dataset. The second stage focuses on training the collected pig face dataset. Transfer learning is applied to the weight file from the first stage, followed by model testing.
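The workflow can be sketched with the Ultralytics YOLO API as below: pre-training on VOC2012, then fine-tuning on the pig face data. The dataset YAML file names (voc2012.yaml, pig_face_A.yaml, pig_face_B.yaml) and the checkpoint paths are illustrative assumptions, not artifacts from the paper.

```python
# Two-stage transfer learning sketch with the Ultralytics YOLO API.
from ultralytics import YOLO

# Stage 1: learn general object features on PASCAL VOC2012.
stage1 = YOLO("yolov8n.yaml")                        # build the model from its config
stage1.train(data="voc2012.yaml", epochs=100, imgsz=640)

# Stage 2: transfer the stage-1 weights and fine-tune on the pig face data
# (dataset A as the source domain, then dataset B as the target domain).
stage2 = YOLO("runs/detect/train/weights/best.pt")   # assumed stage-1 checkpoint path
stage2.train(data="pig_face_A.yaml", epochs=100, imgsz=640)

final = YOLO("runs/detect/train2/weights/best.pt")   # assumed stage-2 checkpoint path
final.train(data="pig_face_B.yaml", epochs=100, imgsz=640)
metrics = final.val(data="pig_face_B.yaml")          # evaluate mAP on the target domain
```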

3.4. Model Training

3.4.1. Training Model Parameters

The model underwent training with fp16 mixed precision on the training set for 100 epochs. The initial 50 epochs constituted freeze training. Key parameters included a learning rate of 0.01, an SGD optimizer with a momentum factor of 0.937, a weight decay coefficient of 0.0005, and a batch size of 16. The learning rate was modulated using the cosine annealing decay strategy.
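A hedged sketch of how these hyperparameters could be wired together in PyTorch is given below (SGD with momentum 0.937 and weight decay 0.0005, initial learning rate 0.01, cosine annealing over 100 epochs, fp16 via automatic mixed precision, and a freeze-training phase). The `model` object, the `backbone` module name, and the helper names are placeholders, not code from the paper.

```python
# Training-setup sketch matching the hyperparameters listed above.
import torch

def build_training(model, epochs=100, lr=0.01):
    optimizer = torch.optim.SGD(
        model.parameters(), lr=lr, momentum=0.937, weight_decay=0.0005
    )
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
    scaler = torch.cuda.amp.GradScaler()      # enables fp16 mixed-precision training
    return optimizer, scheduler, scaler

def set_backbone_frozen(model, frozen=True):
    # Freeze-training phase: keep backbone weights fixed (e.g. for the first 50 epochs).
    for name, param in model.named_parameters():
        if name.startswith("backbone"):       # assumed module name for the backbone
            param.requires_grad = not frozen
```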

3.4.2. Test Platform

This study utilized a desktop computer as the platform for model training, validation, and testing. The configuration of this test platform is detailed in Table 1.

3.4.3. Evaluation Metrics

The trained models underwent testing on an identical test set. To precisely evaluate the performance of various pig face feature learning models, metrics such as precision (P), recall (R), average precision (AP), mean average precision (mAP), and harmonic mean (F1) were computed from the classification results. The corresponding formulas are detailed in Equations (2)–(6).
$P = \dfrac{TP}{TP + FP}$ (2)
$R = \dfrac{TP}{TP + FN}$ (3)
$mAP = \dfrac{1}{N}\sum_{K=1}^{N} P(R)$ (4)
$F1 = \dfrac{2PR}{P + R}$ (5)
In Formulas (2)–(5), TP represents the number of positive samples correctly identified, FP denotes the number of negative samples incorrectly identified as positive, FN indicates the number of positive samples misclassified as negative, and N reflects the total number of detected sample categories. Higher values of precision (P), recall (R), and the harmonic mean (F1) signify superior performance of the recognition model.
$AP = \dfrac{1}{n}\sum_{0}^{n} P_n$ (6)
In Formula (6), ‘n’ denotes the number of epochs. ‘Pn’ represents the precision achieved after the nth epoch. The average precision (AP) is calculated by summing and averaging the precision values obtained from each epoch. AP, the average of precision values at various recall levels, corresponds to the area under the precision–recall (PR) curve, bounded by the vertical and horizontal axes. A higher AP value indicates the superior performance of the recognition model.
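A small worked example of Formulas (2)–(5) is given below; the detection counts are made-up illustrative numbers, not results from this study.

```python
# Worked example: precision, recall, and F1 from raw detection counts.
def precision_recall_f1(tp, fp, fn):
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f1 = 2 * p * r / (p + r)
    return p, r, f1

# e.g. 96 true positives, 4 false positives, 5 false negatives:
p, r, f1 = precision_recall_f1(96, 4, 5)     # p = 0.960, r ≈ 0.950, f1 ≈ 0.955
```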

4. Results and Discussion

4.1. YOLOv8 Model Compared with Other Models

This study compares the results of six algorithms: EfficientDet [41,42], SSD-mobilenetv2 [43,44], YOLOv5 [45,46], YOLOv7-tiny [47,48], swin_transformer [49,50,51], and YOLOv8. Firstly, these six algorithms are all considered lightweight, which is crucial for scenarios involving embedded devices or limited computational resources. Secondly, they represent different design philosophies and performance characteristics in the field of object detection. For instance, EfficientDet is renowned for its high performance and efficient design, SSD-mobilenetv2 for its lightweight structure, YOLOv5 and YOLOv7-tiny for their balance between speed and accuracy, and YOLOv8 as the latest version in the YOLO series, allowing insights into the performance improvements of the latest technology through comparisons with its predecessors. Additionally, swin_transformer is an emerging model based on the transformer architecture, showcasing promising features in image processing. By comparing these algorithms, a comprehensive understanding of their performance differences in pig face feature learning is achieved, providing deeper insights for research and applications in pig face feature learning.
To evaluate the effectiveness of feature learning in YOLOv8 on the pig face dataset, various models, including EfficientDet, SSD-mobilenetv2, YOLOv5, YOLOv7-tiny, YOLOv8, and swin_transformer, were trained using pre-trained weights on identical datasets and test environments. During the training of each model, the alterations in mean average precision (mAP) and loss values of the validation set were carefully examined and compared.
As the number of iterations increases, the mean average precision (mAP) of the six models on the validation set shows an upward trend, while the loss values demonstrate a downward trend, eventually stabilizing within a specific range (Figure 9). The mAP of the YOLOv8 model on the validation set stabilizes at 98% (Figure 9a), surpassing the other five models. This indicates that the YOLOv8 model exhibits superior learning of pig face features and achieves a higher recognition rate. The comparative evaluation of the learning effect on pig face features by different models is presented in Table 2.
Compared to other models, YOLOv8’s F1 score exceeds that of EfficientDet, SSD, YOLOv5, and YOLOv7-tiny by 0.01, 5.13, 0.52, and 0.28 percentage points, respectively, as shown in Table 2, indicating its superior comprehensive performance. The F1 score of YOLOv8 is reduced by 0.84 compared to swin_transformer. However, the parameters of the swin_transformer model are 27.54 million, while the YOLOv8 model has parameters of 11.15 million. In comparison, the YOLOv8 model aligns more closely with the requirements of lightweight models. Simultaneously, the mAP@0.5 of the YOLOv8 model for pig face feature learning stands at 97.73%, surpassing EfficientDet, SSD, YOLOv5, YOLOv7-tiny and swin_transformer by 0.32, 1.23, 1.56, 0.43 and 0.14 percentage points, respectively. Additionally, the mAP@0.9 of YOLOv8 is 79.37%, which is 9.64, 56.7, 13.33, 9.33 and 5.05 percentage points higher compared to the aforementioned models. When the Intersection over Union (IoU) threshold is set to 0.9, YOLOv8 continues to outperform the other five models in terms of average precision. This superior performance is attributed to the replacement of the C3 structure with the C2f structure in YOLOv8’s backbone and neck, offering richer gradient flow and the adjustment of channel numbers, which collectively enhance the model’s performance. Furthermore, YOLOv8 employs a dynamic allocation strategy for positive and negative samples, proving to be more flexible and efficient than the static allocation strategies used in other models. This strategy aids the model in better-utilizing sample information, thereby enhancing training efficacy. Overall, the YOLOv8 model demonstrates higher robustness and superior generalization capabilities for pig face feature learning.
Comparative analysis reveals that the models trained using pre-trained weights exhibit higher precision and superior efficacy (Table 3). Within the same dataset, models trained with pre-trained weights demonstrated mAP values 1.5–11.45 percentage points higher than those trained solely with the backbone network, as shown in Table 3. This is attributed to the pre-trained model being trained on the PASCAL VOC2012 dataset, which encompasses 20 classes and about 27,450 annotated objects, thereby equipping the model with extensive feature knowledge. Consequently, when tasked with learning pig face features, this model, after fine-tuning on the new target, demonstrates enhanced recognition ability and precision, exemplifying the benefits of transfer learning.

4.2. Impact of CA on the Model

To assess the impact of incorporating various attention mechanisms on the performance of pig face feature learning models, SE, CBAM, ECA and CA were integrated at identical positions in the YOLOv8 architecture and subsequently trained and tested on the same pig face dataset. The test results are presented in Table 4.
Comparative analysis of models with different attention mechanisms revealed that only the YOLOv8-CA model exhibited a 0.3 percentage point increase in mean average precision (mAP), whereas the YOLOv8-CBAM, YOLOv8-ECA, and YOLOv8-SE models all demonstrated decreases in mAP. The superior performance of the CA mechanism is attributed to its focus on cross-channel pig face feature information and acquisition of long-range spatial dependencies, coupled with its ability to capture direction-aware and precise pig face positional information. In contrast, the SE mechanism primarily concentrates on establishing interdependencies among pig face features across channels while neglecting their locational information. Conversely, the CBAM mechanism merges channel and spatial attention yet overlooks long-range dependencies.
Figure 10 displays the precision–recall (PR) maps for models incorporating various attention mechanisms.
The precision–recall (PR) curve of YOLOv8-CA encompasses those of the other three attention mechanisms, signifying that YOLOv8-CA outperforms YOLOv8-CBAM, YOLOv8-ECA, and YOLOv8-SE, as illustrated in Figure 10. Notably, the area under the YOLOv8-CA curve is substantially larger than that of the YOLOv8 curve. This indicates that incorporating the CA mechanism into YOLOv8 enhances the model’s performance and increases its average precision in recognition tasks.

4.3. Analysis of Transfer Learning Results

In trials involving eight pigs (Pigcs1–Pigcs8), a comparison was conducted among the backbone network (Nopre-pigcs), pre-trained weights (pigcs), and our two-stage trained weights (pre-pigcs). The mAP values were 84.81% for Nopre-pigcs, 92.60% for pigcs, and 95.73% for pre-pigcs. The two-stage model exhibited mAP values 10.92 and 3.13 percentage points higher than the backbone network and pre-trained weight models, respectively, indicating enhanced recognition precision. Figure 11 displays the AP values for each pig when tested with the different models. Cs3 exhibited the largest increase in AP, from 85.5% to 95.6%, while cs8 showed the smallest change, with an AP of 87.65% under all three models. The two-stage model demonstrated notable stability and generalizability.
Figure 12 illustrates the confusion matrix of the two-stage model, with axis labels representing different pig IDs and matrix entries reflecting the count of recognized pig faces. For instance, a “1” in the matrix’s third column and second row signifies that the model once misidentified cs7 as cs3. The test results indicate that the model accurately recognizes most pig faces. Cs7 exhibited the poorest recognition performance, being most frequently misidentified as other pigs.
Compared to the two-stage model, other models exhibited errors in pig face feature learning, including misidentifying pig cs2 as cs1 (top of Figure 13), failing to detect pig cs3 (middle of Figure 13), and mistaking pig cs7 for cs8 (bottom of Figure 13). Factors like uncleaned pig faces or side views may contribute to misdetections and non-detections. The two-stage model significantly mitigates these issues, enhancing recognition accuracy for pigs with facial obstructions or side profiles.

5. Conclusions and Future Directions

This study employs pig face photos from pig farms and videos from JD Finance’s pig face recognition competition as datasets. These datasets undergo screening via p-hash and are enhanced using both online and offline data augmentation methods. Various models and transfer learning approaches are employed to evaluate the learning efficacy of pig face features, leading to the following conclusions:
  • The pig face feature recognition model built on YOLOv8 achieves a mAP of 97.73% on the test set, with 11.155 million parameters, an average detection time of 13.032 ms, and a model size of 42.7 MB. YOLOv8 stands out for its modest parameter count, minimal computational requirements, compact model size, and strong performance, making it a valuable benchmark for upcoming work on lightweight, non-contact, intelligent pig face feature recognition;
  • Without modifying the model structure, four attention mechanisms—CA, CBAM, SE, and ECA—were integrated. Among these, CA emerged as the most effective in augmenting the pig face feature recognition model’s performance. The inclusion of CA led to a 0.3 percentage point enhancement in the model’s mAP value on the test set. These findings indicate that the attention module, particularly CA, intensifies the focus on crucial pig face features, thereby enhancing the model’s efficiency in feature recognition;
  • This study introduces a two-stage transfer learning approach to enhance the accuracy of pig face recognition models in challenging environments. Initially, a YOLOv8 network was pre-trained on the PASCAL VOC2012 dataset. Subsequently, the feature extraction network underwent fine-tuning on dataset A, resulting in a task-specific pre-trained network enriched with pertinent features for pig face recognition. Finally, this task-specific network underwent additional fine-tuning on dataset B, yielding the ultimate pig face recognition model. The refined model demonstrated precise recognition of the eight test pigs, achieving a mAP value of 95.73%. Particularly noteworthy is the improvement in the AP value for cs3 from 85.5% to 95.6%, showcasing robust performance in the presence of noise, occlusion, and other disturbances.
Pig facial recognition technology has exhibited significant potential across diverse applications in the livestock industry, including precision breeding, disease detection, automated feeding systems, and individual behavioral analysis. The precise identification of individual pigs facilitates meticulous breeding management, thereby elevating breeding efficiency and overall profitability. Additionally, the application of pig facial recognition in disease detection allows for early identification, effectively curbing the spread of diseases within the livestock population. The implementation of automated feeding systems, integrating pig facial recognition for personalized feeding plans, contributes to heightened feeding efficiency and a reduction in feed wastage. Moreover, the technology’s capability to analyze individual behaviors offers robust support for behavioral studies and welfare assessments, actively advancing the digital transformation and intelligent management of the livestock industry.
In future research, our primary focus will center on the seamless integration of pig facial feature learning technology with performance data and DNA genetic information. This involves creating a distinct identity for each pig using pig facial feature learning technology, establishing a comprehensive database management system, and gathering diverse data, including health status, reproductive performance, DNA sequence data, etc. By amalgamating these data sources, we aim to ensure accurate tracking of performance data and genetic information for each pig, providing a more reliable means for confirming individual pig identities. This approach not only assists in the precise selection of pigs with desirable genetic traits but also offers novel perspectives and tools for animal breeding and health management, enhancing the scientific foundation for decision making. Additionally, to address common occlusion challenges in practical scenarios, we plan to explore and introduce innovative methods and models. Combining multimodal data, such as visible light images, infrared images, and depth images, will enhance the robustness of recognition models against occlusion scenarios. Simultaneously, leveraging generative adversarial networks to generate missing parts of occluded facial features, coupled with applying self-attention mechanisms to learn the correlation between different facial regions, is expected to significantly improve recognition performance in occlusion scenarios. The comprehensive application of these methods, alongside the latest deep learning technologies, aims to achieve outstanding results for pig facial recognition technology in complex scenarios involving occlusion. Finally, lightweight models often achieve efficient inference by reducing parameter quantity and model complexity, potentially leading to a decrease in accuracy. Especially in complex real-world scenarios, lightweight models may not adapt as well to factors such as occlusion, lighting changes, and diverse environments as larger models. This is particularly crucial in pig facial recognition, where environments like farms may have varying conditions. We will investigate methods such as optimizing model structures, improving training strategies, and introducing adaptive mechanisms to allow models to dynamically adjust their parameters in response to changing environments. These ongoing research efforts and innovations aim to ensure that pig facial recognition technology operates stably and efficiently in various application scenarios, achieving intelligent and precise management in the breeding industry and making significant contributions to animal welfare and public health safety. Furthermore, it injects new vitality into the sustainable development of the agricultural industry chain in the era of digitalization in animal husbandry.

Author Contributions

Conceptualization, Z.G.; methodology, Z.Y. and M.P.; model training, M.P.; results analysis, Z.Y.; investigation, Y.Z.; data curation, W.Z.; writing—original draft preparation, Z.Y.; writing—review and editing, W.Z., F.L. and X.G.; visualization, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Shanxi Province Breeding Joint Research Project (YZGG-06) and the Integration and Demonstration Promotion of Efficient Pig Farming Technologies project (CYYL23-23).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Due to the sensitivity and confidentiality of the data, this study does not provide the original data when publishing the paper. For data acquisition, please contact the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Han, L.; Wang, S. China’s Pork Supply and Demand Situation in 2019 and Outlook for 2020. Agric. Outlook 2020, 16, 7–11+17. [Google Scholar]
  2. Kolhar, S.; Jagtap, J. Plant trait estimation and classification studies in plant phenotyping using machine vision—A review. Inf. Process. Agric. 2023, 10, 114–135. [Google Scholar] [CrossRef]
  3. Bao, J.; Xie, Q. Artificial intelligence in animal farming: A systematic literature review. J. Clean. Prod. 2022, 331, 129956. [Google Scholar] [CrossRef]
  4. Kulkarni, R.; Di Minin, E. Towards automatic detection of wildlife trade using machine vision models. Biol. Conserv. 2023, 279, 109924. [Google Scholar] [CrossRef]
  5. Xie, Q.; Zhou, H.; Bao, J.; Li, Q. Review on Machine Vision-based Weight Assessment for Livestock and Poultry. Trans. Chin. Soc. Agric. Mach. 2022, 53, 1–15. [Google Scholar]
  6. Wang, P.; Liu, N.; Qiao, J. Application of machine vision image feature recognition in 3D map construction. Alex. Eng. J. 2023, 64, 731–739. [Google Scholar] [CrossRef]
  7. Liu, F.; Wu, W.; Liu, X.; Wang, X.; Fang, Y.; Li, G.; Du, X. Research Progress of Computer Vision and Deep Learning in Pig Recognition. J. Huazhong Agric. Univ. 2023, 42, 47–56. [Google Scholar] [CrossRef]
  8. Li, X.; Li, H. Pig Face Landmark Detection Method Based on Convolutional Neural Network. J. Jilin Univ. (Sci. Ed.) 2022, 60, 609–616. [Google Scholar] [CrossRef]
  9. Congdon, J.V.; Hosseini, M.; Gading, E.F.; Masousi, M.; Franke, M.; MacDonald, S.E. The Future of Artificial Intelligence in Monitoring Animal Identification, Health, and Behaviour. Animals 2022, 12, 1711. [Google Scholar] [CrossRef]
  10. Kounalakis, T.; Triantafyllidis, A.G.; Nalpantidis, L. Deep learning-based visual recognition of rumex for robotic precision farming. Comput. Electron. Agric. 2019, 165, 104973. [Google Scholar] [CrossRef]
  11. Xiong, B.; Fu, R.; Lin, Z.; Luo, Q.; Yang, L. Identification of swine individuals and construction of traceability system under free-range pig-rearing system. Trans. Chin. Soc. Agric. Eng. (Trans. CSAE) 2009, 25, 98–102. [Google Scholar]
  12. Pedro, R.F. The Impacts of Animal Farming: A Critical Overview of Primary School Textbooks. J. Agric. Environ. Ethics 2022, 35, 12. [Google Scholar]
  13. Fan, P.; Yan, B. Research on the Application of New Technologies and Products in Smart Farming. Anim. Husb. Poult. 2023, 34, 36–38. [Google Scholar] [CrossRef]
  14. Ma, C. Research on Pig Face Recognition Algorithm Based on Deep Learning. Northeast. Agric. Univ. 2023. [Google Scholar]
  15. Ning, Y.; Yang, Y.; Li, Z.; Wu, X.; Zhang, Q. Detecting and counting pig number using improved YOLOv5 in complex scenes. Trans. Chin. Soc. Agric. Eng. (Trans. CSAE) 2022, 38, 168–175. [Google Scholar] [CrossRef]
  16. Adrion, F.; Kapun, A.; Eckert, F.; Holland, E.-M.; Staiger, M.; Götz, S.; Gallmann, E. Monitoring trough visits of growing-finishing pigs with UHF-RFID. Comput. Electron. Agric. 2018, 144, 144–153. [Google Scholar] [CrossRef]
  17. Maselyne, J.; Saeys, W.; Ketelaere, D.B.; Mertens, K.; Vangeyte, J.; Hessel, E.F.; Millet, S.; Van Nuffel, A. Validation of a High Frequency Radio Frequency Identification (HF RFID) system for registering feeding patterns of growing-finishing pigs. Comput. Electron. Agric. 2014, 102, 10–18. [Google Scholar] [CrossRef]
  18. Ning, J.; Lin, J.; Yang, S.; Wang, Y.; Lan, X. Face Recognition Method of Dairy Goat Based on Improved YOLO v5s. Trans. Chin. Soc. Agric. Mach. 2023, 54, 331–337. [Google Scholar]
  19. Ferreira, R.E.P.; Tiago, B.; Rosa, G.J.M.; Dórea, J.R.R. Using dorsal surface for individual identification of dairy calves through 3D deep learning algorithms. Comput. Electron. Agric. 2022, 201, 107272. [Google Scholar] [CrossRef]
  20. Lu, J.; Wang, W.; Zhao, K.; Wang, H. Recognition and segmentation of individual pigs based on Swin Transformer. Anim. Genet. 2022, 53, 794–802. [Google Scholar] [CrossRef]
  21. Wen, C.; Zhang, X.; Wu, J.; Yang, C.; Li, Z.; Shi, L.; Yu, H. Pig facial expression recognition using multi-attention cascaded LSTM model. Trans. Chin. Soc. Agric. Eng. (Trans. CSAE) 2021, 37, 181–190. [Google Scholar] [CrossRef]
  22. Zhu, W.-X.; Guo, Y.-Z.; Jiao, P.-P.; Ma, C.-H.; Chen, C. Recognition and drinking behaviour analysis of individual pigs based on machine vision. Livest. Sci. 2017, 205, 129–136. [Google Scholar] [CrossRef]
  23. Wada, N.; Shinya, M.; Shiraishi, M. [Short Paper] Pig Face Recognition Using Eigenspace Method. ITE Trans. Media Technol. Appl. 2013, 1, 328. [Google Scholar] [CrossRef]
  24. Kashiha, M.; Bahr, C.; Ott, S.; Moons, C.P.H.; Niewold, T.A.; Ödberg, F.O.; Berckmans, D. Automatic identification of marked pigs in a pen using image pattern recognition. Comput. Electron. Agric. 2013, 93, 111–120. [Google Scholar] [CrossRef]
  25. Si, Y.; Xiao, J.; Liu, G.; Wang, K. Individual Identification Method of Lying Cows Based on MSRCP and Improved YOLO v4 Model. Trans. Chin. Soc. Agric. Mach. 2023, 54, 243–250,262. [Google Scholar]
  26. Wang, R.; Gao, R.; Li, Q.; Liu, S.; Yu, Q.; Feng, L. Open-set Pig Face Recognition Method Combining Attention Mechanism. Trans. Chin. Soc. Agric. Mach. (Trans. CSAE) 2023, 54, 256–264. [Google Scholar]
  27. Xie, Q.; Wu, M.; Bao, J.; Yin, H.; Liu, H.; Li, X.; Zheng, P.; Liu, W.; Chen, G. Individual pig face recognition combined with attention mechanism. Trans. Chin. Soc. Agric. Eng. (Trans. CSAE) 2022, 38, 180–188. [Google Scholar] [CrossRef]
  28. Huang, L.; Qian, B.; Guan, F.; Hou, Z.; Zhang, Q. Goat Face Recognition Model Based on Wavelet Transform and Convolutional Neural Networks. Trans. Chin. Soc. Agric. Mach. 2023, 54, 278–287. [Google Scholar]
  29. Hansen, M.F.; Smith, M.L.; Smith, L.N.; Salter, M.G.; Baxter, E.M.; Farish, M.; Grieve, B. Towards on-farm pig face recognition using convolutional neural networks. Comput. Ind. 2018, 98, 145–152. [Google Scholar] [CrossRef]
  30. Marsot, M.; Mei, J.; Shan, X.; Ye, L.; Feng, P.; Yan, X.; Li, C.; Zhao, Y. An adaptive pig face recognition approach using Convolutional Neural Networks. Comput. Electron. Agric. 2020, 173, 105386. [Google Scholar] [CrossRef]
  31. Xiao, J.; Liu, G.; Wang, K.; Si, Y. Cow identification in free-stall barns based on an improved Mask R-CNN and an SVM. Comput. Electron. Agric. 2022, 194, 106738. [Google Scholar] [CrossRef]
  32. Wang, Z.; Liu, T. Two-stage method based on triplet margin loss for pig face recognition. Comput. Electron. Agric. 2022, 194, 106737. [Google Scholar] [CrossRef]
  33. Xu, B.; Wang, W.; Guo, L.; Chen, G.; Li, Y.; Cao, Z.; Wu, S. CattleFaceNet: A cattle face identification approach based on RetinaFace and ArcFace loss. Comput. Electron. Agric. 2022, 193, 106675. [Google Scholar] [CrossRef]
  34. Kang, J.; Zhao, L.; Wang, K.; Zhang, K. Research on an Improved YOLOV8 Image Segmentation Model for Crop Pests. Adv. Comput. Signals Syst. 2023, 7, 1–8. [Google Scholar]
  35. Jose, M.-C.; Hernández-Farías, D.I.; Rojas-Perez, L.O.; Cabrera-Ponce, A.A. Language meets YOLOv8 for metric monocular SLAM. J. Real-Time Image Process. 2023, 20, 59. [Google Scholar]
  36. Sun, L.; Wang, X.; Wang, B.; Wang, J.; Meng, X. Identification Method of Fish Satiation Level Based on ResNet-CA. Trans. Chin. Soc. Agric. Mach. 2022, 53, 219–225. [Google Scholar]
  37. Singh, R.; Ahmed, T.; Singh, R.; Udmale, S.S.; Udmale, S.S. Identifying tiny faces in thermal images using transfer learning. J. Ambient Intell. Humaniz. Comput. 2019, 11, 1–10. [Google Scholar] [CrossRef]
  38. Zhang, G.; Lyu, Z.; Liu, H.; Liu, W.; Long, C.; Huang, C. Model for identifying lotus leaf pests and diseases using improved DenseNet and transfer learning. Trans. Chin. Soc. Agric. Eng. (Trans. CSAE) 2023, 39, 188–196. [Google Scholar] [CrossRef]
  39. Chen, L.; Li, W.; Feng, D.; Wu, H.; Wang, K. Transfer Learning-Based Image Recognition of Nitrogen and Potassium Nutrient Stress in Rice. Rice Sci. 2023, 30, 100–103. [Google Scholar]
  40. Everingham, M.; Eslami SM, A.; Van Gool, L.; Williams, C.K.I.; Winn, J.; Zisserman, A. The pascal visual object classes challenge: A retrospective. Int. J. Comput. Vis. 2015, 111, 98–136. [Google Scholar] [CrossRef]
  41. Chen, F.; Zhang, X.; Zhu, X.; Li, Z.; Lin, J. Detection of the olive fruit maturity based on improved EfficientDet. Trans. Chin. Soc. Agric. Eng. (Trans. CSAE) 2022, 38, 158–166. [Google Scholar] [CrossRef]
  42. Jia, J.; Fu, M.; Liu, X.; Zheng, B. Underwater Object Detection Based on Improved EfficientDet. Remote Sens. 2022, 14, 4487. [Google Scholar] [CrossRef]
  43. Zhang, L.; Zhou, S.; Li, N.; Zhang, Y.; Chen, G.; Gao, X. Apple Location and Classification Based on Improved SSD Convolutional Neural Network. Trans. Chin. Soc. Agric. Mach. 2023, 54, 223–232. [Google Scholar]
  44. Liu, Q.; Dong, L.; Zeng, Z.; Zhu, W.; Zhu, Y.; Meng, C. SSD with multi-scale feature fusion and attention mechanism. Sci. Rep. 2023, 13, 21387. [Google Scholar] [CrossRef]
  45. Huang, W.; Huo, Y.; Yang, S.; Liu, M.; Li, H.; Zhang, M. Detection of Laodelphax striatellus (small brown planthopper) based on improved YOLOv5. Comput. Electron. Agric. 2023, 206, 107657. [Google Scholar] [CrossRef]
  46. Li, G.; Zha, W.; Chen, C.; Shi, G.; Gu, L.; Jiao, J. Pig Face Recognition and Detection Method Based on Improved YOLOv5s. Southwest China J. Agric. Sci. 2023, 36, 1346–1356. [Google Scholar] [CrossRef]
  47. Wang, J.; Zhou, J.; Zhang, Y.; Hu, H. Multi-pose dragon fruit detection system for picking robots based on the optimal YOLOv7 model. Trans. Chin. Soc. Agric. Eng. (Trans. CSAE) 2023, 39, 276–283. [Google Scholar] [CrossRef]
  48. Wen, C.; Guo, H.; Li, J.; Hou, B.; Huang, Y.; Li, K.; Nong, H.; Long, X.; Lu, Y. Application of improved YOLOv7-based sugarcane stem node recognition algorithm in complex environments. Front. Plant Sci. 2023, 14, 1230517. [Google Scholar]
  49. Wang, C.; Liu, H.; An, X.; Gong, Z.; Deng, F. SwinCrack: Pavement crack detection using convolutional swin-transformer network. Digit. Signal Process. 2024, 145, 104297. [Google Scholar] [CrossRef]
  50. Yang, S.; Wang, W.; Gao, S.; Deng, Z. Strawberry ripeness detection based on YOLOv8 algorithm fused with LW-Swin Transformer. Comput. Electron. Agric. 2023, 215, 108360. [Google Scholar] [CrossRef]
  51. Xia, Y.; Luo, C.; Zhou, Y.; Jia, L. Lightweight TFT-LCD Panel Defect Classification Algorithm Based on Swin Transformer. Opt. Precis. Eng. 2023, 31, 3357–3370. [Google Scholar] [CrossRef]
Figure 1. The natural environment of the pigsty.
Figure 2. Pig-face samples in complex environments.
Figure 3. Effects of data augmentation.
Figure 4. Comparison of the number of images before and after data augmentation.
Figure 5. YOLOv8 network structure.
Figure 6. Structural comparison diagram of C3 and C2f.
Figure 7. CA module structure.
Figure 8. Workflow diagram.
Figure 9. Curves of mAP and loss on the validation set during training of different models: (a) mAP curves; (b) loss curves.
Figure 10. PR graphs of models with different attention mechanisms.
Figure 11. AP values for each pig using different methods.
Figure 12. Confusion matrix for the two-stage model.
Figure 13. False recognition results of pig faces compared with the two-stage model ((a,c,e) correct recognition results of the two-stage model; (b,d,f) false recognition results of other models).
Table 1. Test platform configuration.

Configuration            Parameters
Operating System         Windows 11
CPU                      Intel(R) Core(TM) i7-13700KF
Memory                   64 GB
GPU                      NVIDIA GeForce RTX 4080
Python                   3.8
Deep Learning Framework  PyTorch 1.12.1
Table 2. Evaluation of the learning effect of pig face features by different models.

Models            mAP@0.5/%  mAP@0.9/%  Precision/%  Recall/%  F1/%   Parameters/M  FLOPs/G  Detection Time/ms
EfficientDet      97.41      69.73      97.06        95.34     95.92  11.98         49.30    84.91
SSD               96.50      22.67      97.59        86.25     90.80  38.90         64.75    17.35
YOLOv5            96.17      66.04      96.38        94.79     95.41  7.16          16.37    15.82
YOLOv7-tiny       97.30      70.04      95.43        96.12     95.65  6.15          13.61    12.55
YOLOv8            97.73      79.37      96.11        96.10     95.93  11.16         25.75    13.03
swin_transformer  97.59      74.32      97.74        97.81     96.77  27.54         8.74     51.42
Table 3. Comparison of the effects of using the backbone network alone versus pre-trained weights on pig face feature learning (mAP@0.5).

                      EfficientDet  SSD     YOLOv5  YOLOv7-tiny  YOLOv8  swin_transformer
Backbone network      88.50%        85.05%  92.40%  95.80%       95.93%  87.76%
Pre-training weights  97.41%        96.50%  96.17%  97.30%       97.73%  97.59%
Table 4. Comparison of pig face feature learning results with different attention mechanism models.

Models       mAP@0.5/%  Precision/%  Recall/%  F1/%
YOLOv8       97.73      96.11        96.10     95.93
YOLOv8-CA    98.03      96.16        95.30     95.37
YOLOv8-CBAM  97.67      96.18        96.21     95.90
YOLOv8-ECA   92.89      90.80        85.35     86.90
YOLOv8-SE    97.57      96.00        97.02     96.43
