1. Introduction
Accurate identification of individual animals is fundamental in modern livestock management, enabling effective monitoring of health, behavior [1], and welfare, and optimizing farm productivity [2]. Animal face recognition [3] has emerged as a non-invasive and efficient method for individual identification, playing a crucial role in precision livestock farming. By automating the identification process, farmers can reduce labor costs, improve record-keeping accuracy, and enhance animal welfare. Traditional identification methods, such as ear tags, tattoos, and radio-frequency identification (RFID) devices, have been widely used in the livestock industry [4]. RFID technology, in particular, offers advantages such as contactless identification and data storage capabilities. However, it also has notable drawbacks: the initial cost of RFID systems can be high, and the tags may require regular maintenance or replacement due to damage or loss [5]. Additionally, the tagging process can cause stress and discomfort to the animals, raising animal welfare concerns, and RFID systems are susceptible to interference and may not perform reliably in all farm environments.
In recent years, deep learning-based human face recognition technologies [6] have achieved remarkable success, demonstrating high accuracy and robustness under various conditions. These advancements are largely attributed to the ability of deep convolutional neural networks (CNNs) [7] to automatically learn discriminative features from large datasets. Applying deep learning techniques to animal face recognition [8,9,10] holds significant promise, offering the potential to overcome the limitations of traditional methods by providing accurate, efficient, and non-invasive identification. However, pig face recognition presents unique challenges, such as non-frontal or extreme head poses and faces occluded by grazing or rooting behavior, which make it considerably harder than human face recognition.
One significant challenge is the collection of pig face data. Pigs often spend considerable time lying down, making it difficult for cameras to capture clear images of their faces. This behavior necessitates sifting through large volumes of data to extract useful pig face images, which is both time-consuming and labor-intensive [11]. Furthermore, pig face data are affected by several factors that complicate recognition tasks. Pigs have highly similar facial features, leading to high inter-class similarity that makes it difficult for models to distinguish between individuals. Variations in lighting conditions, complex backgrounds within pig pens, and unrestricted movement lead to significant differences in captured images. A pig's movement also changes the distance between its face and the camera, resulting in noticeable inconsistencies in the size and resolution of pig face images. Moreover, pigs often have dirty faces due to their living habits, with mud or debris covering parts of their faces. This introduces occlusions and additional variability, even among images of the same individual, making feature extraction and recognition more challenging.
Compared to cow and sheep face datasets [12,13], pig face datasets are more difficult to collect and process. The behaviors of pigs result in higher variability and noise in the data, necessitating more sophisticated data processing and model training techniques. Pig face detection [14] is also a non-trivial task. Studies have shown that including background in pig face images can negatively impact recognition accuracy, because background elements may introduce noise and distract the recognition model from focusing on relevant facial features [15]. Some earlier studies addressed this by manually cropping pig face images to exclude the background [16]. However, manual cropping is impractical for large-scale applications due to the significant labor required.
To overcome these data collection and preprocessing challenges, recent approaches have developed end-to-end pig face detection and recognition systems to automate this process [17]. While these methods have improved efficiency, they still face limitations in dynamic farm environments. Specifically, methods that achieve high recognition accuracy often require retraining and re-labeling when new pigs are introduced. This retraining process increases workload and limits scalability, making such systems less suitable for environments where pig populations frequently change due to the introduction of new piglets and the removal of sick or old pigs. In practical applications, Closed-Set pig recognition systems [18] cannot adapt to these changes without retraining, hindering continuous monitoring and management: unidentified pigs cannot be tracked or assessed effectively, limiting the utility of such systems in real-world farm operations.
To address these challenges, we focus on the task of pig face Open-Set recognition (PFOSR) [19], which aims to accurately identify known pigs while effectively handling unknown individuals that the model has not encountered during training. PFOSR is essential in dynamic farm environments, ensuring that new pigs can be recognized. A prominent challenge in PFOSR is the high similarity among the faces of different pigs, combined with the absence of unknown individuals during training. Existing methods often rely solely on metric learning loss functions [20], such as ArcFace Loss [21], to enhance inter-class separability by pushing features of different classes apart in the feature space. While effective for distinguishing between classes, relying exclusively on metric learning may not adequately capture intra-class variations, especially when dealing with high variability due to environmental factors and occlusions. Conversely, classification loss functions like Center Loss [22] focus on promoting intra-class compactness by pulling features of the same class together. However, using only classification loss may not sufficiently separate different classes, which is crucial for handling unknown individuals in Open-Set scenarios. To overcome these limitations, we propose a novel three-stage pig face detection, recognition, and registration system designed to operate effectively in dynamic farm environments and address the challenges and complexities involved in PFOSR.
The first stage is the pig face detection and recognition training stage, in which we build and train the pig face detection and recognition modules. A robust pig face detection model is developed to accommodate the variability in pig face appearances under real farm conditions. Using a small, labeled dataset, this detection model learns to accurately identify and localize pig faces in raw images, separating them from cluttered backgrounds. Such a decoupled approach alleviates the need for manual cropping of new, large datasets, as the detection model can be applied to automatically extract pig faces for subsequent steps. Within the same phase, we also train a high-performance pig face recognition model, starting from a modified Vision Transformer (ViT) architecture [23]. The original MLP layer is replaced with a new Embedding layer, and a dual-loss structure combining Sub-center ArcFace Loss [24] and Center Loss is introduced. By simultaneously increasing inter-class separability and reducing intra-class distance, this strategy enhances the extractor's discriminative power. The improved model effectively handles subtle differences among pig faces, delivering robust accuracy in both Closed-Set and Open-Set recognition contexts.
The second stage is the Known Pig Registration Phase. After the detection and feature extraction components have been fully trained, the second phase focuses on registering pigs already known to the farm. Known pig images pass through the pig face detection module, and their features are extracted by the trained model. These feature vectors are then stored in a Pig Face Feature Gallery, along with the corresponding pig IDs. This registration mechanism ensures that, for each known pig, representative embeddings exist in the database, reflecting variations in pose, lighting, and other farm conditions. By capturing sufficient diversity in each pig’s features, the gallery becomes a reliable reference for identifying these individuals during subsequent recognition tasks.
The final stage is the Unknown and Known Pig Recognition and Registration Phase. In this phase, the system manages new incoming images, whether from previously registered pigs or from unrecognized pigs. First, the pig face detection module extracts face regions, and the trained feature extractor generates the associated embeddings. These embeddings are then compared against the existing face gallery using a similarity measure (e.g., cosine distance [25]). If a query embedding's best match exceeds a predefined similarity threshold, the system concludes that the pig is already known, mapping the embedding to an existing ID. Otherwise, the pig is considered unknown: the system creates a new ID and stores its feature vector in the gallery. This dynamic updating mechanism ensures that the pipeline adapts to Open-Set scenarios where new pigs may appear over time, maintaining a relevant and up-to-date gallery of farm animals. By continuously refining its database in this manner, the system can effectively handle both routine farm management tasks and the challenges posed by evolving, large-scale livestock populations.
During inference, our pig face detection and recognition models automatically detect pig faces and extract their feature vectors. In the Pig Face Feature Matching module, the extracted features are compared with those in the Pig Face Feature Gallery using cosine similarity. The ID corresponding to the feature with the highest similarity score is assigned to the test sample, enabling accurate recognition of known pigs and seamless registration of new ones, which enhances operational efficiency and supports non-invasive monitoring of individual pigs.
Our key contributions are as follows:
We propose a decoupled pig face detection, recognition, and registration system that reduces manual effort and improves recognition accuracy by focusing on relevant facial features.
We introduce a dynamic registration mechanism that allows the system to adapt to changes in the pig population without retraining, addressing the Open-Set recognition challenge inherent in PFOSR.
We design a dual-loss structure that combines metric learning loss and classification loss during training to enhance the discriminative power of the feature extractor. This dual-loss structure captures subtle differences among pig faces while maintaining robustness to intra-class variations, significantly improving recognition accuracy in both Closed-Set and Open-Set scenarios.
We create a comprehensive pig face dataset comprising a detection dataset, a recognition dataset, a side-face dataset, and a pig face gallery. This dataset facilitates the development of robust detection and recognition models for pig face identification tasks.
By integrating advanced detection and recognition techniques with a dynamic registration mechanism and our proposed dual-loss structure during training, our system effectively addresses the key challenges in pig face Open-Set recognition. This approach not only enhances operational efficiency but also contributes to improved animal welfare by facilitating accurate and non-invasive monitoring of individual pigs.
2. Materials and Methods
The proposed pig face detection and recognition system is designed to effectively handle dynamic farm environments through Pig Face Open-Set Recognition (PFOSR). The system operates through three primary stages: the first is pig face detection and recognition training, the second is known pig registration, and the third comprises unknown and known pig recognition, registration, and face gallery updating, as illustrated in Figure 1.
2.1. System Overview
Our proposed Pig Face Open-Set Recognition (PFOSR) system operates in three sequential phases to achieve robust and adaptable pig identification: the Training Phase, the Known Pig Registration Phase, and the Unknown and Known Pig Recognition and Registration Phase. In the Training Phase, we build and train high-accuracy pig face detection and recognition models on labeled datasets. The detection model isolates pig faces from cluttered scenes, while the recognition model extracts discriminative features for each identified pig. Once both models are trained, the Known Pig Registration Phase constructs and maintains a feature gallery for already recognized pigs. During this stage, embedding vectors for each known pig are extracted using the trained detection–recognition pipeline and stored in a dynamic repository, allowing new reference data to be incorporated without requiring model retraining. Finally, in the Unknown and Known Pig Recognition and Registration Phase, incoming pig images are passed through the same detection–recognition process and matched against the gallery. If a pig's features are not found in the gallery, the system automatically assigns a new identity and updates the repository to accommodate the newly encountered animal. This pipeline ensures continuous, real-time pig identification under Open-Set conditions.
Figure 2 illustrates the overall architecture. This three-stage pipeline provides a comprehensive solution for pig identification and management, enabling effective livestock monitoring and adapting to changes in the pig population. By adopting this robust method for the advanced detection, recognition, and registration process of pigs, the system lays a reliable foundation for future deployment in farm environments.
2.2. Dataset
In this study, we constructed several datasets to train and evaluate our pig face detection and recognition system. These datasets are designed to capture the variability in pig appearances and environmental conditions, ensuring the robustness and generalization ability of our models.
Small-Scale Pig Face Detection Dataset. To enhance the robustness of our pig face detection model and minimize manual annotation efforts for future datasets, we developed a high-quality pig face detection dataset. This dataset comprises 1500 images collected from 20 selected pigs, captured under diverse environmental conditions, lighting, and angles. The images were collected from real farm environments where pigs are housed in enclosures, resulting in some pig faces being partially obscured by bars. During annotation, we included these obstructions within the bounding boxes, as removing them was not feasible. Additionally, the dataset includes various perspectives, such as side and frontal views, adding to its complexity. Each image underwent meticulous manual annotation using the LabelMe tool by trained annotators to ensure precision and consistency. A detailed visualization of the annotation process can be found in the Supplementary Material (Figure S1). The dataset was systematically divided into training, validation, and testing subsets in a 6:2:2 ratio to evaluate the model's performance comprehensively. Sample images from this dataset are shown in Figure 3.
Known Pig Face Recognition Dataset. This dataset was developed to train a high-performing feature extractor for pig face recognition and to validate the model’s ability to recognize known pig identities. Utilizing the pig face detection model trained on the aforementioned small-scale dataset, we automatically processed a larger collection of images, resulting in 20,000 images from 56 pigs. This approach significantly reduced the need for manual annotation. The dataset is divided into training, validation, and testing subsets with a 6:2:2 ratio, ensuring the model’s robustness in recognizing known pig identities under varied conditions.
Known Pig Face Feature Dataset. To build the Pig Face Feature Gallery, we collected an additional 50 high-quality images for each of the 56 known pigs. These images, captured under diverse conditions, were processed using our pre-trained pig face detection model to extract pig face regions. Our face feature extractor model then generated feature vectors from these images, which were stored in the gallery to ensure robust feature representation.
Unknown Pig Face Feature Dataset. For dynamic registration, we collected a dataset of 9 pigs not included in the training phase, comprising 270 images per pig under varied conditions. From these, 30 images per pig were used for feature registration. These images were processed using our detection model and feature extractor to generate feature vectors, facilitating the seamless integration of new pigs into the feature gallery without retraining.
Unknown Pig Face Test Dataset. To evaluate the generalization and adaptability of our recognition model, we collected a test dataset comprising 900 images from the same 9 pigs in the Unknown Pig Face Feature Dataset. These images were captured under diverse conditions to simulate real-world scenarios, including variations in lighting, growth stages, and environmental factors. This dataset was designed to test the model’s ability to recognize previously unseen pigs and assess its robustness and adaptability in dynamic farm environments.
Figure 4 provides illustrative samples from the dataset, highlighting representative pig images captured under various lighting and environmental conditions.
65 Known Pig Face Testing Dataset. The dataset is designed to evaluate the performance of our pig face recognition model in Closed-Set scenarios after the registration of new classes. This dataset merges both the testing set of the Known Pig Face Recognition Dataset (56 pigs) and the Unknown Pig Face Test Dataset (9 pigs) to simulate a realistic environment where new pigs are introduced and registered into the system.
2.3. Training Phase
In this study, we developed a decoupled pig face detection and recognition system comprising several interconnected modules designed for accurate identification of pigs. The system includes the Pig Face Detection Module and the Pig Face Recognition Module.
2.3.1. Pig Face Detection Module
To optimize resource utilization and effectively adapt to both training and testing phases, we developed a high-performance pig face detection model based on the YOLOv8 [26] architecture. By training YOLOv8 on a small, manually annotated dataset of pig faces, the model learned to accurately identify and localize pig faces in various conditions. Once trained, YOLOv8 was utilized to process a larger collection of images, automatically detecting and cropping pig faces without the need for manual intervention. This automation significantly reduced the time and labor associated with dataset creation, ensuring consistency in the extracted face images and enhancing the scalability of our recognition system. This approach allows for efficient processing of extensive datasets, facilitating the development of robust pig face recognition models.
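As a concrete illustration, the following is a minimal sketch of how a trained YOLOv8 detector could be used to automatically crop pig faces from a folder of raw images. It assumes the ultralytics Python package; the weight file and directory names are hypothetical.

```python
from pathlib import Path

from PIL import Image
from ultralytics import YOLO

# Hypothetical weight file produced by training on the small annotated dataset.
detector = YOLO("pig_face_detector.pt")

def crop_pig_faces(image_dir: str, out_dir: str, conf_thres: float = 0.5) -> None:
    """Detect pig faces in every image and save each detection as a crop."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for img_path in sorted(Path(image_dir).glob("*.jpg")):
        result = detector(str(img_path), conf=conf_thres)[0]
        img = Image.open(img_path).convert("RGB")
        for i, box in enumerate(result.boxes.xyxy.tolist()):
            x1, y1, x2, y2 = map(int, box)
            img.crop((x1, y1, x2, y2)).save(out / f"{img_path.stem}_face{i}.jpg")
```

Running a script of this kind once over the raw image collection yields the cropped faces used to build the recognition datasets without manual cropping.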
YOLOv8 integrates a CSPDarknet backbone [27] designed to improve the efficiency of feature extraction while maintaining high accuracy. The backbone's Cross-Stage Partial Network (CSP) [28] structure divides the network into segments containing multiple residual blocks, reducing computational load and the number of parameters, which is crucial for processing large datasets effectively. The YOLOv8 detection head consists of convolutional and pooling layers that process feature maps, which are then converted into precise detection outputs through additional convolutional layers and fully connected layers. This structured approach ensures that each detected pig face is accurately cropped and prepared for the subsequent recognition stage.
The YOLOv8 total loss consists of a class loss, an objectness loss, and a location loss:

$$\mathcal{L}_{total} = \mathcal{L}_{cls} + \mathcal{L}_{obj} + \mathcal{L}_{loc}$$

where $\mathcal{L}_{cls}$ ensures accurate classification of detected objects into the correct categories, $\mathcal{L}_{obj}$ assesses the accuracy in identifying whether a region contains a pig face, and $\mathcal{L}_{loc}$ measures the precision in locating and sizing detected objects.
By leveraging the advanced capabilities of YOLOv8, we effectively reduce the human input required in the preprocessing stages, enhancing the efficiency and scalability of our pig face recognition system. This approach not only conserves resources but also ensures that the system is robust and adaptable to various operational scenarios within smart farming environments.
2.3.2. Pig Face Recognition Module
For the pig face recognition component, we employ a customized Vision Transformer (ViT) architecture designed to function as a robust feature extractor, capable of capturing intricate facial details necessary for accurate identification. Our approach leverages the strengths of pre-trained ViT models while introducing modifications to enhance feature discriminability and robustness against intra-class variations.
The proposed model is designed to extract and refine image features, leveraging the power of the Vision Transformer (ViT) architecture and a tailored embedding mechanism to generate highly discriminative feature embeddings, as shown in Figure 5. The process begins with the input image, which has a shape of 3 × 224 × 224. Each image is divided into fixed-size 16 × 16 patches, resulting in a total of 196 patches per image. Each patch is flattened into a vector of size 768 (16 × 16 × 3) and then linearly projected into a fixed embedding dimension, forming an output tensor of shape (Batch size × 196 × 768). To retain positional information, learnable positional embeddings are added to these patch embeddings, maintaining the shape (Batch size × 196 × 768). This tensor is then passed through the Transformer Encoder, the backbone of ViT, which captures global context and relationships among patches using Multi-Head Self-Attention (MSA) and Feed-Forward Networks (FFN). MSA computes attention weights across patches to derive globally meaningful features, while the FFN enhances feature representation through non-linear transformations. Each Transformer Encoder layer includes residual connections and Layer Normalization for stability and convergence. The ViT-Base model contains 12 such layers, processing the input tensor while retaining its shape (Batch size × 196 × 768).
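The shape bookkeeping described above can be verified with a few lines of PyTorch. This snippet is purely illustrative of the patchification step, not the actual learned projection used in the model:

```python
import torch

x = torch.randn(8, 3, 224, 224)                   # a batch of 8 images
# Cut each image into non-overlapping 16x16 patches.
patches = x.unfold(2, 16, 16).unfold(3, 16, 16)   # (8, 3, 14, 14, 16, 16)
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(8, 196, 768)
print(patches.shape)                              # torch.Size([8, 196, 768])
```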
A learnable classification token (CLS token) is prepended to the patch embeddings before entering the Transformer Encoder. After processing, the CLS token, now containing the global representation of the input image, is extracted, resulting in a tensor of shape (Batch size × 768). This serves as the final output of the ViT backbone.
The embedding layer further refines this high-dimensional global feature into a lower-dimensional space suitable for downstream tasks. This layer includes a sequence of transformations: a fully connected layer reduces the feature dimension from 768 to 512, followed by Batch Normalization, ReLU activation, and Dropout for regularization. Another fully connected layer then maps the feature to the target embedding dimension of 64, followed by Batch Normalization, yielding the final feature embeddings of shape (Batch size, 64).
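A minimal PyTorch sketch of this architecture is shown below, assuming a timm ViT-Base backbone; the class name and dropout rate are illustrative, and the backbone's pooled 768-dimensional output stands in for the extracted CLS token described above.

```python
import timm
import torch
import torch.nn as nn

class PigFaceEmbedder(nn.Module):
    """ViT-Base backbone followed by the 768 -> 512 -> 64 embedding head."""

    def __init__(self, embed_dim: int = 64, dropout: float = 0.5):
        super().__init__()
        # num_classes=0 strips the classification head, so the backbone
        # returns the 768-d global (CLS) representation described in the text.
        self.backbone = timm.create_model(
            "vit_base_patch16_224", pretrained=True, num_classes=0
        )
        self.head = nn.Sequential(
            nn.Linear(768, 512),
            nn.BatchNorm1d(512),
            nn.ReLU(inplace=True),
            nn.Dropout(dropout),
            nn.Linear(512, embed_dim),
            nn.BatchNorm1d(embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))  # (batch, 64) embeddings
```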
To enhance the discriminative capability of the feature extractor, we adopt a dual-loss training strategy that combines SubCenterArcFace Loss and Center Loss. This approach ensures that the model not only learns to classify known pigs accurately but also maintains robust feature representations that generalize well to unknown individuals, addressing the Open-Set recognition challenge.
$$\mathcal{L}_{total} = \lambda_{1}\,\mathcal{L}_{cls} + \lambda_{2}\,\mathcal{L}_{metric}$$

where $\mathcal{L}_{cls}$ denotes the classification (Center) loss, $\mathcal{L}_{metric}$ denotes the metric (SubCenterArcFace) loss, and $\lambda_{1}$ and $\lambda_{2}$ are hyperparameters that balance their contributions. $\lambda_{1}$ is set to 1, and $\lambda_{2}$ is set to 0.5.
SubCenterArcFace Loss. This Loss is an advanced adaptation of the ArcFace Loss, renowned in facial recognition technologies for enhancing the discriminative power of the feature embeddings. The key modification in SubCenterArcFace Loss is the introduction of multiple sub-centers for each class in the feature space. This approach is designed to deal more effectively with intra-class variations by allowing multiple centroids per class, reducing the penalty for intra-class deviations that are still within a reasonable range of the true class center. This flexibility helps to capture a more nuanced representation of each pig’s facial features, accommodating slight variations that are natural among different individuals within the same category.
$$\theta_{i,j} = \arccos\left(\max_{k}\left(W_{j,k}^{\top} x_{i}\right)\right)$$

where $\theta_{i,j}$ is the angle between the feature vector $x_{i}$ of the i-th example and the weight vectors of the j-th class, $\arccos$ calculates the angle from the cosine similarity, and $\max_{k}$ finds the maximum similarity among the $K$ sub-centers $W_{j,k}$ of class $j$.
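To make the sub-center computation concrete, the following PyTorch sketch evaluates the angles above for a batch of embeddings. The function name and tensor shapes are illustrative; the full SubCenterArcFace Loss would additionally apply ArcFace's additive angular margin and softmax scaling on top of these angles.

```python
import torch
import torch.nn.functional as F

def subcenter_angles(features: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    """theta[i, j] = arccos(max over the K sub-centers of class j).

    features: (B, D) embeddings; weights: (C, K, D) sub-center vectors.
    """
    f = F.normalize(features, dim=-1)            # unit-norm embeddings
    w = F.normalize(weights, dim=-1)             # unit-norm sub-centers
    cos = torch.einsum("bd,ckd->bck", f, w)      # cosine to every sub-center
    cos_max = cos.max(dim=-1).values.clamp(-1 + 1e-7, 1 - 1e-7)
    return torch.acos(cos_max)                   # (B, C) angles in radians
```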
Center Loss. It complements the metric learning loss by promoting intra-class compactness. It minimizes the distances between feature vectors and their corresponding class centers, ensuring that features of the same pig are tightly clustered in the embedding space. This loss is particularly effective in reducing intra-class variance, thereby enhancing the model’s ability to generalize to new, unseen pigs in Open-Set scenarios. The Center Loss is defined as:
$$\mathcal{L}_{Center} = \frac{1}{2}\sum_{i=1}^{m}\left\|x_{i} - c_{y_{i}}\right\|_{2}^{2}$$

where $x_{i}$ is the feature vector of the i-th sample, $c_{y_{i}}$ is the center of its class $y_{i}$, and $m$ is the number of samples in the mini-batch.
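A minimal PyTorch implementation of this definition, with learnable class centers, might look as follows; the class name and random initialization are illustrative.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """L_center = 1/2 * sum_i ||x_i - c_{y_i}||^2 over a mini-batch."""

    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        # One learnable center per pig identity.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        diff = features - self.centers[labels]   # distance to own-class center
        return 0.5 * diff.pow(2).sum()
```

In practice the sum is often averaged over the batch so that the loss magnitude is independent of batch size.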
By combining SubCenterArcFace Loss with Center Loss, our dual-loss structure ensures that the feature extractor not only maximizes inter-class separability but also maintains strong intra-class compactness. This synergy significantly improves the model’s discriminative power, enabling it to capture subtle differences among pig faces while remaining robust to variations within the same class. Consequently, the recognition model achieves high accuracy in both Closed-Set and Open-Set scenarios, effectively addressing the challenges inherent in PFOSR.
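Putting the two objectives together, a single training step under the stated weights could look like the sketch below. Here center_loss is the CenterLoss module above and arcface_loss is any SubCenterArcFace implementation taking (embeddings, labels); mapping λ1 to the Center term and λ2 to the SubCenterArcFace term follows the sentence order in the equation above and is an assumption.

```python
LAMBDA1, LAMBDA2 = 1.0, 0.5  # loss weights stated in the text

def train_step(model, center_loss, arcface_loss, optimizer, images, labels):
    """One dual-loss optimization step (illustrative)."""
    embeddings = model(images)                        # (batch, 64) embeddings
    loss = (LAMBDA1 * center_loss(embeddings, labels)
            + LAMBDA2 * arcface_loss(embeddings, labels))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```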
2.4. Registration and Face Gallery Updating System
This stage involves constructing and maintaining the Pig Face Feature Gallery, enabling the system to adapt dynamically to changes in the pig population. The Pig Face Feature Gallery serves as a repository for feature vectors extracted from images of known pigs, organized as a three-dimensional array with dimensions (N, M, D), where N represents the number of distinct pig identities (initially 56), M denotes the number of images per pig (30), and D corresponds to the size of the feature vector extracted by the ViT model (768). The gallery's initial dimensions are therefore (56, 30, 768), representing the feature vectors of the 56 known pigs.
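Conceptually, the gallery can be held as a simple NumPy array keyed by a parallel list of IDs. The sketch below is an assumption about the data layout, not the authors' exact implementation:

```python
import numpy as np

N, M, D = 56, 30, 768                          # identities, images per pig, feature size
gallery = np.zeros((N, M, D), dtype=np.float32)
pig_ids = [f"pig_{i:03d}" for i in range(N)]   # illustrative ID scheme

def register_known_pig(row: int, feats: np.ndarray) -> None:
    """Store the M feature vectors extracted for one known pig."""
    assert feats.shape == (M, D)
    gallery[row] = feats
```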
Feature Matching. Our system employs cosine distance to evaluate the similarity between feature vectors during both the new pig registration and inference recognition stages in PFOSR. Cosine similarity measures the cosine of the angle between two vectors in a multi-dimensional space, effectively assessing their similarity regardless of vector magnitude. This metric is ideal for determining whether two feature vectors belong to the same pig.
$$\text{similarity}(A, B) = \cos(\theta) = \frac{A \cdot B}{\|A\|\,\|B\|}$$

where $A$ and $B$ are two pig face feature vectors, $A \cdot B$ denotes the dot product of the vectors, and $\|A\|$ and $\|B\|$ are their magnitudes (norms).
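In code, matching a query embedding against the gallery reduces to a vectorized cosine-similarity computation followed by an argmax. The helper below is a sketch under the (N, M, D) layout assumed earlier:

```python
import numpy as np

def match_query(query: np.ndarray, gallery: np.ndarray) -> tuple[int, float]:
    """Return (identity index, best cosine similarity) for one query vector."""
    flat = gallery.reshape(-1, gallery.shape[-1])          # (N*M, D)
    sims = flat @ query / (
        np.linalg.norm(flat, axis=1) * np.linalg.norm(query)
    )
    best = int(np.argmax(sims))
    return best // gallery.shape[1], float(sims[best])
```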
Initial Registration of Known Pigs. In the initial registration stage, we construct the Pig Face Feature Gallery using the Known Pig Face Feature Dataset. This dataset contains high-quality images of 56 known pigs, with 50 images per pig captured under diverse conditions to ensure robust feature representation. Each image is processed using the pre-trained detection model (YOLOv8) to detect and crop pig faces, and then the ViT feature extractor generates a 768-dimensional feature vector for each pig face. These feature vectors are organized in the feature gallery with dimensions (N, M, 768), where N = 56 and M = 30. This gallery serves as the reference for recognizing known pigs during inference.
Dynamic Registration of New Pigs. During inference, when the system encounters pigs not present in the initial feature gallery, it utilizes the Unknown Pig Face Feature Dataset for dynamic registration. For each pig face detected in the testing dataset, the ViT feature extractor generates a feature vector, which is compared against the existing feature gallery using cosine similarity. If the highest similarity score falls below a predefined threshold (e.g., 0.85), the pig is classified as new. Upon classification as a new pig, the system registers the individual by incorporating additional data from the Unknown Pig Face Feature Dataset. Specifically, 50 images of the new pig are collected under various angles and conditions to capture a comprehensive representation. These images are processed through the detection model and the ViT feature extractor to generate feature vectors, which are then added to the Pig Face Feature Gallery. The gallery is expanded to include the new individual, updating its dimensions to (N, M, 768), where N increments by one for each new pig registered. This dynamic updating mechanism ensures that the system remains current with the changing pig population. By continuously expanding the feature gallery to include new pigs, the system enhances its scalability and efficiency, maintaining high recognition accuracy without the need for retraining. This approach allows the system to adapt to the dynamic nature of farm environments, where new pigs may be introduced regularly.
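The decision logic can be summarized in a few lines. The threshold value comes from the text, while the callback that collects and embeds the new pig's reference images is a hypothetical placeholder:

```python
import numpy as np

THRESHOLD = 0.85  # predefined similarity threshold

def recognize_or_register(query, gallery, pig_ids, collect_new_features):
    """Return (pig_id, gallery); grows the gallery when no match clears the threshold."""
    idx, sim = match_query(query, gallery)
    if sim >= THRESHOLD:
        return pig_ids[idx], gallery                      # known pig
    new_id = f"pig_{len(pig_ids):03d}"                    # illustrative new ID
    pig_ids.append(new_id)
    new_row = collect_new_features()                      # (M, D) reference features
    gallery = np.concatenate([gallery, new_row[None]], axis=0)
    return new_id, gallery
```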
In summary, the three stages of the PFOSR pipeline operate sequentially at inference time. In the first stage, the YOLOv8 model detects pig faces in the images, and the detected faces are cropped for feature extraction. The cropped pig face images are then input into the ViT model to extract feature vectors. After feature extraction, each feature vector is compared against all vectors in the Pig Face Feature Gallery using cosine similarity, and the pig ID corresponding to the highest similarity score is assigned to the detected pig.
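The end-to-end inference path then glues these pieces together. The sketch below assumes the detector and match_query helper defined above, an embedder module in eval mode, and a simple resize-only preprocessing pipeline; normalization statistics and other deployment details are omitted.

```python
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),               # normalization omitted for brevity
])

@torch.no_grad()
def identify(image_path, detector, embedder, gallery, pig_ids):
    """Detect each pig face in one image and assign the best-matching gallery ID."""
    result = detector(str(image_path))[0]
    img = Image.open(image_path).convert("RGB")
    assignments = []
    for box in result.boxes.xyxy.tolist():
        face = img.crop(tuple(map(int, box)))
        feat = embedder(preprocess(face).unsqueeze(0)).squeeze(0).numpy()
        idx, sim = match_query(feat, gallery)
        assignments.append((pig_ids[idx], sim))
    return assignments
```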
4. Discussion
This study establishes a high-performing three-stage pipeline that integrates advanced pig face detection and recognition techniques with a dynamic registration mechanism, effectively addressing key challenges in dynamic livestock farming environments. The pig face detection stage is implemented using YOLOv8, which achieves state-of-the-art performance on the Small-Scale Pig Face Detection Dataset. The model achieved an mAP@0.5 of 0.990, a precision of 0.972, and an overall mAP@0.5:0.95 of 0.869, surpassing other YOLO variants such as YOLOv5, YOLOv6, and YOLOv7. These metrics demonstrate the model's ability to accurately localize pig faces, even under challenging conditions, including varying lighting, occlusions, and complex backgrounds. Additionally, the recall of 0.895 further underscores the system's effectiveness in detecting pig faces, significantly minimizing missed detections.
The second stage of the system, focusing on pig face recognition, utilizes a modified Vision Transformer (ViT) architecture optimized with a dual-loss strategy combining SubCenterArcFace Loss and Center Loss. This model achieved a Closed-Set recognition accuracy (CSA) of 96.60% on the Known Pig Face Test Dataset. A key feature of the proposed system is the dynamic registration mechanism, which allows the seamless integration of new pigs into the feature gallery without retraining the model. The dynamic gallery, which originally contained 56 pigs, was assessed by gradually introducing 9 additional unknown pigs. The system upheld its high performance, achieving an updated AUROC of 94.72% and an F1-Open score of 92.93%, demonstrating its ability to adapt to changes in population size. The gallery size was also systematically analyzed to optimize performance. A gallery size of 30 images per pig achieved the best trade-off between computational efficiency and recognition accuracy, with a CSA of 96.88%, an AUROC of 95.31%, and an OSCR of 95.87% in the PFOSR task, and a CSA of 96.76%, a precision of 97.53%, an NMI of 97.32%, and a precision@1 of 96.76% in the PFCSR task; beyond 30 images, performance gains plateaued.
The practical implications of these findings are significant. The system's ability to achieve high precision in Closed-Set recognition and strong performance in Open-Set scenarios makes it a valuable tool for modern livestock management. By automating the identification and monitoring of pigs, the system reduces labor costs, improves record-keeping accuracy, and enhances animal welfare. Additionally, its dynamic registration mechanism ensures that the system remains operational in real time, even as the farm population changes, addressing a critical gap in existing animal recognition systems.
Despite these achievements, the study reveals areas for improvement. While the proposed system performs well on specific datasets, its scalability to larger herds remains untested. Larger datasets could potentially introduce computational bottlenecks in real-time operations, necessitating further optimization of the feature-matching process. Expanding the dataset to include diverse breeds and more complex farm environments would enhance the system’s generalizability.