Topic Editors

Department of Engineering, School of Science and Technology, University of Trás-os-Montes e Alto Douro (UTAD), 5000-801 Vila Real, Portugal
1. School of Science and Technology, University of Trás-os-Montes e Alto Douro (UTAD), 5000-801 Vila Real, Portugal
2. INESC TEC–Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
Department of Veterinary Science, University of Trás-os-Montes e Alto Douro (UTAD), 5000-801 Vila Real, Portugal

AI, Deep Learning, and Machine Learning in Veterinary Science Imaging

Abstract submission deadline
31 August 2025
Manuscript submission deadline
31 October 2025

Topic Information

Dear Colleagues,

This topic, "AI, Deep Learning, and Machine Learning in Veterinary Science Imaging", examines the transformative impact of artificial intelligence and advanced machine learning techniques on diagnostics and imaging in the veterinary sciences. It explores how AI-driven tools are enhancing the accuracy, speed, and efficiency of diagnostic imaging in veterinary medicine, including applications in radiology, ultrasound, MRI, and CT. By integrating deep learning algorithms, veterinary professionals can now detect and diagnose conditions with unprecedented precision, enabling early intervention and personalized treatment plans for companion animals. The topic also covers the challenges and ethical considerations of implementing AI in veterinary practice, as well as the potential for AI to bridge gaps in access to specialized imaging services in remote areas. Through its collection of research articles, case studies, and expert opinions, it serves as an essential resource for veterinarians, researchers, and engineers at the forefront of integrating AI into veterinary science.

Dr. Vitor Filipe
Dr. Lio Gonçalves
Dr. Mário Ginja
Topic Editors

Keywords

  • artificial intelligence
  • deep learning
  • machine learning
  • veterinary imaging
  • diagnostic accuracy
  • radiology
  • ultrasound
  • MRI
  • CT scans
  • ethical considerations

Participating Journals

Journal                Impact Factor   CiteScore   Launched   First Decision (median)   APC
Animals                2.7             4.9         2011       16.1 days                 CHF 2400
Computers              2.6             5.4         2012       15.5 days                 CHF 1800
Information            2.4             6.9         2010       16.4 days                 CHF 1600
Journal of Imaging     2.7             5.9         2015       18.3 days                 CHF 1800
Veterinary Sciences    2.0             2.9         2014       21.2 days                 CHF 2100

Preprints.org is a multidisciplinary platform offering a preprint service designed to facilitate the early sharing of your research. It supports and empowers your research journey from the very beginning.

MDPI Topics is collaborating with Preprints.org and has established a direct connection between MDPI journals and the platform. Authors are encouraged to take advantage of this opportunity by posting their preprints at Preprints.org prior to publication:

  1. Share your research immediately: disseminate your ideas prior to publication and establish priority for your work.
  2. Safeguard your intellectual contribution: protect your ideas with a time-stamped preprint that serves as proof of your research timeline.
  3. Boost visibility and impact: increase the reach and influence of your research by making it accessible to a global audience.
  4. Gain early feedback: receive valuable input and insights from peers before submitting to a journal.
  5. Ensure broad indexing: preprints are indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit, and Europe PMC.

Published Papers (6 papers)

14 pages, 568 KiB  
Review
Artificial Intelligence in Chest Radiography—A Comparative Review of Human and Veterinary Medicine
by Andrea Rubini, Roberto Di Via, Vito Paolo Pastore, Francesca Del Signore, Martina Rosto, Andrea De Bonis, Francesca Odone and Massimo Vignoli
Vet. Sci. 2025, 12(5), 404; https://doi.org/10.3390/vetsci12050404 - 25 Apr 2025
Abstract
The integration of artificial intelligence (AI) into chest radiography (CXR) has greatly impacted both human and veterinary medicine, enhancing diagnostic speed, accuracy, and efficiency. In human medicine, AI has been extensively studied, improving the identification of thoracic abnormalities, diagnostic precision in emergencies, and the classification of complex conditions such as tuberculosis, pneumonia, and COVID-19. Deep learning-based models assist radiologists by detecting patterns, generating probability maps, and predicting outcomes such as heart failure. However, AI remains supplementary to clinical expertise owing to challenges such as data limitations, algorithmic biases, and the need for extensive validation; ethical concerns and regulatory constraints also hinder full implementation. In veterinary medicine, AI is still in its early stages and is rarely used, although it has the potential to become a valuable tool for supporting radiologists; challenges include smaller datasets, breed variability, and limited research. Addressing these through focused research on species with less phenotypic variability (such as cats) and through cross-sector collaborations could advance AI in veterinary medicine. Both fields demonstrate AI’s potential to enhance diagnostics but emphasize the ongoing need for human expertise in clinical decision making. Anatomical differences between the two fields must also be considered for effective AI adaptation.

23 pages, 6490 KiB  
Article
GCNTrack: A Pig-Tracking Method Based on Skeleton Feature Similarity
by Zhaoyang Yin, Zehua Wang, Junhua Ye, Suyin Zhou and Aijun Xu
Animals 2025, 15(7), 1040; https://doi.org/10.3390/ani15071040 - 3 Apr 2025
Abstract
Pig tracking contributes to the assessment of pig behaviour and health, but tracking pigs on real farms is difficult: because the camera field of view (FOV) is incomplete, pigs frequently entering and exiting the FOV degrade tracking accuracy. To improve pig-tracking efficiency, we propose a pig-tracking method based on skeleton feature similarity, named GCNTrack. We used YOLOv7-Pose to extract pig skeleton keypoints and designed a dual-tracking strategy that combines IOU matching with a skeleton keypoint-based graph convolutional re-identification (Re-ID) algorithm to track pigs continuously, even when they return from outside the FOV. Three sets of videos from the same FOV, of short, medium, and long duration, respectively, were used to verify the model’s performance. GCNTrack achieved a Multiple Object Tracking Accuracy (MOTA) of 84.98% and an identification F1 score (IDF1) of 82.22% on the first set (short duration, 87 s to 220 s), and a tracking precision of 74% on the second set (medium duration, average 302 s). On the third set (long duration, 14 min), pigs entered the scene 15.29 times on average, with an average of 6.28 identity switches (IDSs) per pig. In conclusion, our method provides an accurate and reliable pig-tracking solution for scenarios with an incomplete camera FOV.
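The dual-tracking strategy pairs frame-to-frame IOU matching with a Re-ID fallback for pigs re-entering the FOV. The graph convolutional Re-ID network is beyond a short example, but the greedy IOU-association step can be sketched as follows (a minimal sketch; function names and the 0.3 threshold are illustrative assumptions, not taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_tracks(tracks, detections, threshold=0.3):
    """Greedy IoU association: assign each detection to the best unused track.

    tracks: {track_id: last known box}; detections: list of boxes.
    Returns (matches {detection_index: track_id}, unmatched detection indices).
    """
    matches, unmatched, used = {}, [], set()
    for d_idx, det in enumerate(detections):
        best_t, best_iou = None, threshold
        for t_id, box in tracks.items():
            if t_id in used:
                continue
            score = iou(box, det)
            if score > best_iou:
                best_t, best_iou = t_id, score
        if best_t is None:
            unmatched.append(d_idx)  # candidate for the Re-ID stage
        else:
            matches[d_idx] = best_t
            used.add(best_t)
    return matches, unmatched
```

Detections left unmatched by IOU would then be handed to the skeleton-based Re-ID stage to recover the identities of pigs returning from outside the FOV.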

21 pages, 107660 KiB  
Article
YOLOv8A-SD: A Segmentation-Detection Algorithm for Overlooking Scenes in Pig Farms
by Yiran Liao, Yipeng Qiu, Bo Liu, Yibin Qin, Yuchao Wang, Zhijun Wu, Lijia Xu and Ao Feng
Animals 2025, 15(7), 1000; https://doi.org/10.3390/ani15071000 - 30 Mar 2025
Abstract
A refined YOLOv8A-SD model is introduced to address pig detection challenges in aerial surveillance of pig farms. The model incorporates the ADown attention mechanism and a dual-task strategy combining detection and segmentation. Testing used top-view footage from a large-scale pig farm in Sichuan, with 924 images for detection training and 216 for validation, and 2985 images for segmentation training and 1512 for validation. The model achieved 96.1% precision and 96.3% mAP50 in detection while maintaining strong segmentation performance (IoU: 83.1%). A key finding is that training on original images while applying segmentation preprocessing at test time provides optimal results, achieving excellent counting accuracy (25.05 vs. an actual 25.09 pigs) and simplifying practical deployment. The research demonstrates YOLOv8A-SD’s effectiveness in complex farming environments, providing reliable monitoring capabilities for intelligent farm management applications.

12 pages, 2088 KiB  
Article
Clinical Application of Monitoring Vital Signs in Dogs Through Ballistocardiography (BCG)
by Bolortuya Chuluunbaatar, YungAn Sun, Kyerim Chang, HoYoung Kwak, Jinwook Chang, WooJin Song and YoungMin Yun
Vet. Sci. 2025, 12(4), 301; https://doi.org/10.3390/vetsci12040301 - 24 Mar 2025
Abstract
This study evaluated the BCG Sense1 wearable device for monitoring heart rate (HR) and respiratory rate (RR) in dogs, comparing its performance to gold-standard ECG under awake and anesthetized conditions. Data were collected from twelve dogs: six awake beagles and six anesthetized client-owned dogs. Bland–Altman analysis and linear regression revealed strong correlations between BCG and ECG under both awake and anesthetized conditions (HR: r = 0.97, R2 = 0.94; RR: r = 0.78, R2 = 0.61; p < 0.001). While slight irregularities were noted in respiratory rate measurements in both groups, potentially affecting the concordance between methods, BCG maintained a significant correlation with ECG under anesthesia (HR: r = 0.96, R2 = 0.92; RR: r = 0.85, R2 = 0.72; p < 0.01). The wearable BCG Sense1 sensor enables continuous monitoring over 24 h, with ECG serving as the gold-standard reference. These findings suggest that BCG can be a good alternative to ECG for monitoring vital signs in clinical, perioperative, intraoperative, and postoperative settings. The strong correlation between the BCG and ECG signals in awake and anesthetized states highlights the promise of BCG technology in veterinary medicine. As a non-invasive, real-time monitoring system, the BCG Sense1 device strengthens clinical diagnosis and reduces stress-induced physiological variation.
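Bland–Altman analysis, as used above to compare BCG against ECG, reduces each paired measurement series to a bias (mean difference) and 95% limits of agreement (bias ± 1.96 standard deviations of the differences). A minimal sketch, assuming approximately normally distributed paired differences (illustrative only, not the authors' code):

```python
import statistics

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two paired measurement series."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A near-zero bias with narrow limits of agreement indicates that the two methods can be used interchangeably over the measured range.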

27 pages, 8269 KiB  
Article
Evaluating Optimal Deep Learning Models for Freshness Assessment of Silver Barb Through Technique for Order Preference by Similarity to Ideal Solution with Linear Programming
by Atchara Choompol, Sarayut Gonwirat, Narong Wichapa, Anucha Sriburum, Sarayut Thitapars, Thanakorn Yarnguy, Noppakun Thongmual, Waraporn Warorot, Kiatipong Charoenjit and Ronnachai Sangmuenmao
Computers 2025, 14(3), 105; https://doi.org/10.3390/computers14030105 - 16 Mar 2025
Abstract
Automating fish freshness assessment is crucial for ensuring quality control and operational efficiency in large-scale fish processing. This study evaluates deep learning models for classifying the freshness of Barbonymus gonionotus (Silver Barb) and optimizing their deployment in an automated fish quality sorting system. Three lightweight deep learning architectures, MobileNetV2, MobileNetV3, and EfficientNet Lite2, were analyzed across 18 configurations, varying model size (small, medium, large) and preprocessing (with and without). A dataset of 1200 images, categorized into three freshness levels, was collected from the Lam Pao Dam in Thailand. To enhance classification performance, You Only Look Once version 8 (YOLOv8) was used for object detection and image preprocessing. The models were evaluated on classification accuracy, inference speed, and computational efficiency, with the Technique for Order Preference by Similarity to Ideal Solution with Linear Programming (TOPSIS-LP) applied as a multi-criteria decision-making approach. The results indicated that the MobileNetV3 model with a large parameter size and preprocessing (M2-PL-P) achieved the highest closeness coefficient (CC), with an accuracy of 98.33% and an inference speed of 6.95 frames per second (fps). This study establishes a structured framework for integrating AI-driven fish quality assessment into fishery-based community enterprises, improving productivity and reducing reliance on manual sorting.
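TOPSIS ranks each model configuration by its closeness coefficient: the ratio of its distance from the worst (anti-ideal) point to its total distance from both the ideal and anti-ideal points. A minimal sketch of plain TOPSIS, without the linear-programming weighting step of TOPSIS-LP (the criteria, weights, and values used below are illustrative assumptions):

```python
import math

def topsis(matrix, weights, benefit):
    """Closeness coefficient for each alternative (row) in a decision matrix.

    matrix: rows = alternatives, columns = criteria
    weights: one weight per criterion
    benefit: True for benefit criteria (higher is better), False for cost criteria
    """
    n_rows, n_cols = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply the criterion weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(n_rows)))
             for j in range(n_cols)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n_cols)]
         for i in range(n_rows)]
    # Ideal (best) and anti-ideal (worst) points per criterion
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(d_neg / (d_pos + d_neg))  # closeness coefficient in [0, 1]
    return scores
```

For example, scoring two hypothetical configurations on accuracy and fps with equal weights, `topsis([[98.33, 6.95], [96.10, 9.20]], [0.5, 0.5], [True, True])`, returns one closeness coefficient per configuration; the highest-scoring configuration is selected.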

14 pages, 19850 KiB  
Article
Intelligent Deep Learning and Keypoint Tracking-Based Detection of Lameness in Dairy Cows
by Zongwei Jia, Yingjie Zhao, Xuanyu Mu, Dongjie Liu, Zhen Wang, Jiangtan Yao and Xuhui Yang
Vet. Sci. 2025, 12(3), 218; https://doi.org/10.3390/vetsci12030218 - 2 Mar 2025
Cited by 1
Abstract
With the ongoing development of computer vision technologies, the automation of lameness detection in dairy cows urgently requires improvement. To address the challenges of detection difficulty and technological limitations, this paper proposes an automated scoring method for cow lameness that integrates deep learning with keypoint tracking. First, the DeepLabCut tool is used to efficiently extract keypoint features while dairy cows walk, enabling automated monitoring and output of positional information. Then, the extracted positional data are combined with temporal data to construct a scoring model for cow lameness. The experimental results demonstrate that the proposed method tracks the keypoints of cow movement accurately in visible-light videos and satisfies the requirements for real-time detection. The model classifies the walking states of the cows into four levels, i.e., normal, mild, moderate, and severe lameness (corresponding to scores of 0, 1, 2, and 3, respectively). The detection results obtained in real-world environments exhibit high extraction accuracy of the keypoint positional information, with an average error of only 4.679 pixels and an overall accuracy of 90.21%. The detection accuracy was 89.0% for normal cows, 85.3% for mild lameness, 92.6% for moderate lameness, and 100.0% for severe lameness. These results demonstrate that applying keypoint detection technology to the automated scoring of lameness provides an effective solution for intelligent dairy management.
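The reported 4.679-pixel average error is a mean Euclidean distance between predicted and labelled keypoint positions. A minimal sketch of that metric (illustrative, not the authors' evaluation code):

```python
import math

def mean_pixel_error(predicted, ground_truth):
    """Mean Euclidean distance (in pixels) between paired (x, y) keypoints."""
    dists = [math.dist(p, g) for p, g in zip(predicted, ground_truth)]
    return sum(dists) / len(dists)
```

In practice the same metric would be averaged over all keypoints in all annotated frames to obtain a single localization-error figure for the tracker.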
