Article

Affordable 3D Technologies for Contactless Cattle Morphometry: A Comparative Pilot Trial of Smartphone-Based LiDAR, Photogrammetry and Neural Surface Reconstruction Models

by Sara Marchegiani 1,†, Stefano Chiappini 2,†, Md Abdul Mueed Choudhury 1, Guangxin E 3, Maria Federica Trombetta 1, Marina Pasquini 1, Ernesto Marcheggiani 1 and Simone Ceccobelli 1,*
1 Department of Agricultural, Food and Environmental Sciences, Università Politecnica delle Marche, 60131 Ancona, Italy
2 Department of Construction, Civil Engineering and Architecture, Università Politecnica delle Marche, 60131 Ancona, Italy
3 College of Animal Science and Technology, Southwest University, Chongqing 400715, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Agriculture 2025, 15(24), 2567; https://doi.org/10.3390/agriculture15242567
Submission received: 31 October 2025 / Revised: 30 November 2025 / Accepted: 10 December 2025 / Published: 11 December 2025
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

Abstract

Morphometric traits are closely linked to body condition, health, welfare, and productivity in livestock. In recent years, contactless 3D reconstruction technologies have been increasingly adopted to improve the accuracy and efficiency of morphometric evaluations. Conventional approaches for 3D reconstruction mainly employ Light Detection and Ranging (LiDAR) or photogrammetry. In contrast, emerging Artificial Intelligence (AI)-based methods, such as Neural Surface Reconstruction, 3D Gaussian Splatting, and Neural Radiance Fields, offer new opportunities for high-fidelity digital modeling. Thanks to their affordability, smartphones represent a cost-effective and portable platform for deploying these advanced tools, potentially supporting enhanced agricultural performance, accelerating sector digitalization, and thus reducing the urban–rural digital gap. This preliminary study assessed the viability of using smartphone-based LiDAR, photogrammetry, and AI models to obtain body measurements of Marchigiana cattle. Five morphometric traits manually collected on animals were compared with those extracted from smartphone-based 3D reconstructions. LiDAR measurements offered the most consistent estimates, with relative errors ranging from −1.55% to 4.28%, while photogrammetry showed relative errors ranging from 0.75% to −14.56%. AI-based models (NSR, 3DGS, NeRF) showed greater variability in accuracy, pointing to the need for further refinement. Overall, the results highlight the preliminary potential of portable 3D scanning technologies, particularly LiDAR-equipped smartphones, for non-invasive morphometric data collection in cattle.

1. Introduction

The increasing global demand for animal-derived proteins, driven by rapid population growth and evolving dietary preferences, necessitates the adoption of sustainable and efficient strategies in beef production systems [1]. In this context, Precision Livestock Farming (PLF) technologies, such as sensor networks, Internet of Things devices, and Artificial-Intelligence-based monitoring tools, offer promising solutions by enabling real-time assessment of animal health, welfare, and productivity [2,3].
Advancements in imaging and sensor-based systems have demonstrated considerable potential to reduce operational costs, and promote more sustainable livestock management practices, thus enhancing production efficiency [4]. However, their widespread adoption remains constrained by accessibility, user-friendliness, and the technical capacity of end users. As recently highlighted by the European Commission, digital innovations must be tailored to farmers’ needs, minimizing administrative burden and facilitating practical implementation [5].
Addressing these challenges requires inclusive strategies that bridge the gap between research and farm practices. Strengthening knowledge exchange frameworks such as the Agricultural Knowledge and Innovation System (AKIS) and the Standing Committee on Agricultural Research (SCAR), along with dedicated training and advisory services, is essential to enable effective and equitable adoption of technologies [6,7]. Moreover, leveraging widely available tools like smartphones can support the integration of PLF solutions into daily farm operations, contributing to a more sustainable, resilient, and competitive livestock sector [8]. The present case study exemplifies this approach, applying advanced technologies such as LiDAR, photogrammetry, and AI models through portable and widely accessible devices such as smartphones. The study was conducted on a small- to medium-sized cattle farm to demonstrate the feasibility and advantages of integrating these tools into traditional EU farming systems.
This case study focused on the Marchigiana cattle, a traditional Italian beef cattle breed highly valued for its resilience, adaptability, and high-quality meat production [9]. The breed is widespread in Central-Southern Italy and traditionally raised in semi-extensive cow-calf systems. It is also bred in several countries outside Europe, such as the United States, Canada, Brazil, Argentina, and Australia [10].
In the beef livestock sector, accurate estimation of body size is essential for optimizing feed intake and growth performance, enhancing breeding programs carried out in genetic centers, ensuring consistent meat quality, and reducing the overall environmental footprint [11]. The assessment of this trait is generally performed through manual measurements using scales, measuring tapes, and sticks [12,13]. However, this approach can induce stress in cattle, pose serious risks to farm workers, and hinder the progress of beef cattle breeding [14,15]. Moreover, manual measurements are susceptible to variability in anatomical landmark placement and differences in operator experience [16]. To address these limitations, non-contact approaches based on 3D imaging have recently been introduced. 3D model reconstruction makes it possible to extract and repeat several morphometric measurements whenever necessary, thereby minimizing subjectivity and offering a clear advantage over traditional manual methods. For instance, Gaudioso et al. [17] proposed the use of photogrammetry to measure morphometric traits of cattle with a portable instrument, the “Photozoometer”, equipped with two synchronized cameras. Recent studies have investigated the use of innovative technologies to estimate livestock body parameters, including body shape, weight, growth performance, and yield [18,19,20,21]. Huang et al. [22] proposed a three-dimensional (3D) digital modeling approach for non-contact body measurement of Qinchuan cattle using a LiDAR sensor, and later extended this work with a Deep Learning approach to obtain cattle body measurements [23]. Le Cozler et al. [24] designed a cattle body scanning system using laser beams to achieve high-precision 3D shape reconstruction. Nilchuen et al. [25] proposed using a mobile phone to obtain body measurements and a regression model to estimate animal weight, achieving approximately 50% accuracy using chest depth measurements and body condition score information. A real-time system for acquiring 3D point clouds of beef cattle was also proposed by Li et al. [26], enabling stress-free measurements of the animals within an acquisition time of only 0.08 s. Subsequently, a series of algorithms for point cloud preprocessing, registration, and 3D reconstruction were employed to extract body dimension data [27,28]. The development of computer vision-based applications for estimating livestock body dimensions represents a significant advancement towards automation and decision-making in animal husbandry, enabling accurate assessments without direct physical contact with the animals [4,29,30]. Cominotte et al. [18] developed an automatic algorithm to extract measurement points, such as the back and buttocks, for beef cattle. Structure from Motion photogrammetry has also been proposed to reconstruct 3D models of dairy cows using the Huawei P20 smartphone, with subsequent refinement of the cow’s point cloud using RANdom SAmple Consensus and Euclidean clustering [31]. Deep learning (DL) methods have been used to estimate the body weight of beef cattle from 3D point cloud data [31,32]. Among recent advancements in DL methods, Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) represent pivotal innovations in rendering models [33,34]. Both techniques enable the creation of photorealistic 3D scenes using only a series of photographs of the observed object. Recently, Nilchuen et al. [35] proposed a smartphone-based system for real-time body measurement and weight estimation in Brahman cattle using a cloud-integrated artificial intelligence model, with the aim of providing a low-cost and user-friendly tool for digital phenotyping. Despite this progress, several challenges remain in implementing non-contact body measurements in livestock, including the acquisition of high-quality point cloud data, the identification of anatomical landmarks, and the extraction of reliable morphometric traits. In practice, several issues need to be carefully handled. For example, when animals move freely, noise levels can increase, point cloud sections may be missing or incomplete, and inconsistencies can arise from object deformation. If these sources of error are not managed, they can compromise measurement precision and reproducibility [36,37].
Although substantial advancements have been achieved in 3D animal body reconstruction, widely applicable methodologies for 3D acquisition using affordable and easy-to-use devices, such as smartphones, are still lacking. This approach would also align with the EU’s goals of advancing technological digitalization in the agri-livestock sector.
To address these gaps, the present preliminary study aims to explore a 3D reconstruction approach by leveraging a smartphone’s optical camera and LiDAR scanner to acquire beef cattle body measurements through point clouds and meshes. The specific objectives of the current investigation are (1) to evaluate a portable scanning system for reconstructing the 3D shape of cattle, and (2) to extract livestock information from 3D point clouds and meshes, evaluating the accuracy of manual versus digital measurements on three beef cows.

2. Materials and Methods

2.1. Location and Animals’ Descriptions

The experimental data for this study were collected in 2024 on a commercial beef cattle farm located in the Marche region, Italy. The animals were housed in a tie-stall barn system typical for beef cattle production. The study involved three Marchigiana cows in the first trimester of pregnancy, all registered in the official Herd Book [10]. The animals had an average live weight of 756 ± 75 kg.

2.2. Morphometric Measurements

To evaluate the accuracy of the 3D reconstructions, five linear morphometric measurements were recorded. The measurements recorded are listed below and depicted in Figure 1:
  • Body Length (BL): distance from the shoulder tip—scapulohumeral joint—to the tip of the ischial tuberosity—the back of the croup.
  • Chest Height (CH): vertical distance from the highest to lowest point of the chest.
  • Chest Width (CW): frontal distance between the outermost points of the chest.
  • Rump Length (RL): distance from the coxal tuberosity—the tip of the hip—to the ischial tuberosity—the pin bone at the rear of the pelvis.
  • Wither Height (WH): vertical distance from the ground to the highest point of the wither.

2.3. Digital Tools’ Technical Specifications

A portable scanning system was implemented using an iPhone® 14 Pro Max. During image acquisition, the iPhone® 14 Pro Max plays a dual role: the LiDAR sensor provides essential data on volumetric structure and distances, whereas the RGB (Red, Green, Blue) camera captures color information of the animal’s coat. Integrating these two data streams enables the creation of a 3D model that faithfully reproduces each specimen’s authentic appearance, providing opportunities to examine parameters such as body conformation. Additionally, the handheld scanning procedure via smartphone is very intuitive and guided, as the user is alerted if the device needs to be repositioned for optimal scanning. The user can capture the cow from multiple angles, including environmental markers, to obtain a detailed 3D model. However, the resulting models must then be subjected to further processing and filtering to facilitate the advanced analyses required by scientific investigations. The adoption of LiDAR technology was based on its superior capability to deliver precise distance measurements and its independence from lighting conditions in varying environments, in contrast to image-based systems, which, while generally more cost-effective, exhibit lower reliability for high-precision applications [22].
The LiDAR sensor embedded in the iPhone® 14 Pro Max (Figure 2) can measure distances to objects up to approximately 5 m, operating at the photon level on nanosecond timescales. It is well-suited for indoor and outdoor use, enabling accurate depth perception and supporting precise 3D reconstruction.
Each cow was digitally reconstructed using the following 3D modeling applications: Recon-3D v1.9 [38], KIRI Engine v3.13 [39], and Luma AI v1.0 [40]. Recon-3D enables the use of LiDAR and photogrammetry technology, while KIRI Engine and Luma AI enable image processing using neural networks. Specifically, KIRI Engine uses the Neural Surface Reconstruction (NSR) [39] and 3D Gaussian Splatting (3DGS) [33] models, while Luma AI allows the use of the NeRF model [34].
All three applications can be used in free mode or through a paid license. Table 1 shows the main features of the three applications.
These iOS applications were selected for their simple and intuitive acquisition settings, with the possibility of varying the parameters in relation to the target. Moreover, a recent investigation on livestock imaging demonstrated that NeRF-based models can accurately recover volumetric traits of dairy cattle under field conditions, further supporting the inclusion of Luma AI and KIRI Engine in this study [41].
Recon-3D, built on the EveryPoint engine [42], simultaneously acquires LiDAR data, video streams (1920 × 1440 pixels), and device orientation/positional information derived from its integrated gyroscope and accelerometer. The ranges recorded by the LiDAR sensor are converted into a depth map, which is combined with individual video frames to produce colored 3D point clouds. Alternatively, users can opt to process the acquired dataset through digital photogrammetry principles [43]. The results can be exported in ASTM E57 or other formats. Furthermore, Recon-3D supports either on-device or cloud-based data processing, thus enabling rapid post-scan checks [44].
The Pro version of KIRI Engine was used because it offers enhanced features compared to the free version, ensuring an improved visual reconstruction via NSR and 3DGS of objects lacking distinctive features. This version captures images at up to 30 fps at 1080p resolution. Most rendering-based NSR methods adopt NeRF as a backbone, as it provides a foundational architecture for learning high-fidelity spatial representations from image sequences, enabling realistic geometry and appearance reconstruction [45,46,47,48], combined with a signed distance function [49]. In both models, the software operates in video capture mode, scanning objects rapidly, following principles analogous to those used by intraoral scanners. The output consisted of individual point clouds in .las format for both NeRF and NSR models, while the 3DGS model produced outputs in .obj format.
In parallel, Luma AI was evaluated for generating photorealistic 3D models from standard smartphone videos, relying exclusively on a reconstruction approach based on the NeRF model. In this configuration, the mobile device captures a sequence of images as the user moves around the subject, thereby gathering valuable information about the subject’s geometry and ambient luminance from multiple angles. The acquired images are subsequently uploaded to the Luma AI platform which, through its comprehensive processing pipeline, reconstructs a realistic, textured 3D mesh. Notably, the system integrates the captured lighting conditions into the reconstruction process, ensuring that the virtual model faithfully replicates the visual characteristics of the real environment [50]. The 3D models created with Luma AI are returned in .ply format.
To ensure the accuracy and scalability of the 3D models generated by the smartphone applications, checkerboard targets were positioned as ground control points.

2.4. Data Collection

The data were collected in the following order: (1) manual acquisition of the animals’ morphometric measurements using a conventional procedure; (2) digital acquisition of the animals’ shapes using the smartphone’s camera and sensors; (3) processing of the 3D point clouds and meshes of the animals; (4) extraction of morphometric measurements from the 3D point clouds and meshes of the animals; and (5) evaluation of the accuracy of the digital approach.

2.5. Manual Acquisition of Animals’ Morphometric Measurements Using Conventional Procedure

Morphometric measurements of live animals were taken manually using a standard measuring stick, in accordance with the procedures provided by ANABIC technical services [10]. The animals were manually restrained, and no sedatives were used. Each measurement was recorded for each cow three times by a trained operator, from the left side of the animal while it was standing on a hard and flat surface, assuming a natural position. To minimize the effect of errors, the measurements were always made by the same trained operator.

2.6. Digital Acquisition of Animals’ Shape Using Smartphone’s Camera and Sensors

A schematic representation was developed to illustrate the methodology used for generating the 3D models of animals, which were then used to digitally extract morphometric measurements (Figure 3).
Positioning and measuring control points (targets) before scanning the animals with the smartphone is a fundamental requirement for both geometric alignment and scale correction of the 3D model, as it provides a ground-truth reference for point clouds and meshes. The control points usually need to be placed both at ground level and atop vertically oriented wooden rods. In this study, separate targets were used, consisting of four checkerboard markers printed on a white sheet, as shown in Figure 4.
To scan the animals with the iPhone® 14 Pro Max, an operator led each cow to the designated test area one at a time, ensuring that the animal remained stationary during scanning to minimize motion artifacts. Following the scientific literature, the scanning process was carried out applying the “hand-held Personal Laser Scanning” (PLS) approach, maintaining an average distance of 1–2 m from the animal and tilting the camera about 30–45° relative to the horizontal plane to capture multiple angles and ensure a high level of detail [51]. Scanning operations were conducted in a service shed next to the barn by completing three laps around the animal, varying both the scanning height and the camera inclination to achieve comprehensive and complete coverage of the animal’s shape, including the back and lower abdomen (Figure 5).
The software applications update in real-time on the mobile screen, allowing the operator to promptly identify gaps or distortions during the scanning process and make corrective movements. This functionality enhances the success rate of capturing complete and accurate data, allowing for immediate verification of correct data acquisition. However, the main challenge during scanning is to complete the process as quickly as possible to avoid animal movement, which may, in turn, reduce the accuracy of the 3D reconstruction. For each animal, a scanning session was performed with each of the five application–technology combinations, following the same acquisition procedure and maintaining a consistent duration across replicates. On average, the scanning time per animal ranged from approximately 3 to 5 min, producing files ranging in size from 1 KB to 68,933 KB. Further details of the 3D models obtained from individual scans are provided in Table 2.

2.7. Processing of 3D Point Clouds and Meshes of the Animals

2.7.1. Model Scaling

To obtain reliable geometric measurements, the 3D model must be scaled using the ground control points. In this study, center-to-center distances among the four checkerboard markers were measured with a tape. For each reconstruction, the most visible marker pair was selected. The scale factor was computed as the ratio between the tape-measured distance and the corresponding distance measured in the point clouds and meshes [52]. This factor was then applied to the entire model in CloudCompare v2.13.1 using the Multiply/Scale function [53].
This workflow enables accurate and scaled 3D reconstruction, suitable for the digital detection of morphometric measurements, consistent with the principles of core 3D reconstruction.
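As an illustration of this scaling step, the following minimal sketch assumes the reconstruction has been loaded as an N×3 NumPy array and that the coordinates of one clearly visible marker pair have been picked manually; all values are hypothetical, and in the study itself this operation was performed with CloudCompare’s Multiply/Scale function.

```python
import numpy as np

def scale_point_cloud(points, marker_a, marker_b, tape_distance_cm):
    """Scale a cloud so that a marker pair matches its tape-measured distance.

    points           : (N, 3) array of raw reconstruction coordinates
    marker_a/b       : (3,) coordinates of two checkerboard centers in the cloud
    tape_distance_cm : center-to-center distance measured on site with a tape
    """
    model_distance = np.linalg.norm(np.asarray(marker_a) - np.asarray(marker_b))
    scale_factor = tape_distance_cm / model_distance  # measured / model ratio
    return points * scale_factor, scale_factor

# Hypothetical example: unscaled cloud and two manually picked marker centers
cloud = np.random.rand(10_000, 3)
scaled_cloud, k = scale_point_cloud(cloud, cloud[0], cloud[1], tape_distance_cm=50.0)
```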

2.7.2. Model Filtering

Subsequently, in CloudCompare, the aligned 3D models were manually filtered using the “Clipping” command, by identifying at least 3 target points. This procedure allowed for the removal of irrelevant areas of the surrounding environment, retaining only the 3D model of the specific animal under investigation. Examples of 3D point clouds and meshes from individual applications are shown in Figure 6, Figure 7 and Figure 8.
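The clipping itself was performed interactively in CloudCompare; purely as an illustration of the equivalent operation, the sketch below retains only the points inside an axis-aligned box assumed to enclose the animal (box limits are hypothetical, in cm).

```python
import numpy as np

def clip_to_box(points, box_min, box_max):
    """Keep only the points that fall inside an axis-aligned bounding box."""
    mask = np.all((points >= box_min) & (points <= box_max), axis=1)
    return points[mask]

# Hypothetical scaled scene (cm) and a box isolating the animal from its surroundings
scene = np.random.rand(10_000, 3) * 300.0
animal = clip_to_box(scene,
                     box_min=np.array([40.0, 20.0, 5.0]),
                     box_max=np.array([160.0, 280.0, 175.0]))
```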

2.8. Extraction of Morphometric Measurements from the 3D Point Clouds and Meshes of the Animals

After removing noise from the 3D models, detection of morphometric measurements on each digital cow reconstruction was performed using CloudCompare software, following a careful and operator-guided procedure.
Each digital measurement investigated was recorded three times by three operators. The cross-section tool provided by the software was used to identify the points to be measured (Figure 9). In this way, the dimensional measurements of the selected object were recorded, in cm, from the thickness of the cross-section box along the X, Y, and Z axes. In particular, the Y-axis provided the Body Length and Rump Length measurements, the X-axis the Chest Width (Figure 10), and the Z-axis the Wither Height and Chest Height measurements (Figure 11).
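As a sketch of this readout, assuming a cross-section of the animal has already been isolated as an N×3 array, the box thickness along each axis is simply the coordinate extent of the selected points (all values hypothetical):

```python
import numpy as np

def box_extents(section):
    """X, Y, Z extents (cm) of a selected cross-section, mirroring
    the box-thickness readout used in CloudCompare."""
    return section.max(axis=0) - section.min(axis=0)

# Hypothetical chest cross-section (cm)
chest = np.random.rand(500, 3) * np.array([60.0, 30.0, 90.0])
dx, dy, dz = box_extents(chest)
chest_width, chest_height = dx, dz  # X-axis -> Chest Width, Z-axis -> Chest Height
```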

2.9. Evaluation of the Accuracy of the Digital Approach

The validation and accuracy of the 3D animal reconstructions were evaluated by comparing morphometric values obtained from live animals (manual) with those extracted from the corresponding 3D reconstructions (digital) using relative error (r.e.) accuracy, the Pearson correlation coefficient (r), the Coefficient of Variation (CV), the Root Mean Square Error (RMSE), the Mean Absolute Percentage Error (MAPE), linear regression analysis, and the coefficient of determination (R²). Variations between manual and digital measurements were also tested using Bland–Altman plots.
For each body measurement investigated, both manual and digital, the average of the repetitions was calculated to allow comparisons, following the study by Gaudioso et al. [17].
Following Pérez-Ruiz et al. [54], the relative error was calculated by subtracting the digital measurement from the manual measurement, dividing the result by the manual measurement, and multiplying by 100. This percentage quantifies the deviation of the digital from the manual measurement.
$$\text{Relative error accuracy}\ (\%) = \frac{\text{Manual measurement} - \text{Digital measurement}}{\text{Manual measurement}} \times 100$$
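A one-line implementation of this formula, with hypothetical example values, could read:

```python
def relative_error_accuracy(manual_cm, digital_cm):
    """Relative error (%) of a digital measurement against the manual reference;
    negative values indicate that the digital measurement overestimates."""
    return (manual_cm - digital_cm) / manual_cm * 100

# Hypothetical Wither Height: manual 140.0 cm, digital 142.1 cm -> -1.5%
print(round(relative_error_accuracy(140.0, 142.1), 2))
```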
Pearson correlations were calculated between manual and digital measurements for the five body traits considered and among the five technologies.
A graphical representation was used to illustrate the average accuracy of relative errors for each body measurement evaluated across the tested technologies.
Repeatability was assessed as intra-operator variability by performing each measurement three times, within a short time interval, with the same measurement method, on each cow. The reproducibility of the scanning method was evaluated as inter-operator variability by three independent operators, who extracted the digital measurements three times from each 3D image. Coefficients of variation (CV) for repeatability and reproducibility were calculated from their respective means (μ) and standard deviations (σ) [24].
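As a sketch of this calculation (the sample standard deviation is assumed here; the replicate values are hypothetical):

```python
import numpy as np

def coefficient_of_variation(replicates):
    """CV (%) = sigma / mu * 100 over repeated measurements of the same trait."""
    x = np.asarray(replicates, dtype=float)
    return x.std(ddof=1) / x.mean() * 100  # ddof=1: sample standard deviation assumed

# Hypothetical: three repeated Body Length readings (cm) by one operator
print(round(coefficient_of_variation([165.0, 166.2, 165.6]), 2))  # ~0.36
```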
RMSE and MAPE were calculated to quantify the methods’ accuracy, while linear regression and the coefficient of determination were employed to assess the consistency and agreement between different measurements. Furthermore, Bland–Altman plots were produced to evaluate agreement between manual and digital measurements, reporting the mean bias and its 95% confidence intervals (CI) along with the limits of agreement.
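The analyses were run in R (see below); purely as a language-agnostic sketch of these agreement statistics, assuming paired manual and digital values for one trait (numbers hypothetical):

```python
import numpy as np

def agreement_stats(manual, digital):
    """RMSE, MAPE, and Bland-Altman bias with 95% limits of agreement (cm)."""
    m, d = np.asarray(manual, float), np.asarray(digital, float)
    rmse = np.sqrt(np.mean((m - d) ** 2))
    mape = np.mean(np.abs((m - d) / m)) * 100
    diff = m - d                              # Bland-Altman differences
    bias = diff.mean()
    loa = (bias - 1.96 * diff.std(ddof=1),    # lower limit of agreement
           bias + 1.96 * diff.std(ddof=1))    # upper limit of agreement
    return rmse, mape, bias, loa

# Hypothetical paired Body Length values (cm) for the three cows
print(agreement_stats([165.0, 158.3, 171.2], [167.4, 160.1, 173.0]))
```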
All statistical analyses were performed using RStudio software v2025.09.1+401 [55].

3. Results

The smartphone’s optical camera and LiDAR scanner were used to demonstrate their ability to acquire the 3D shape of Marchigiana cattle. The point clouds and meshes generated by the Recon-3D, KIRI Engine, and Luma AI applications enabled testing of a portable scanning system for reconstructing the 3D shapes of cattle using a LiDAR sensor, photogrammetry, and neural networks, namely the NSR, 3DGS, and NeRF models.
The results of the manual and digital measurements, for each animal and for each technology used, are shown in Table 3. The measurements are reported as the average of the repetitions performed, along with the relative error results, providing a preliminary quantification of the deviation between manual and digital measurements. Most digital measurements were generally higher than the manual ones. The smallest deviation was observed in Wither Height (0.09%) in cow 1 reconstructed using KIRI Engine 3DGS, while the largest deviation was observed in Chest Width (−39.00%) in cow 2 reconstructed using Recon-3D Photogrammetry. In general, however, the measurements obtained with the Recon-3D application deviated less than those from the KIRI Engine application, which showed larger deviations, particularly in the Chest Height, Chest Width, and Rump Length measurements.
In Table 4, the mean and standard deviation of the relative error accuracies across the three cows under investigation are presented for each morphometric measurement. Across all digital methods, Body Length and Wither Height showed the highest agreement between the Recon-3D Photogrammetry measurements and the manual ones. The lowest relative errors recorded among the application–technology combinations ranged from 0.34% (Luma AI NeRF) to −4.22% (KIRI Engine NSR), corresponding to the Rump Length and Wither Height measurements, respectively. In contrast, the highest relative errors obtained from each combination ranged from −7.92% (Luma AI NeRF) to −14.56% (Recon-3D Photogrammetry), corresponding to the Chest Height and Chest Width measurements, respectively, indicating overestimation by the digital measurements. The Pearson correlation coefficient (Table 4) was calculated to assess the strength and direction of the linear relationship between manual and digital measurements. High positive correlations were found for the Recon-3D LiDAR measurements (ranging from 0.67 to 0.96), except for Rump Length (0.12). In contrast, negative correlations were observed for the KIRI Engine 3DGS combination (ranging from −0.73 to −0.92), while the other technologies exhibited variable levels of correlation.
To provide a clearer visualization of the discrepancies in measurement accuracy, Figure 12 presents a comparison of the mean relative errors of each morphometric measurement obtained using the five tested application-technology combinations. A similar graphical comparison, showing the results grouped by application–technology combinations, is provided in Figure S1.
Repeatability and reproducibility results are presented in Table S1a–c. These analyses assessed measurement consistency within the same operator and between different operators. The repeatability of the manual measurements across the three cows, expressed as the Coefficient of Variation (CV), ranged from 0.00% (Chest Height in cow 2 and Body Length in cow 3) to 4.03% (Chest Width in cow 1) (Table S1a). For digital measurements obtained using the smartphone-based application and technology combinations, the CV ranged from 0.04% (Body Length with Recon-3D LiDAR by operator C) to 9.28% (Rump Length with Luma AI NeRF by operator A) (Table S1b). As expected, the reproducibility analysis demonstrated that manual measurements exhibited the highest accuracy and consistency (CV = 0.56–6.36%). Among the digital measurements, Recon-3D LiDAR exhibited the lowest variability, with CV values ranging from 1.36% to 8.73% across the five body traits evaluated (Table S1c).
Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE) were calculated to provide a more informative evaluation of the different digital approaches proposed (Table 5). The lowest and most consistent RMSE and MAPE values were observed for morphometric measurements obtained using the Recon-3D LiDAR combination (RMSE: 2.36–5.97 cm; MAPE: 2.61–6.23%), whereas higher RMSE and MAPE values emerged for measurements performed with KIRI Engine NSR (RMSE: 4.33–22.48 cm; MAPE: 4.41–14.98%).
Figure 13 shows the results of the linear regression analysis and the R² values obtained by comparing the manual and digital measurements for all tested combinations. All linear regression models showed high R² values close to 1, ranging from 0.9490 to 0.9921 for KIRI Engine NSR and Recon-3D LiDAR, respectively.
Finally, Bland–Altman plots, including the bias, 95% confidence intervals (CI), and limits of agreement, are presented in Figure S2. They represent the degree of agreement between manual and digital measurements across the five technologies tested, showing the following average differences: −0.05 cm, 2.52 cm, −10.24 cm, −12.15 cm, and 3.87 cm for Recon-3D LiDAR, Recon-3D Photogrammetry, KIRI Engine NSR, KIRI Engine 3DGS, and Luma AI NeRF, respectively.

4. Discussion

The present pilot trial aimed to assess the feasibility of a contactless approach for obtaining morphometric measurements in Marchigiana cattle by comparing LiDAR, photogrammetry, and AI models. The accuracy and reliability of the smartphone-based approach were evaluated through the degree of agreement between the traditional and digital approaches using multiple statistical indicators. These included relative error accuracy, Pearson’s correlation, CV, RMSE, MAPE, linear regression, R², and Bland–Altman analysis. The practical feasibility of handheld technology under field conditions, along with its accuracy and reliability, considering variability due to technological, biological, and user-dependent factors, is discussed in the following paragraphs.

4.1. Practical Feasibility of Handheld Technology Under Field Conditions

As widely accessible tools, smartphones offer the opportunity to distribute and utilize applications that can provide useful functionalities to a broad user base. The handheld nature of the smartphone avoids the necessity for specialized support, which was a prerequisite in previous studies [22,30,56]. The smartphone’s inherent capacity to connect to the internet eliminates the need for additional gateways or routers, promoting versatility and applicability of the proposed approach. Furthermore, recent technological advancements in 3D object reconstruction have enabled the integration of LiDAR, photogrammetry, and AI models directly into smartphones.
As described in Section 2, applying these technologies requires multiple scans of each animal to create a 3D reconstruction, which takes approximately 3 to 5 min per animal. While this duration is longer than manual measurements performed by a trained operator, the 3D model provides added value beyond simple linear features. Unlike traditional methods, 3D reconstructions allow the extraction of additional morphometric parameters, such as body surface area and volume [24]. The latter cannot be obtained through manual approaches, yet they can support advanced 3D image-based applications, including body condition scoring, growth monitoring, and predictive modeling for health and productivity [57,58]. Once the 3D model is acquired, it can be stored and examined whenever needed, without repeatedly handling the animals. This reduces cattle stress and improves data consistency over time.
However, the quality of smartphone-based 3D scans is not constant and may vary depending on the animal breed and the conditions under which data are collected. Animals with longer coats, different hair colors or patterns, or highly variable body conformations can hinder anatomical landmark detection and the reconstruction of smooth, accurate surfaces, thereby increasing noise in point clouds and meshes [59]. Environmental factors also play a critical role: poor, uneven or overly intense lighting, along with strong shadows, can degrade 3D image quality and interfere with reliable landmark identification. In intensive livestock production systems, where animals are usually kept in controlled environments, it is generally easier to standardize lighting, animal positioning and camera distance, which tends to improve reconstruction accuracy [29]. By contrast, in extensive systems, lighting conditions and animal movement are more variable, often leading to stronger shadowing and motion-related artifacts which lead to greater variability in morphometric measurements.
Despite the licensing requirements for some of the tools employed in this pilot trial, these applications substantially simplify the scanning process, even for non-expert users, ensuring photorealistic rendering suitable for a wide range of research applications. Therefore, the use of applications for 3D rendering of objects via smartphones makes this approach economical and user-friendly, also simplifying routine farm operations, reducing administrative burdens, and enhancing sustainability. This preliminary study, conducted on a small- to medium-sized farm rearing Marchigiana cattle, demonstrates how user-friendly technologies can translate innovation into tangible benefits. Although the smartphone model used in the study is relatively expensive, rapid technological advancements are expected to make such devices more widely accessible in the near future.

4.2. Accuracy and Reliability of Handheld Approach for Biological Data Acquisition

4.2.1. Technological Sources of Variability

The manual measurements were consistent with the corresponding biometric data reported by ANABIC for cattle of the same age group included in the present study [10]. In contrast, measurements based on the 3D reconstruction of animal shapes showed variable performance across the five technologies tested (Table 3). Generally, digital measurements were higher than manual ones, reflecting the trend observed in previous studies [24,31]. Among the traits analyzed, Body Length and Wither Height yielded the most accurate measurements from smartphone-based 3D scanning (Table 4), as these dimensions were less affected by animal movements and reconstruction noise, as shown in Figure 12.
The results of mean relative error accuracy (Figure S1) suggested that Recon-3D LiDAR and Photogrammetry outperformed the other technologies, except for a large deviation in Chest Width due to the Recon-3D Photogrammetry measurement of cow 2. The relative error accuracy results obtained in this preliminary study are consistent with those reported by Pérez-Ruiz et al. [54], who reported deviations ranging from −2.56% to 10.71% for morphometric measurements such as Body Length, Rump Length, and Wither Height. These findings suggest that the handheld scanning system used in this study yields results comparable to those obtained in other LiDAR-based studies, even though many authors employed fixed-camera configurations. On the other hand, lower mean relative errors were documented by Huang et al. [22] and Runchay et al. [60], who reported deviations of less than 2% for the same morphometric measurements. Peng et al. [16] documented mean relative errors ranging from 1.28% to 6.57%, proposing an automated approach using a depth camera to measure cattle dimensions. These discrepancies may be due to differences in acquisition protocols or data processing compared to those used in the proposed contactless approach.
Within the scope of this initial investigation, three complementary metrics, Pearson’s correlation coefficient (r), RMSE, and MAPE, were jointly used to ensure robust and comparable model evaluation. While r is commonly applied in computational biology for model selection [61], it does not account for prediction bias. RMSE, widely adopted in image-based prediction of cattle body weight [62,63,64], quantifies prediction error but is scale-dependent. Conversely, MAPE provides a scale-independent and easily interpretable measure of accuracy [65,66]. In this preliminary study, the best performance in terms of MAPE was achieved using the Recon-3D application, with most values below 5%, supporting the reliability of the method (Table 5). In contrast, the NSR, 3DGS, and NeRF algorithms produced MAPE values exceeding this threshold, indicating lower accuracy in morphometric measurements. According to the literature, MAPE values below 5.2% are generally considered acceptable for this type of analysis [62]. Therefore, although Recon-3D results were consistent with the standards reported in previous studies, the higher error rates observed with KIRI Engine and Luma AI applications highlight the need for further optimization, particularly concerning motion sensitivity and model reconstruction quality.
The linear regression analysis and corresponding R² values demonstrated strong consistency between the variables considered (Figure 13). The angular coefficients indicated an almost proportional relationship between the variables, while the intercepts suggested slight variability in line origin. Most R² values ranged from 0.9806 to 0.9921 across technologies, except for KIRI Engine NSR, which showed a slightly lower R² of 0.9490. These results indicate that over 94% of the observed variability was explained by the regression models. Compared with the literature, the present findings are consistent with those reported by Matsuura et al. [67] and Pérez-Ruiz et al. [54], who reported R² values of 0.999 and 0.9723, respectively. This supports the potential of smartphone-based scanning approaches for accurately detecting morphometric measurements. Nonetheless, the slightly higher performance observed in this study may also reflect the influence of the limited sample size.
The Bland–Altman analysis confirmed the agreement between traditional and digital approaches (Figure S2). For all tested technologies, most data points fell within the limits of agreement, and no systematic bias was observed. These results indicate a good level of agreement between methods and support the reliability of the evaluated digital reconstruction techniques. The obtained plots were comparable to those reported by Matsuura et al. [67], further supporting the consistency of the present findings.
Overall, the discrepancy between manual and digital measurements observed in this pilot study can be broadly attributed to (1) algorithmic limitations, including reconstruction noise, point-cloud artifacts, and a high sensitivity to animal motion; for example, the NSR and 3DGS methods showed higher errors because of less effective reconstruction under animal movement, while NeRF-based models reduced motion-related artifacts but still underperformed compared to LiDAR and photogrammetry; and (2) field application limitations, such as the difficulty of keeping animals stationary during scanning, variable lighting and shadows affecting the quality of the 3D reconstructions, and coat variability affecting landmark detection.
In practical farm environments, despite the overall feasibility of the tested tools, the application of 3D reconstruction methods faces several challenges arising from complex lighting conditions, the dynamic behavior of beef cattle, and constraints during data acquisition.

4.2.2. Biological Sources of Variability

In most cases, the digital approaches tested produced larger measurements than those obtained through traditional methods. This discrepancy may be attributed to noise and artifacts in the point clouds and meshes generated from smartphone scans. In particular, these distortions often create overly sharp or low-contrast geometries with irregular edges, thereby complicating landmark identification. Although a filtering phase was applied to the 3D models, they remained affected by animal movement during scanning, resulting in noise being reconstructed directly on the 3D animal shapes. This was particularly evident in the Chest measurements across most tested combinations and in the cow 2 reconstructions obtained using the KIRI Engine application. Noise and artifacts were also the main sources of the lower performance of the KIRI Engine NSR method, suggesting that this technology faced greater reconstruction challenges under the experimental conditions. Similar issues have been reported in other studies using point cloud reconstructions [31,54,60], suggesting that they may represent an intrinsic limitation of the technique. Conversely, Jing et al. [41] concluded that a NeRF-based 3D data acquisition method effectively reduces the interference caused by livestock movement during data collection. In this preliminary study, the 3D reconstructions obtained by NeRF were the best among the algorithm-based approaches, although they still presented some issues compared to those obtained with LiDAR and photogrammetry.

4.3. User-Dependent Sources of Error

The repeatability analysis showed low intra-operator variability, thus confirming the consistency of the measurement protocol (Table S1a,b). Each operator achieved CVs below 4%, with few exceptions, indicating promising results pending confirmation by the reproducibility analysis, as suggested by Fisher et al. [68]. The repeatability results were consistent with those reported by Le Cozler et al. [24]. In contrast, the reproducibility analysis revealed greater inter-operator variability compared to the values reported by Yang et al. [31], particularly for the KIRI Engine NSR results (Table S1c). The lower reproducibility observed in this study may be attributed to the variability introduced by manual landmark placement, especially in less defined anatomical regions. These findings suggest that the tested approach was accurate and reliable under controlled conditions but sensitive to external factors such as operator differences, environmental conditions, and animal variability during the scanning procedure.

5. Challenges and Future Directions

The application of 3D reconstruction technologies in livestock farming still faces practical limitations that hinder routine adoption. Variable lighting, heterogeneous backgrounds, and the natural movement of cattle can introduce artifacts and reduce model accuracy, while stable postures are required to obtain reliable measurements. Current applications also involve substantial post-processing, registration, denoising, and mesh cleaning, which increase the time and expertise needed for data analysis. Nevertheless, ongoing developments are expected to markedly improve the applicability of these tools. Future advances will reduce processing time through faster sensors, more efficient point-cloud optimization algorithms, and expanded on-device computing capabilities. Improvements in real-time registration, noise reduction, and mesh reconstruction will limit manual corrections, whereas automated landmark detection via machine-learning algorithms will shorten post-processing. Together, these innovations will enable a near real-time generation of accurate 3D models, facilitating routine morphological assessments directly on farms. Moreover, the integration of these applications into long-term automated systems could be implemented in livestock farming facilities allowing the creation of a permanent digital archive of animal morphology.
LiDAR and NeRF-based methods offer new opportunities for integrating morphological data into precision livestock systems. High-resolution 3D models also have strong potential for objective Body Condition Scoring estimation, providing consistent indicators of welfare, nutritional status, and early signs of health disorders.
In breeding programs, 3D phenotypes could be incorporated into genomic evaluations to refine descriptors of conformation, thereby improving selection decisions and herd productivity. Furthermore, integrating 3D data into digital platforms and precision-agriculture expert systems may enhance decision-making, space optimization, and proactive herd management.

6. Conclusions

This preliminary study offers the first indication that affordable smartphones equipped with optical cameras and LiDAR sensors may represent a promising approach for acquiring digital 3D models of beef cattle to measure body dimensions, although further validation is required. Among the tested applications, the Recon-3D LiDAR-based method provided the most accurate results, with measurements closely matching those obtained manually. Photogrammetry showed moderate accuracy, while the AI-based models (NSR, 3DGS, NeRF) require further refinement to improve measurement accuracy. Overall, accuracy was influenced by animal posture and behavior during scanning, highlighting the need for improved acquisition protocols suitable for on-farm conditions. It is important to note that the extremely small sample size (three animals) represents a major limitation, and all findings should be interpreted with caution, as they may be substantially influenced by this constraint. Future validation with larger sample sizes and across multiple cattle breeds, age classes, and sex groups will be necessary to strengthen the robustness of these findings and to build a more comprehensive dataset covering a wider range of animal postures.
Despite this limitation, the tested technologies show promise as a cost-effective, contactless, and accessible solution for livestock morphological assessment, making them well-suited for small- and medium-sized farms. Integrating 3D morphological reconstruction technologies into farm workflows could support the monitoring of growth, production, and finishing stages, thereby optimizing animal nutrition, health, and welfare, as well as overall farm efficiency. Furthermore, these tools could play a key role in genetic selection, enabling the identification of superior breeding animals and allowing for an objective assessment of body conformation against established breed standards.
The adoption of low-cost portable 3D scanning technologies integrated with sensor-based imaging solutions aligns with the EU’s strategic goal of democratizing access to digital agricultural technologies, as outlined through Horizon Europe and the Common Agricultural Policy. Furthermore, it supports the integration of user-friendly tools into Precision Livestock Farming practices, thereby promoting more sustainable and competitive livestock production systems.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/agriculture15242567/s1, Figure S1. Comparison of the mean relative error (%) of the five morphometric measurements, grouped by application–technology combinations. Figure S2. Bland–Altman plots showing bias, 95% confidence intervals, and limits of agreement between manual and digital measurements across the five technologies. Table S1. Repeatability of manual measurements across the three cows for the five morphometric measurements (a); repeatability of digital measurements obtained using the smartphone-based application and technology combinations across the three cows and operators (b); and reproducibility among the manual and digital measurements for the five morphometric measurements (c).

Author Contributions

Conceptualization, M.F.T., M.P., E.M. and S.C. (Simone Ceccobelli); methodology, S.M., S.C. (Stefano Chiappini), E.M. and S.C. (Simone Ceccobelli); software, S.M., S.C. (Stefano Chiappini) and M.A.M.C.; formal analysis, S.M., S.C. (Stefano Chiappini) and M.A.M.C.; investigation, S.M., S.C. (Stefano Chiappini), M.A.M.C. and M.F.T.; resources, M.P. and S.C. (Simone Ceccobelli); data curation, S.M., S.C. (Stefano Chiappini), M.A.M.C. and E.M.; writing—original draft preparation, S.M., S.C. (Stefano Chiappini), M.A.M.C. and M.F.T.; writing—review and editing, M.F.T., G.E., M.P., E.M. and S.C. (Simone Ceccobelli); supervision, M.F.T., M.P., E.M. and S.C. (Simone Ceccobelli); funding acquisition, M.P., E.M. and S.C. (Simone Ceccobelli). All authors have read and agreed to the published version of the manuscript.

Funding

This study was carried out within (i) the Agritech National Research Center, supported by funding from the European Union Next-GenerationEU (PIANO NAZIONALE DI RIPRESA E RESILIENZA (PNRR)—MISSIONE 4 COMPONENTE 2, INVESTIMENTO 1.4—D.D. 1032 17/06/2022, CN00000022), and (ii) the “Bando Habitat 2022—Gestione sostenibile delle praterie secondarie per la conservazione della biodiversità vegetale e animale e la valorizzazione dei servizi ecosistemici connessi”, funded by Fondazione Cariverona. This manuscript reflects only the authors’ views and opinions, neither the European Union nor the European Commission can be considered responsible for them.

Institutional Review Board Statement

The experimental procedures were designed and reviewed by the Animal Experimentation Committee of the Università Politecnica delle Marche (Organization for Animal Welfare—OPBA, 3 April 2024). Animal care and handling followed Italian regulations on the protection of animals used for experimental and other scientific purposes (D.M. 116/1992), as well as European Community regulations (O.J. of E.C. L 358/1, 12/18/1986), and were fully compliant with the requirements of the Italian Legislative Decree No. 26/2014 and subsequent guidelines issued by the Italian Ministry of Health on 16 March 2015. All animal experiments were conducted in compliance with EU Directive 2010/63 and the ARRIVE guidelines.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The authors are grateful to “Azienda Agricola Martina Salciccia” and “Azienda Agricola Ferretti” for their invaluable assistance and for granting access to the facilities.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
3D: Three-Dimensional
3DGS: Three-Dimensional Gaussian Splatting
AI: Artificial Intelligence
AKIS: Agricultural Knowledge and Innovation System
ANABIC: National Association of Italian Beef Cattle Breeders
BL: Body Length
CH: Chest Height
CW: Chest Width
DL: Deep Learning
LiDAR: Light Detection and Ranging
NeRF: Neural Radiance Fields
NSR: Neural Surface Reconstruction
PLF: Precision Livestock Farming
PLS: Personal Laser Scanning
RGB: Red, Green, Blue
RL: Rump Length
SCAR: Standing Committee on Agricultural Research
WH: Wither Height

References

  1. Terry, S.A.; Basarab, J.A.; Guan, L.L.; McAllister, T.A. Strategies to improve the efficiency of beef cattle production. Can. J. Anim. Sci. 2020, 101, 1. [Google Scholar] [CrossRef]
  2. Henchion, M.; Hayes, M.; Mullen, A.M.; Fenelon, M.; Tiwari, B. Future protein supply and demand: Strategies and factors influencing a sustainable equilibrium. Foods 2017, 6, 53. [Google Scholar] [CrossRef]
  3. Neethirajan, S. The role of sensors, big data and machine learning in modern animal farming. Sens. Bio-Sens. Res. 2020, 29, 100367. [Google Scholar] [CrossRef]
  4. Alvarez, J.R.; Arroqui, M.; Mangudo, P.; Toloza, J.; Jatip, D.; Rodríguez, J.M.; Teyseyre, A.; Sanz, C.; Zunino, A.; Machado, C. Body condition estimation on cows from depth images using Convolutional Neural Networks. Comput. Electron. Agric. 2018, 155, 12–22. [Google Scholar] [CrossRef]
  5. European Commission. Speech by President von der Leyen at the European Agri-Food Days via Video Message. 2024. Available online: https://ec.europa.eu/commission/presscorner/detail/en/speech_24_6323 (accessed on 6 September 2025).
  6. Kountios, G.; Kanakaris, S.; Moulogianni, C.; Bournaris, T. Strengthening AKIS for Sustainable Agricultural Features: Insights and Innovations from the European Union: A Literature Review. Sustainability 2024, 16, 7068. [Google Scholar] [CrossRef]
  7. European Commission. Fostering an Effective and Integrated AKIS in Member States. 2024. Available online: https://eu-cap-network.ec.europa.eu/sites/default/files/publications/2024-12/eu-cap-network-event-report-seminar-akis.pdf (accessed on 6 September 2025).
  8. European Commission. EU Agri-Food Days 2024. 2024. Available online: https://agriculture.ec.europa.eu/overview-vision-agriculture-food/digitalisation_en (accessed on 6 September 2025).
  9. Colombi, D.; Rovelli, G.; Luigi-Sierra, M.G.; Ceccobelli, S.; Guan, D.; Perini, F.; Sbarra, F.; Quaglia, A.; Sarti, F.M.; Pasquini, M.; et al. Population structure and identification of genomic regions associated with productive traits in five Italian beef cattle breeds. Sci. Rep. 2024, 14, 8529. [Google Scholar] [CrossRef]
  10. ANABIC. Associazione Nazionale Allevatori Bovini Italiani da Carne. Available online: https://www.anabic.it (accessed on 6 September 2025).
  11. Kenny, D.A.; Fitzsimons, C.; Waters, S.M.; McGee, M. Invited review: Improving feed efficiency of beef cattle—The current state of the art and future challenges. Animal 2018, 12, 1815–1826. [Google Scholar] [CrossRef] [PubMed]
  12. Dingwell, R.T.; Wallace, M.M.; McLaren, C.J.; Leslie, C.F.; Leslie, K.E. An evaluation of two indirect methods of estimating body weight in Holstein calves and heifers. J. Dairy Sci. 2006, 89, 3992–3998. [Google Scholar] [CrossRef]
  13. Ouédraogo, D.; Soudré, A.; Ouédraogo-Koné, S.; Zoma, B.L.; Yougbaré, B.; Khayatzadeh, N.; Burger, P.A.; Mészáros, G.; Traoré, A.; Mwai, O.A.; et al. Breeding objectives and practices in three local cattle breed production systems in Burkina Faso with implication for the design of breeding programs. Livest. Sci. 2020, 232, 103910. [Google Scholar] [CrossRef]
  14. Mehtiö, T.; Pitkänen, T.; Leino, A.M.; Mäntysaari, E.A.; Kempe, R.; Negussie, E.; Lidauer, M.H. Genetic analyses of metabolic body weight, carcass weight and body conformation traits in Nordic dairy cattle. Animal 2021, 15, 100398. [Google Scholar] [CrossRef]
  15. Petherick, J.C.; Doogan, V.J.; Venus, B.K.; Holroyd, R.G.; Olsson, P. Quality of handling and holding yard environment, and beef cattle temperament: 2, Consequences for stress and productivity. Appl. Anim. Behav. Sci. 2009, 120, 28–38. [Google Scholar] [CrossRef]
  16. Peng, C.; Cao, S.; Li, S.; Bai, T.; Zhao, Z.; Sun, W. Automated measurement of cattle dimensions using improved keypoint detection combined with unilateral depth imaging. Animals 2024, 14, 2453. [Google Scholar] [CrossRef]
  17. Gaudioso, V.; Sanz-Ablanedo, E.; Lomillos, J.M.; Alonso, M.E.; Javares-Morillo, L.; Rodríguez, P. “Photozoometer”: A new photogrammetric system for obtaining morphometric measurements of elusive animals. Livest. Sci. 2014, 165, 147–156. [Google Scholar] [CrossRef]
  18. Cominotte, A.; Fernandes, A.F.A.; Dorea, J.R.R.; Rosa, G.J.M.; Ladeira, M.M.; Van Cleef, E.; Pereira, G.L.; Baldassini, W.A.; Neto, O.R.M. Automated computer vision system to predict body weight and average daily gain in beef cattle during growing and finishing phases. Livest. Sci. 2020, 232, 103904. [Google Scholar] [CrossRef]
  19. Imaz, J.A.; Garcia, S.; González, L.A. Using automated in-paddock weighing to evaluate the impact of intervals between liveweight measures on growth rate calculations in grazing beef cattle. Comput. Electron. Agric. 2020, 178, 105729. [Google Scholar] [CrossRef]
  20. Qiao, Y.; Kong, H.; Clark, C.; Lomax, S.; Su, D.; Eiffert, S.; Sukkarieh, S. Intelligent perception for cattle monitoring: A review for cattle identification, body condition score evaluation, and weight estimation. Comput. Electron. Agric. 2021, 185, 106143. [Google Scholar] [CrossRef]
  21. Thapar, G.; Biswas, T.K.; Bhushan, B.; Naskar, S.; Kumar, A.; Dandapat, P.; Rokhade, J. Accurate estimation of body weight of pigs through smartphone image measurement app. Smart Agric. Technol. 2023, 4, 100194. [Google Scholar] [CrossRef]
22. Huang, L.; Li, S.; Zhu, A.; Fan, X.; Zhang, C.; Wang, H. Non-contact body measurement for Qinchuan cattle with LiDAR sensor. Sensors 2018, 18, 3014. [Google Scholar] [CrossRef]
23. Huang, L.; Guo, H.; Rao, Q.; Hou, Z.; Li, S.; Qiu, S.; Fan, X.; Wang, H. Body dimension measurements of Qinchuan cattle with transfer learning from LiDAR sensing. Sensors 2019, 19, 5046. [Google Scholar] [CrossRef] [PubMed]
  24. Le Cozler, Y.; Allain, C.; Caillot, A.; Delouard, J.M.; Delattre, L.; Luginbuhl, T.; Faverdin, P. High-precision scanning system for complete 3D cow body shape imaging and analysis of morphological traits. Comput. Electron. Agric. 2019, 157, 447–453. [Google Scholar] [CrossRef]
  25. Nilchuen, P.; Yaigate, T.; Sumon, W. Body measurements of beef cows by using mobile phone application and prediction of body weight with regression model. Songklanakarin J. Sci. Technol. 2021, 43, 1635–1640. [Google Scholar] [CrossRef]
  26. Li, J.; Ma, W.; Li, Q.; Zhao, C.; Tulpan, D.; Yang, S.; Ding, L.; Gao, R.; Yu, L.; Wang, Z. Multi-view real-time acquisition and 3D reconstruction of point clouds for beef cattle. Comput. Electron. Agric. 2022, 197, 106987. [Google Scholar] [CrossRef]
  27. Bao, Y.; Lu, H.; Wu, J.; Lei, J.; Zhang, J.; Luo, X.; Guo, H. Rapid and Automated Body Measurement of Cattle Based on Statistical Shape Model. ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci. 2023, 10, 541–546. [Google Scholar] [CrossRef]
  28. Lu, H.; Zhang, J.; Yuan, X.; Lv, J.; Zeng, Z.; Guo, H.; Ruchay, A. Automatic coarse-to-fine method for cattle body measurement based on improved GCN and 3D parametric model. Comput. Electron. Agric. 2025, 231, 110017. [Google Scholar] [CrossRef]
  29. Yukun, S.; Pengju, H.; Yujie, W.; Ziqi, C.; Yang, L.; Baisheng, D.; Runze, L.; Yonggen, Z. Automatic monitoring system for individual dairy cows based on a deep learning framework that provides identification via body parts and estimation of body condition score. J. Dairy Sci. 2019, 102, 10140–10151. [Google Scholar] [CrossRef] [PubMed]
  30. Zhao, K.; Zhang, M.; Shen, W.; Liu, X.; Ji, J.; Dai, B.; Zhang, R. Automatic body condition scoring for dairy cows based on efficient net and convex hull features of point clouds. Comput. Electron. Agric. 2023, 205, 107588. [Google Scholar] [CrossRef]
  31. Yang, G.; Xu, X.; Song, L.; Zhang, Q.; Duan, Y.; Song, H. Automated measurement of dairy cows body size via 3D point cloud data analysis. Comput. Electron. Agric. 2022, 200, 107218. [Google Scholar] [CrossRef]
  32. Hou, Z.; Huang, L.; Zhang, Q.; Miao, Y. Body weight estimation of beef cattle with 3D deep learning model: PointNet++. Comput. Electron. Agric. 2023, 213, 108184. [Google Scholar] [CrossRef]
  33. Kerbl, B.; Kopanas, G.; Leimkühler, T.; Drettakis, G. 3D Gaussian Splatting for Real-Time Radiance Field Rendering. ACM Trans. Graph. 2023, 42, 1–14. [Google Scholar] [CrossRef]
34. Mildenhall, B.; Srinivasan, P.P.; Tancik, M.; Barron, J.T.; Ramamoorthi, R.; Ng, R. NeRF: Representing scenes as neural radiance fields for view synthesis. Commun. ACM 2021, 65, 99–106. [Google Scholar] [CrossRef]
35. Nilchuen, P.; Suwanasopee, T.; Koonawootrittriron, S. Integrating deep learning and mobile imaging for assessment of automated conformational indices and weight prediction in Brahman cattle. Smart Agric. Technol. 2025, 12, 101079. [Google Scholar] [CrossRef]
  36. Luo, X.; Hu, Y.; Gao, Z.; Guo, H.; Su, Y. Automated measurement of livestock body based on pose normalisation using statistical shape model. Biosyst. Eng. 2023, 227, 36–51. [Google Scholar] [CrossRef]
  37. Chen, X.; Guo, X.; Li, Y.; Liu, C. A Lightweight Automatic Cattle Body Measurement Method Based on Keypoint Detection. Symmetry 2025, 17, 1926. [Google Scholar] [CrossRef]
  38. Recon-3d. 2023. Available online: https://www.recon-3d.com (accessed on 6 February 2025).
  39. KIRI Engine. 2023. Available online: https://www.kiriengine.com (accessed on 6 February 2025).
  40. Luma AI Inc. 2023. Available online: https://www.lumalabs.ai (accessed on 6 February 2025).
  41. Jing, X.; Wu, T.; Shen, P.; Chen, Z.; Jia, H.; Song, H. In situ volume measurement of dairy cattle via neural radiance fields-based 3D reconstruction. Biosyst. Eng. 2025, 250, 105–116. [Google Scholar] [CrossRef]
  42. Chase, C.E.; Liscio, E. Validation of Recon-3D, iPhone LiDAR for bullet trajectory documentation. Forensic Sci. Int. 2023, 350, 111787. [Google Scholar] [CrossRef]
  43. Tavani, S.; Billi, A.; Corradetti, A.; Mercuri, M.; Bosman, A.; Cuffaro, M.; Seers, T.; Carminati, E. Smartphone assisted fieldwork: Towards the digital transition of geoscience fieldwork using LiDAR-equipped iPhones. Earth Sci. Rev. 2022, 227, 103969. [Google Scholar] [CrossRef]
  44. Kottner, S.; Thali, M.J.; Gascho, D. Using the iPhone’s LiDAR technology to capture 3D forensic data at crime and crash scenes. Forensic Imag. 2023, 32, 200535. [Google Scholar] [CrossRef]
45. Wang, P.; Liu, L.; Liu, Y.; Theobalt, C.; Komura, T.; Wang, W. NeuS: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. Adv. Neural Inf. Process. Syst. 2021, 34, 27171–27183. [Google Scholar] [CrossRef]
46. Xu, Q.; Xu, Z.; Philip, J.; Bi, S.; Shu, Z.; Sunkavalli, K.; Neumann, U. Point-NeRF: Point-based neural radiance fields. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 21–24 June 2022; pp. 5438–5448. [Google Scholar] [CrossRef]
  47. Chiappini, S.; Marcheggiani, E.; Pierdicca, R.; Choudhury, M.A. Assessment of 3D Models of Rural Buildings Using UAV Images: A Comparison of NeRF, GS and MVS-SFM Methods. In Proceedings of the Biosystems Engineering Promoting Resilience to Climate Change-AIIA 2024-Mid-Term Conference, Padova, Italy, 17–19 June 2024; Springer Nature: Cham, Switzerland, 2024; pp. 1190–1197. [Google Scholar] [CrossRef]
  48. Wang, Y.; Zhou, K.; Zhang, W.; Xiao, C. MegaSurf: Scalable Large Scene Neural Surface Reconstruction. In Proceedings of the 32nd ACM International Conference on Multimedia, Melbourne, Australia, 28 October–1 November 2024; pp. 6414–6423. [Google Scholar] [CrossRef]
  49. Fu, Q.; Xu, Q.; Ong, Y.S.; Tao, W. Geo-neus: Geometry-consistent neural implicit surfaces learning for multi-view reconstruction. Adv. Neural Inf. Process. Syst. 2022, 35, 3403–3416. [Google Scholar] [CrossRef]
  50. Sato, Y.; Yaguchi, Y. RapidSim: Enhancing Robotic Simulation with Photorealistic 3D Environments via Smartphone-Captured NeRF and UE5 Integration. In Proceedings of the International Conference on Image Processing and Robotics (ICIPRoB), Colombo, Sri Lanka, 9–10 March 2024; pp. 1–6. [Google Scholar] [CrossRef]
  51. Di Stefano, F.; Chiappini, S.; Gorreja, A.; Balestra, M.; Pierdicca, R. Mobile 3D scan LiDAR: A literature review. Geomat. Nat. Hazards Risk 2021, 12, 2387–2429. [Google Scholar] [CrossRef]
  52. Saif, W.; Alshibani, A. Smartphone-Based Photogrammetry Assessment in Comparison with a Compact Camera for Construction Management Applications. Appl. Sci. 2022, 12, 1053. [Google Scholar] [CrossRef]
  53. CloudCompare. CloudCompare V2.13.1. 2024. Available online: https://www.cloudcompare.org/release/notes/20240320/ (accessed on 6 September 2025).
  54. Pérez-Ruiz, M.; Tarrat-Martín, D.; Sánchez-Guerrero, M.J.; Valera, M. Advances in horse morphometric measurements using LiDAR. Comput. Electron. Agric. 2020, 174, 105510. [Google Scholar] [CrossRef]
55. RStudio. RStudio v2025.09.1+401. 2025. Available online: https://dailies.rstudio.com/version/2025.09.1+401 (accessed on 6 September 2025).
  56. Dang, C.G.; Lee, S.S.; Alam, M.; Lee, S.M.; Park, M.N.; Seong, H.S.; Han, S.; Nguyen, H.P.; Baek, M.K.; Lee, J.G.; et al. Korean Cattle 3D Reconstruction from Multi-View 3D-Camera System in Real Environment. Sensors 2024, 24, 427. [Google Scholar] [CrossRef] [PubMed]
  57. Summerfield, G.I.; De Freitas, A.; van Marle-Koster, E.; Myburgh, H.C. Automated cow body condition scoring using multiple 3D cameras and convolutional neural networks. Sensors 2023, 23, 9051. [Google Scholar] [CrossRef] [PubMed]
  58. Xiong, Y.; Condotta, I.C.; Musgrave, J.A.; Brown-Brandl, T.M.; Mulliniks, J.T. Estimating body weight and body condition score of mature beef cows using depth images. Transl. Anim. Sci. 2023, 7, txad085. [Google Scholar] [CrossRef]
  59. Sun, Y.; Li, Q.; Ma, W.; Li, M.; Torre, A.D.L.; Yang, S.X.; Zhao, C. A Multi-View Real-Time Approach for Rapid Point Cloud Acquisition and Reconstruction in Goats. Agriculture 2024, 14, 1785. [Google Scholar] [CrossRef]
  60. Ruchay, A.; Kober, V.; Dorofeev, K.; Kolpakov, V.; Miroshnikov, S. Accurate body measurement of live cattle using three depth cameras and non-rigid 3-D shape recovery. Comput. Electron. Agric. 2020, 179, 105821. [Google Scholar] [CrossRef]
  61. González-Recio, O.; Rosa, G.J.M.; Gianola, D. Machine learning methods and predictive ability metrics for genome-wide prediction of complex traits. Livest. Sci. 2014, 166, 217–231. [Google Scholar] [CrossRef]
  62. Song, X.; Bokkers, E.A.M.; van der Tol, P.P.J.; Koerkamp, P.W.G.G.; van Mourik, S. Automated body weight prediction of dairy cows using 3-dimensional vision. J. Dairy Sci. 2018, 101, 4448–4459. [Google Scholar] [CrossRef]
  63. Jang, D.H.; Kim, C.; Ko, Y.G.; Kim, H.Y. Estimation of body weight for Korean cattle using three-dimensional image. J. Biosyst. Eng. 2020, 45, 325–332. [Google Scholar] [CrossRef]
  64. Weber, V.A.M.; de Lima, W.F.; da Silva, O.A.; Astolfi, G.; Menezes, G.V.; de Andrade Porto, J.V.; Rezende, F.P.C.; Moraes, P.H.; Matsubara, E.T.; Mateus, R.; et al. Cattle weight estimation using active contour models and regression trees Bagging. Comput. Electron. Agric. 2020, 179, 105804. [Google Scholar] [CrossRef]
  65. Byrne, R.F. Beyond traditional time-series: Using demand sensing to improve forecasts in volatile times. J. Bus. Forecast. 2012, 31, 13–19. [Google Scholar]
  66. Kim, S.; Kim, H. A new metric of absolute percentage error for intermittent demand forecasts. Int. J. Forecast. 2016, 32, 669–679. [Google Scholar] [CrossRef]
  67. Matsuura, A.; Torii, S.; Ojima, Y.; Kiku, Y. 3D imaging and body measurement of riding horses using four scanners simultaneously. J. Equine Sci. 2024, 35, 1–7. [Google Scholar] [CrossRef] [PubMed]
  68. Fischer, A.; Luginbuhl, T.; Delattre, L.; Delouard, J.M.; Faverdin, P. Rear shape in 3 dimensions summarized by principal component analysis is a good predictor of body condition score in Holstein dairy cows. J. Dairy Sci. 2015, 98, 4465–4476. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Graphical depiction of the recorded morphometric measurements in a Marchigiana cow: Body Length (1), Chest Height (2), Chest Width (3), Rump Length (4), and Wither Height (5).
Figure 2. (A) iPhone® 14 Pro Max, rear view; (B) Rear camera cluster equipped with an advanced 48 MP camera system that features three distinct cameras: a 3× telephoto, a wide, and an ultra-wide camera alongside a flash. In this cluster, the added LiDAR camera emits a pulsed infrared pattern that appears to be made up of a 12 × 12 dot matrix, which is repeated at four different positions.
Figure 3. Scanning workflow and applications used to obtain the 3D models from which cattle body dimensions were derived.
Figure 4. Details of the checkerboard markers distributed in the acquisition area (red circles).
Figure 5. Personal Laser Scanning performed around the animal, starting near the head and completing a full 360° turn back to the starting position.
Figure 6. Comparison of 3D models generated using two different reconstruction methods with the Recon-3D application: (A) 3D model produced using LiDAR-based scanning (Recon-3D LiDAR); (B) 3D model produced using Photogrammetry-based scanning (Recon-3D Photogrammetry).
Figure 7. Comparison of 3D models generated using two different reconstruction methods with the KIRI Engine application: (A) 3D model produced using the Neural Surface Reconstruction-based scanning (KIRI Engine NSR); (B) 3D model produced using the 3D Gaussian Splatting-based scanning (KIRI Engine 3DGS).
Figure 8. The 3D model reconstruction generated using Luma AI application with the Neural Radiance Fields-based scanning (Luma AI NeRF).
Figure 9. Cross-section view used for the extraction of the morphometric measurements in the digitally reconstructed Marchigiana cow.
Figure 10. Body Length (Y axis) (A), Rump Length (Y axis) (B), and Chest Width (X axis) (C) measurements in the digitally reconstructed Marchigiana cow, obtained from the Recon-3D LiDAR model.
Figure 11. Wither Height (A) and Chest Height (B) measurements (Z axis) in the digitally reconstructed Marchigiana cow, obtained from the Recon-3D LiDAR model.
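As Figures 10 and 11 illustrate, each digital measurement corresponds to an extent along one coordinate axis of the aligned model (X for widths, Y for lengths, Z for heights). Purely as a minimal sketch of that idea, assuming an already aligned and manually cropped point segment stored as a NumPy array (the function and variable names below are hypothetical and not taken from the authors' workflow):

```python
import numpy as np

def axis_extent(points: np.ndarray, axis: int) -> float:
    """Extent (max - min) of a point set along one coordinate axis.

    points: (N, 3) array of X, Y, Z coordinates in cm, already aligned so
            that Y runs nose-to-tail and Z is vertical, as in Figures 10-11.
    axis:   0 = X (widths), 1 = Y (lengths), 2 = Z (heights).
    """
    return float(points[:, axis].max() - points[:, axis].min())

# Hypothetical usage: 'chest_slice' would be a manually cropped
# cross-section around the girth, as in Figure 10C.
# chest_width_cm = axis_extent(chest_slice, axis=0)
```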
Figure 12. Comparison of the mean relative error (%) of the five morphometric measurements across the smartphone applications and technologies tested.
Figure 13. Linear regression analysis and R² results for Recon-3D LiDAR (A), Recon-3D Photogrammetry (B), KIRI Engine with Neural Surface Reconstruction (NSR) (C), KIRI Engine with 3D Gaussian Splatting (3DGS) (D), and Luma AI based on Neural Radiance Fields (NeRF) (E). The red line indicates the best-fit regression line, while the shaded blue area represents the 95% confidence interval.
Table 1. Features of the three applications used.

| Name | License | Price | Point Cloud | Mesh | File Format | Version |
|------|---------|-------|-------------|------|-------------|---------|
| Recon-3D | Free/By charge | 75 $/month | Yes | Yes | E57; and others | 1.9 |
| KIRI Engine | Free/By charge | 6.66 $/month | No | Yes | .las; .obj; and others | 3.13 |
| Luma AI | Free/By charge | 7.99 $/month | Yes | No | .ply; and others | 1.0 |
Table 2. Details on the applications and reconstruction techniques used; comparison between raw and cleaned point clouds from the three cows’ scans.

| Cow | Application | Reconstruction Technique | Data Type | Raw Number of Points | Number of Points After Registration and Cleaning |
|-----|-------------|--------------------------|-----------|----------------------|--------------------------------------------------|
| 1 | Recon-3D | LiDAR | Point cloud | 607,797 | 48,663 |
| 1 | Recon-3D | Photogrammetry | Point cloud | 895,267 | 104,681 |
| 1 | KIRI Engine | NSR | Mesh | 52,800 | 52,800 |
| 1 | KIRI Engine | 3DGS | Mesh | 505,455 | 119,712 |
| 1 | Luma AI | NeRF | Point cloud | 2,085,075 | 507,570 |
| 2 | Recon-3D | LiDAR | Point cloud | 224,671 | 47,779 |
| 2 | Recon-3D | Photogrammetry | Point cloud | 809,519 | 25,158 |
| 2 | KIRI Engine | NSR | Mesh | 58,940 | 57,543 |
| 2 | KIRI Engine | 3DGS | Mesh | 405,825 | 114,538 |
| 2 | Luma AI | NeRF | Point cloud | 2,062,468 | 147,983 |
| 3 | Recon-3D | LiDAR | Point cloud | 1,446,441 | 197,414 |
| 3 | Recon-3D | Photogrammetry | Point cloud | 540,772 | 33,681 |
| 3 | KIRI Engine | NSR | Mesh | 69,567 | 42,692 |
| 3 | KIRI Engine | 3DGS | Mesh | 426,957 | 108,988 |
| 3 | Luma AI | NeRF | Point cloud | 2,050,007 | 270,635 |
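Table 2 reports point counts before and after registration and cleaning; the study cites CloudCompare [53] for point cloud processing. For readers who prefer a scriptable route, a roughly equivalent outlier-removal step is sketched below with the open-source Open3D library. The file name and filter parameters are illustrative assumptions, not the settings used in this study.

```python
import open3d as o3d

# Load a raw scan exported from a smartphone app (path is illustrative).
pcd = o3d.io.read_point_cloud("cow1_recon3d_lidar.ply")
print(f"Raw points: {len(pcd.points)}")

# Statistical outlier removal: drop points whose mean distance to their
# 20 nearest neighbours deviates by more than 2 standard deviations.
cleaned, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
print(f"Points after cleaning: {len(cleaned.points)}")
```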
Table 3. Results of manual and digital measurements, including the relative error (r.e., %) of each technology for each cow. LiDAR and Photogrammetry were acquired with Recon-3D, NSR and 3DGS with KIRI Engine, and NeRF with Luma AI.

| Morphometric Measurement | Cow | Manual (cm) | LiDAR (cm) | r.e. (%) | Photogrammetry (cm) | r.e. (%) | NSR (cm) | r.e. (%) | 3DGS (cm) | r.e. (%) | NeRF (cm) | r.e. (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Body Length | 1 | 175.00 | 172.22 | 1.59 | 172.07 | 1.67 | 164.76 | 5.85 | 168.84 | 3.52 | 177.92 | −1.67 |
| Body Length | 2 | 172.50 | 182.42 | −5.75 | 167.25 | 3.04 | 209.74 | −21.59 | 176.33 | −2.22 | 184.51 | −6.96 |
| Body Length | 3 | 164.00 | 164.82 | −0.50 | 177.66 | −8.33 | 168.93 | −3.01 | 177.05 | −7.96 | 164.18 | −0.11 |
| Chest Height | 1 | 76.67 | 83.86 | −9.38 | 81.44 | −6.22 | 79.88 | −4.19 | 79.69 | −3.94 | 82.99 | −8.24 |
| Chest Height | 2 | 75.00 | 76.46 | −1.95 | 77.08 | −2.77 | 98.48 | −31.31 | 82.22 | −9.63 | 84.98 | −13.31 |
| Chest Height | 3 | 75.00 | 73.24 | 2.35 | 77.98 | −3.97 | 69.03 | 7.96 | 77.48 | −3.31 | 76.66 | −2.21 |
| Chest Width | 1 | 57.33 | 57.73 | −0.70 | 61.49 | −7.26 | 60.19 | −4.99 | 57.45 | −0.21 | 51.63 | 9.94 |
| Chest Width | 2 | 56.67 | 60.72 | −7.15 | 78.77 | −39.00 | 72.66 | −28.22 | 64.99 | −14.68 | 55.26 | 2.49 |
| Chest Width | 3 | 50.50 | 50.11 | 0.77 | 49.20 | 2.57 | 44.58 | 11.72 | 57.26 | −13.39 | 52.80 | −4.55 |
| Rump Length | 1 | 56.67 | 49.78 | 12.16 | 52.54 | 7.29 | 53.73 | 5.19 | 44.52 | 21.44 | 56.95 | −0.49 |
| Rump Length | 2 | 56.00 | 53.98 | 3.61 | 57.51 | −2.70 | 61.05 | −9.02 | 50.20 | 10.36 | 54.77 | 2.20 |
| Rump Length | 3 | 49.67 | 51.13 | −2.94 | 53.26 | −7.23 | 54.37 | −9.46 | 56.54 | −13.83 | 50.02 | −0.70 |
| Wither Height | 1 | 149.33 | 144.80 | 3.03 | 145.77 | 2.38 | 150.66 | −0.89 | 149.20 | 0.09 | 159.47 | −6.79 |
| Wither Height | 2 | 149.67 | 147.44 | 1.49 | 151.07 | −0.94 | 167.71 | −12.05 | 152.05 | −1.59 | 169.51 | −13.26 |
| Wither Height | 3 | 148.67 | 143.25 | 3.65 | 147.46 | 0.81 | 148.26 | 0.28 | 150.94 | −1.53 | 149.12 | −0.30 |
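The r.e. values in Table 3 are consistent with the convention r.e. = (manual − digital) / manual × 100, so a positive value means the 3D model underestimates the manual measure and a negative value means it overestimates it. A minimal sketch reproducing the Body Length rows for the Recon-3D LiDAR model, assuming this convention:

```python
def relative_error(manual_cm: float, digital_cm: float) -> float:
    """Relative error (%) of a digital measure against the manual reference."""
    return (manual_cm - digital_cm) / manual_cm * 100.0

# Body Length, Recon-3D LiDAR (Table 3): (manual, digital) pairs in cm.
pairs = [(175.00, 172.22), (172.50, 182.42), (164.00, 164.82)]
for manual, digital in pairs:
    print(f"{relative_error(manual, digital):+.2f}%")  # +1.59%, -5.75%, -0.50%
```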
Table 4. Mean (µ r.e.) and standard deviation (σ r.e.) of the relative error, and Pearson correlation (r), for Recon-3D applications (LiDAR/Photogrammetry), KIRI Engine applications (NSR/3DGS), and Luma AI (NeRF). Each cell reports µ r.e. (%) / σ r.e. (%) / r.

| Morphometric Measurement | Recon-3D LiDAR | Recon-3D Photogrammetry | KIRI Engine NSR | KIRI Engine 3DGS | Luma AI NeRF |
|---|---|---|---|---|---|
| Body Length | −1.55 / 3.78 / 0.67 | −1.21 / 6.21 / −0.77 | −6.25 / 14.00 / 0.22 | −2.22 / 5.74 / −0.73 | −2.91 / 3.59 / 0.86 |
| Chest Height | −2.99 / 5.93 / 0.96 | −4.32 / 1.75 / 0.98 | −9.18 / 20.10 / −0.15 | −5.63 / 3.48 / −0.04 | −7.92 / 5.56 / 0.29 |
| Chest Width | −2.36 / 4.21 / 0.93 | −14.56 / 21.73 / 0.76 | −7.16 / 20.06 / 0.85 | −9.43 / 8.01 / 0.44 | 2.63 / 7.25 / 0.11 |
| Rump Length | 4.28 / 7.57 / 0.12 | −0.88 / 7.43 / 0.30 | −4.43 / 8.33 / 0.35 | 5.99 / 18.04 / −0.92 | 0.34 / 1.62 / 0.97 |
| Wither Height | 2.72 / 1.11 / 0.95 | 0.75 / 1.66 / 0.52 | −4.22 / 6.81 / 0.83 | −1.01 / 0.95 / 0.21 | −6.78 / 6.48 / 0.98 |
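The statistics in Table 4 follow from the three per-cow values in Table 3: µ r.e. appears to be the arithmetic mean of the relative errors, σ r.e. their sample standard deviation, and r the Pearson correlation between manual and digital measures. A short verification sketch for Body Length with Recon-3D LiDAR (Python 3.10+ is assumed for statistics.correlation):

```python
import statistics as st

# Per-cow relative errors and measurement pairs from Table 3
# (Body Length, Recon-3D LiDAR).
re_values = [1.59, -5.75, -0.50]
manual = [175.00, 172.50, 164.00]
digital = [172.22, 182.42, 164.82]

mu = st.mean(re_values)              # -1.55
sigma = st.stdev(re_values)          # 3.78 (sample SD, divisor n - 1)
r = st.correlation(manual, digital)  # 0.67 (Pearson)
print(f"mu = {mu:.2f}%, sigma = {sigma:.2f}%, r = {r:.2f}")
```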
Table 5. RMSE and MAPE results for Recon-3D applications (LiDAR/Photogrammetry), KIRI Engine applications (NSR/3DGS), and Luma AI (NeRF). Each cell reports RMSE (cm) / MAPE (%).

| Morphometric Measurement | Recon-3D LiDAR | Recon-3D Photogrammetry | KIRI Engine NSR | KIRI Engine 3DGS | Luma AI NeRF |
|---|---|---|---|---|---|
| Body Length | 5.97 / 2.61 | 8.62 / 4.35 | 22.48 / 10.15 | 8.62 / 4.57 | 7.14 / 2.91 |
| Chest Height | 4.36 / 4.56 | 3.46 / 4.32 | 14.11 / 14.48 | 4.74 / 5.62 | 6.89 / 7.92 |
| Chest Width | 2.36 / 2.87 | 13.01 / 16.28 | 9.98 / 14.98 | 6.19 / 9.43 | 3.64 / 5.66 |
| Rump Length | 4.23 / 6.23 | 3.28 / 5.74 | 4.33 / 7.89 | 8.73 / 15.21 | 0.75 / 1.13 |
| Wither Height | 4.28 / 2.72 | 2.32 / 1.38 | 10.45 / 4.41 | 1.9 / 1.07 | 12.87 / 6.78 |
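The RMSE and MAPE values in Table 5 can likewise be recomputed from Table 3; they are consistent with RMSE expressed in centimetres and MAPE as the mean absolute relative error in percent. A minimal sketch for Body Length with Recon-3D LiDAR:

```python
import math

manual = [175.00, 172.50, 164.00]   # manual measures (cm), Table 3
digital = [172.22, 182.42, 164.82]  # Recon-3D LiDAR measures (cm)

n = len(manual)
rmse = math.sqrt(sum((m - d) ** 2 for m, d in zip(manual, digital)) / n)
mape = sum(abs(m - d) / m for m, d in zip(manual, digital)) / n * 100.0
print(f"RMSE = {rmse:.2f} cm, MAPE = {mape:.2f}%")  # RMSE = 5.97 cm, MAPE = 2.61%
```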