Article

The Use of Computer Vision Methodologies to Estimate the Volume of Powdered Substance Shapes

1 Faculty of Technical Sciences, University of Novi Sad, Trg D. Obradovića 6, 21000 Novi Sad, Serbia
2 Intens doo, Bulevar Evrope 28, 21000 Novi Sad, Serbia
* Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(4), 2053; https://doi.org/10.3390/app16042053
Submission received: 21 January 2026 / Revised: 9 February 2026 / Accepted: 14 February 2026 / Published: 19 February 2026

Abstract

Many compressed-air devices are energy inefficient. One example is the use of air nozzles above pastry lines to remove flour and cool products. These nozzles consume excessive energy, particularly when mounted too high, which requires stronger airflow. Adjustable nozzle height and energy-efficient nozzles should be used with careful control of air pressure, flow rate, and activation time, ensuring efficient and adaptive control. Additionally, sensor-based control should activate airflow only when pastries are present and only until the correct amount of powder material has been blown off, as the nozzles often operate unnecessarily. Accurate measurement of powder volume after blow-off remains a challenge. Using computer vision methodology, such a system could continuously read the measured values and determine not only the optimal moment to interrupt device operation but also dynamically adjust key parameters. This paper demonstrates that computer vision can estimate powder volume using two non-contact 3D methods: a depth camera and a structured light scanner. Their accuracy, reliability, advantages, and limitations are analyzed. The results show that the structured light scanner can be used in the static case (the conveyor belt with products stops at the moment when a 3D measurement must be performed); this approach shows higher repeatability and gives a more accurate 3D model. On the other hand, for the dynamic case (the conveyor belt with products moves while the 3D measurement device is fixed), the depth camera can be used because, at minimum rotation speeds of the substrate, it shows higher accuracy and enables faster adaptive modeling and creation of the necessary data.

1. Introduction

Powdered materials are widely present in modern industry and research, from food and pharmaceutical to metallurgical and additive manufacturing [1,2]. Their properties significantly affect the flow and efficiency of processes such as transport, mixing, pressing, classification, baking, drying or storage [3]. For this reason, the properties of powder materials are investigated in order to ensure the reliability and repeatability of the final product [4]. The characteristics of powder materials directly affect their flow through the production system and, in turn, the consistency and quality of the final product [5]. To enable these properties to be monitored and compared, appropriate databases for these data have been developed [6].
Two important characteristics of powdered materials, which even the user can measure with a certain accuracy, are mass and volume. Accurate measurement of the volume of powdered materials is of crucial importance in the food industry, especially in dosing, packaging, blowing excess powder and quality control processes, where uncertainty in volume can lead to significant variations in products [5], as well as in the energy efficiency of a process. Energy efficient control of nozzle operation involves optimizing the use of compressed air to reduce unnecessary energy consumption [7,8,9]. Nozzles are used to remove flour and cool products, but they often operate continuously, even when no products are present on the line. This makes the process inefficient. In addition, improper nozzle positioning requires higher air pressure, further increasing energy use. By using sensors and adjustable airflow settings, nozzles can be activated only when needed, improving the overall energy efficiency of the process.
Conventional methods for determining volume are based on the use of containers of known volume (cups, cylinders), measuring mass and calculating density [10]. Although simple to apply, these methods have limitations that can lead to significant deviations in results, primarily due to mechanical effects on particle distribution, material compaction, and the creation of air voids during measurement [11]. These factors lead to volume changes during manipulation [12]. To avoid these effects, the development of non-contact measurement methods, which allow accurate volume determination without disturbing the material structure, is receiving increasing attention [13,14]. The accuracy of volume measurements is also of economic importance, as it enables efficient raw material management, production process stability and energy efficiency control of production equipment in industries that rely on handling powder materials. With the use of these methods, there is a clear trend of transition from two-dimensional to three-dimensional measurements [15]. This aspect was also investigated in this work, by examining 2D and then 3D volume estimation using computer vision methodology with the help of a depth camera and a 3D scanner.
Three-dimensional measurement [16] is the process of collecting three-dimensional data about the surface of an object to create a digital model of it. This technology allows for the precise recording of the shape, dimensions, and surface contours of an object [17]. Obtaining accurate data about a desired object is of great importance in industrial applications such as quality control and reverse engineering [18,19,20,21].
The 3D measurement process is also used in areas such as architecture and construction [22,23], cultural heritage [24,25], medicine (dentistry, osteology, otolaryngology) [26,27,28,29,30], industry 4.0 (production automation) [31,32,33,34,35], robotics and autonomous systems [36,37,38,39,40], energy (power line construction) [41], augmented and virtual reality, and digital twin applications [42,43], art and design [44], etc.
Three-dimensional measurement technology relies on various physical principles that can be divided into contact and non-contact measurement methods. Contact methods require the contact of a mechanical probe with the object whose 3D model is being determined. Depending on the hardness of the material, they can lead to deformation of the object during the measurement process. Consequently, they are not suitable for reconstructing the shape and volume of powder material. Non-contact measurement methods can be active or passive, depending on whether they emit some form of radiation or light towards the object [45]. Various 3D non-contact measurement technologies are based on different physical principles. Structured light measures the deformation of a projected light pattern on the surface, where the scanner’s camera captures frame-by-frame reflections to determine the object’s shape [46]. Laser pulse (Time-of-Flight) techniques reconstruct the object by measuring the time required for a laser beam to travel to the surface and back to the sensor [47]. Laser triangulation relies on projecting a laser beam and analyzing the reflection angle and deformation to calculate the distance of surface points from the scanner [48]. Finally, photogrammetry, or “3D scanning from photographs,” reconstructs the object from multiple 2D images taken from different angles using computer vision and computational geometry algorithms [49]. Only photogrammetry is passive, while the other three methods are active. For the experimental tests in Section 3, two types of active measurement devices are used. The first measurement device used is a depth camera that operates on the principle of a laser beam in combination with stereovision [50], and the second is a scanner that operates on the principle of structured light [51].
As previously mentioned, the idea behind this paper is that the shortcomings of traditional methods can be overcome by applying a non-contact 3D method for volume estimation. This method eliminates physical contact with the object and thus reduces the possibility of deformation or change in the volume of powder material. In order for such a method to be reliably applied in practice, it is necessary to verify it. The verification was performed by comparing the obtained results (volume ratios) with previously known mass ratios of scanned bulk powder, as explained in detail in Section 3.7. This verification method has been employed in applications such as blowing off excess powder material (for example, when removing excess sugar from the surface of a cake, or layers of unwanted powder in additive manufacturing, surface cleaning, etc.) when it is important to know the percentage of blown or remaining powder [52]. The aim of this work is the experimental verification of such a non-contact method using two different types of devices: a depth camera, and a 3D scanner.
This paper compares the accuracy and repeatability, as well as the potential and limitations, of the two types of devices for non-contact volume estimation of a bulk powdered material. Measurements were performed using T-850 flour. Before 3D scanning, 2D measurements were performed, which served as motivation for using 3D scanning technologies.
The paper is organized in the following manner: Section 2 presents volume estimation using 2D measurements, as well as the disadvantages of this method. In Section 3, a detailed description of the experimental setup using 3D technologies is presented. The most important results are shown in Section 4 in the form of tables and graphs. A detailed discussion of the limitations, advantages and disadvantages of the proposed methods is presented in Section 5. The most important conclusions are drawn in Section 6. Finally, the program code for system control (rotary table) and the pseudocode for custom volume estimation are given in Appendix A and Appendix B, respectively.

2. Volume Estimation Using 2D Measurements

Initial measurements of volume estimation were performed using 2D technology on the developed device for automated quantification of the effectiveness of pneumatic nozzles (Figure 1). To estimate volume, an image of the powder on the flat surface of the bowl was taken [53].
The bowl is black, while the powder is white. The powder material was poured into the bowl and leveled with the top of the bowl, so that the height of the powder corresponds to the height of the bowl. In this way, the volume of the powder material before blow-out equals the volume of the cylinder formed by the inner space of the bowl; this volume can be calculated mathematically and is therefore known. The powder material was blown out using compressed air released through nozzles. Images of the powder distribution after blowing were obtained using a mobile phone camera. Each image was binarized using the developed image processing application [53], and the ratio of black to white areas in the processed image was calculated. The percentage of black is the percentage of blown-out powder.
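As an illustration only (not the authors' application from [53]), the black-to-white ratio on an already binarized top-view image can be computed as follows, assuming 0 encodes black (powder removed) and 1 encodes white (powder remaining):

```python
def blowout_percentage(binary_image):
    """Estimate the percentage of blown-out powder from a binarized
    top-view image given as nested lists, where 0 = black (powder
    removed) and 1 = white (powder remaining)."""
    total = sum(len(row) for row in binary_image)
    black = sum(row.count(0) for row in binary_image)
    return 100.0 * black / total

# Toy 4x4 image: 6 black pixels out of 16 -> 37.5% blown out
img = [
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
    [1, 1, 1, 1],
]
```

In the real application, the binarization itself (thresholding the camera image of white powder on a black bowl) is performed by the image processing application described in [53].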
The blowing was performed from heights of 100 mm, 200 mm and 300 mm, while the height of the powder level in the bowl was 1 mm, 1.5 mm and 2 mm. The blow-out pressure was varied between 2 bar, 4 bar and 6 bar (Table 1). For easier understanding, one set of results is shown in Figure 2. The marks 1, 2 and 3 refer to the blow-out results at pressures of 6 bar, 4 bar and 2 bar, respectively; the corresponding blow-out percentages are 48%, 31% and 9%. As the nozzle is positioned vertically above the powder bowl, the powder is blown out mostly in the central part (black color), while the remaining powder material is distributed around the edges (white color).
During these measurements, certain simplifications were made. Due to the small heights of the inner space of the bowl (1 mm, 1.5 mm and 2 mm), the bulk powder material is treated as a planar problem. For this purpose, the upper surface is aligned with the top of the bowl, so that a flat surface is obtained when viewed from above. However, regardless of the small initial powder height, after blow-out the powder in individual regions can lie at any height between 0 mm and 2 mm, meaning that a three-dimensional structure is again obtained. Taking this into account, this method cannot be very precise. To compensate for the shortcomings of the 2D measurement method, a different approach is necessary. The possibility of 3D volume quantification and estimation was therefore analyzed for arbitrary (non-uniform) arrangements of powder material, as proposed in the continuation of this work.

3. Experimental Setup

In order to determine the volume ratio of powdered material before and after an industrial process, such as blow-out, experimental studies were carried out with a depth camera and a 3D scanner. For this purpose, a rotary table was developed as a moving part, while the 3D measurement devices were in fixed positions. Wheat flour T-850 was used as a representative powdered material.

3.1. Rotary Table

The flour was located on a rotating platform driven by a stepper motor. The control electronics for platform rotation were developed in-house; all components are shown in Figure 3. The wiring diagram centers on a stepper motor (Figure 3, Position 7). A power supply (Mean Well DR-30-24, 30 W/24 V/0–1.5 A; Mean Well Enterprises Co., Ltd., New Taipei City, Taiwan; Figure 3, Position 1) provides the main 24 VDC power, routed through a normally open ON-OFF switch (rated up to 250 V; Figure 3, Position 2) for system control. The 24 VDC line feeds both a TB6600 stepper driver (Joy-IT, Neukirchen-Vluyn, Germany; Figure 3, Position 6), rated at 4.0 A, and an L7805CV voltage regulator (STMicroelectronics, Geneva, Switzerland; Figure 3, Position 4), which steps the voltage down to 5 VDC for the microcontroller board (Arduino UNO; Arduino AG, Qualcomm, San Diego, California, USA; Figure 3, Position 3) and a B50K potentiometer (Taiwan Alpha Electronic Co., Ltd., Taoyuan City, Taiwan; Figure 3, Position 5). The potentiometer sends an adjustable analog signal to the Arduino, which processes the input and outputs step and direction control signals to the TB6600 driver; the driver in turn powers and controls a NEMA 17 stepper motor (Changzhou Jkongmotor Co., Ltd., Changzhou, China), completing the chain from power source through regulation, control logic, and user input to the final mechanical actuation. The platform rotation time can be adjusted manually, using the potentiometer, or in software.
The wiring diagram for rotary table control was created and simulated in the Cirkit Designer online application, while the program code was created in the free open-source Arduino environment, Arduino IDE. The program code for the system control (rotary table) is shown in Appendix A of this paper.

3.2. Depth Camera

The first experimental setup used the Intel RealSense Depth Camera D435 [54]. Camera calibration [55] was performed within the manufacturer’s software environment, Intel RealSense Viewer v2.55.1 (Figure 4, Position 3). Three types of device calibration were carried out: on-chip calibration (Figure 4, Position 1B), tare calibration (Figure 4, Position 2B), and focal length calibration (Figure 4, Position 2B).
The first method focuses on the camera’s ability to detect objects and display their position with a low noise level. In this way, repeatability is improved, meaning the relative measurement error is reduced. This is the on-chip calibration (Figure 4, Position 1B). For this purpose, a textured target was printed on A4 paper (Figure 4, Position 1A). A wall or blinds containing textures can also be used for calibration. The exact nature of the texture is not crucial, as long as the texture distribution is random and there is a significant amount of “noise” on the surface with high spatial frequencies.
The textured target should appear in the center of the camera’s field of view. The camera should be moved away from the textured paper until at least 35% of the camera’s field of view is covered by the texture. After the calibration routine is started and successfully completed, a health-check result is displayed along with the option to choose between old and new calibration parameters. The health-check number is the normalized “Calibration Error,” where an absolute value lower than 0.25 is acceptable, although values closer to 0 are preferred. If, instead of a health-check number, an error message appears in a red field, it is necessary to correct the position and/or orientation of the camera and/or the textured target and repeat the procedure. The achieved health-check number was 0.07.
The next type of calibration is used to improve the accuracy of depth measurement, that is, determining the exact distance. This is tare calibration with a ground-truth target (Figure 4, Position 2B). The target was again printed on A4 paper (Figure 4, Position 2A), ensuring that the print scale was maintained at 100%. The target contains four Gaussian blur markers that form the vertices of a rectangle. The background of the target is a low-contrast texture. The diameter of the markers is 30 mm, while the sides of the rectangle they form are 175 mm and 100 mm. After printing, these distances were verified.
The camera was positioned perpendicular to the plane of the target at an exact distance of 1000 mm. It is important that the printed target is located within the orange rectangle in the center of the camera display. The goal is to align the sides of the orange rectangle in the software interface as closely as possible with the rectangle on the printed target. A slight gap between the corresponding sides is recommended, while the markers remain inside the orange rectangle.
After this setup, the distance between the camera and the target (1000 mm) is entered into the Ground Truth (mm) field in the software. Confirming this distance prompts the user to enter the side lengths of the rectangle on the target (175 mm and 100 mm). After confirming the side lengths, the camera’s measured distance is displayed as a result, after which calibration is performed. If proper alignment of the rectangles was not achieved, an error message with a red Retry field will appear, in which case it is necessary to re-center the printed target so that the rectangles approximately match, and then repeat the calibration.
Ideally, the left and right cameras of the stereo pair should have identical focal lengths. However, if differences exist between them, they should be minimized. To correct the left–right camera focal length imbalance, a third type of calibration called focal length calibration is performed (Figure 4, Position 2B). The same target (Figure 4, Position 2A) used for tare calibration is used for this process. First, the side lengths of the rectangle on the target are entered. After calibration is completed (Figure 4, Position 3), as with the previous two types, the obtained values are displayed.
For each calibration type, if the Apply New option is selected, a confirmation of successful calibration appears with the message “Calibration Complete,” indicating that the calibration parameters have been successfully updated.
The measurements were started after device calibration. During the experiment, the camera location was fixed at a height of 450 mm and a distance of 370 mm from the center of the rotating platform. The camera tilt relative to the horizontal position was 35°. The experimental setup with the depth camera is given in Figure 5.
The exact position and tilt angle of the depth camera were determined by monitoring the camera view within the RecFusion Pro 2.3.0 software [56]. RecFusion Pro is a commercially available, industrial 3D reconstruction software. It supports a wide variety of RGB-D sensors, such as the Intel RealSense (RealSense, Inc., Cupertino, California, USA) and the Microsoft Azure Kinect (Microsoft, Redmond, Washington, USA); through its SDK, any sensor that provides depth and calibration information can be used.
In this case, the height of the depth camera relative to the bulk flour was determined so that the flour was within the software’s field of view. The tilt angle was determined so that the flour stack occupied an approximately central position in the display. The green color of the flour stack suggests its successful detection and stable scanning.
The final position of the camera mount, which determined its distance, was set according to the depth map in the software. When using the depth camera, each measurement lasted exactly 50 s, which was preset in the RecFusion software. Important settings for the scanning process that were made within the RecFusion software are the following: depth format, 1280 × 720 30 fps [57]; color format, 1280 × 720 (RGB24) 30 fps; IR exposure, 201 μs (0.201 ms) [58]; IR gain: 16 [59]; and volume resolution 640 voxels (voxel resolution 0.8 mm).

3.3. Structured Light 3D Scanner

For the second experimental setup, a Shining 3D EinScan PRO 2X, SHINING 3D, Zhejiang, China, 3D scanner was used. Scanner calibration [60] (Figure 6) was performed within the manufacturer’s software environment, Shining EXScan Pro 2X (Figure 6, Position 1C), which is freely available [61]. It has comprehensive scanning capabilities, advanced post-processing options, and precise measurement tools for exporting digital 3D models.
Calibration is carried out using a calibration plate supplied by the manufacturer (Figure 6, Position 1A), which contains a serial number. The calibration plate is placed on a flat, horizontal surface with the black side and white circles (markers) facing upward (toward the scanner). The scanner projects a cross-shaped pattern that should be positioned inside the white square at the center of the calibration plate.
Holding the scanner in hand (Figure 6, Position 1B), calibration is started by pressing the Play button. During this process, the scanner is held vertically and perpendicular to the calibration plate. Moving the scanner vertically upward changes the distance between the projector and the plate. Within the software interface, fields with corresponding colors turn green as confirmation that the required height has been reached. When all fields turn green, this indicates the completion of this phase.
The next phase involves placing the calibration plate into a holder provided by the manufacturer, so that the plate rests at a specific angle relative to the flat horizontal surface. This angle is defined by the factory-made holder. The previous vertical movement procedure is then repeated four times with the plate in this position, with the holder and plate rotated by 90° each time relative to the previous position. In this way, the calibration plate makes one full rotation.
The scanner can be moved either from an upper position downward or from a lower position upward. If one of the height positions is missed due to moving too quickly up or down, the scanner can be moved vertically again afterward until that position is reached.
After that, the calibration parameters of the scanner are determined within the software (Figure 6, Position 2). The result is given as a deviation in pixels. To obtain the deviation in millimeters, instead of selecting the Calibration tab, the Accuracy tab must be selected. The calibration plate is again placed flat on a horizontal surface, and the scanner is manually moved up and down once more to reach each of the specified heights. The result is displayed as a deviation in millimeters. The obtained deviation was 0.017546 mm (Figure 6, Position 3).
The measurements were started after device calibration. During the experiment, the scanner location was fixed at a height of 350 mm and at a distance of 350 mm from the center plane of the rotating platform. The angle of the scanner relative to the horizontal position was the same as that of the depth camera, i.e., 35°. The experimental setup for the scanner measurement is presented in Figure 7.
Similar to the depth camera, the 3D scanner position and orientation were set by monitoring the software view, with only a different software package (EXScan Pro) being used. Initially, the same position parameters were set as in the experimental setup with the depth camera; differences in the scanner setup arose as a result of monitoring the software view.
The final position of the scanner mount, which determined its distance, was adjusted according to the depth map in the software. Unlike the experimental setup with the depth camera, a sheet (Figure 7b, Position 2) was added under the flour substrate. The reason for this is to add markers (Figure 7a) that the scanner will recognize, enabling better orientation. Markers were glued to the upper sheet surface so that the flour substrate did not obscure them. The markers were positioned as randomly as possible. The spatial orientation of the scanner when scanning flour on a rotating platform was improved by marker recognition. The scanner thus focused better on the scanning object (bulk flour) and its surface parts that were not yet reconstructed.
In order to find a compromise between the scanning process time and scanning accuracy, the Handheld Rapid Scan mode of the scanner was selected. Significant technical specifications of the EinScan PRO 2X scanner in this mode are given in [62].
The selection of the scanning mode, as well as starting, monitoring and stopping the process, was performed within the EXScan Pro software. By selecting the scanning mode, the software parameters of the selected operating mode were set automatically. Upon completion of the scan, a file containing the digital mesh was created.
In both cases, test measurements were performed to verify that the display within the software environment was stable. During these measurements, fine-tuning of the final position and orientation of the depth camera and 3D scanner was performed.

3.4. Devices Technical Comparison

To properly test and compare the Intel RealSense Depth Camera D435 with the Shining 3D EinScan Pro 2X (2020), the comparison was broken into clear, measurable categories and tests. These two devices are very different (Table 2); the D435 is a depth-sensing camera (RGB + active stereo depth) aimed at robots, perception, tracking, AR/3D scanning, etc., while the EinScan Pro 2X is a handheld structured-light/scanning system designed primarily for high-accuracy 3D digitizing.
Because they serve different roles, the tests differ, but the same principles are applied: real performance in key metrics is measured under the same conditions and then compared. Not all parameters could be included in the comparison, so for the purpose of this paper, repeatability and accuracy were chosen as the main parameters.
The following subsections provide detailed measurement procedures for the depth camera and 3D scanner devices.

3.5. Single Measurement Procedure

Measurement repeatability and accuracy were evaluated for both devices. For repeatability testing, five measurements were conducted using the same bulk flour mass of 500 g. Accuracy was assessed using smaller sample masses. To ensure consistent conditions, the bulk flour shape was kept identical for each device. Before each test, the sample mass was verified with a scale accurate to 1 g. The measured sample masses are presented in Table 3.
Before performing the 3D measurement, the depth camera or 3D scanner is connected to a computer running the appropriate software, which displays both the scanning process and the results. For both devices, the software generates a depth map that allows visual verification of scan stability. The measurement begins only after the depth map becomes stable.
Regardless of the computer vision technology used, the initial measurement output is a point cloud. These data were then merged and processed by the software to create a digital mesh. The final measurement result is saved as a .ply file.
The complete workflow for a single measurement for both devices is illustrated in the diagram shown in Figure 8.

3.6. Computer Vision Method for Volume Estimation

In this research, a custom-built algorithm (its pseudocode is presented in Appendix B) was used to determine the volume of the scanned surface, circumventing limitations of the scanning process. Because the scanned surface is extremely homogeneous, standard algorithms [63] for scanning and surface generation from a point cloud, such as structure from motion, stereophotogrammetry and short-range scanning, give noisy reconstructions. The result is a surface with many small bumps in regions that are planar in reality.
While the resulting surface has some limitations, it also has properties that allow faster determination of volume. One of them is that the real-life axis of the scanned object and the axis of the 3D model are aligned. In this case, it means that the flat area on which the material rests is represented by a large number of nearly planar triangles with very small variation along a single axis. By building a histogram of vertex positions and iteratively narrowing the range over which the histogram is observed, the height at which the material model starts can be determined in just a few iterations, as shown in Figure 9. Since this method yields the mean height of the vertices representing the “table” surface (as mentioned, bumps are part of the model), a noise threshold is used, calculated as this histogram-derived height plus 1% of the span of the model’s values along that dimension. Only the polygons positioned above this plane are used, and these determine the material model.
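The iterative histogram narrowing can be sketched as follows (a simplified pure-Python illustration of the idea, with vertex heights given as a plain list; the authors' actual implementation is the pseudocode in Appendix B):

```python
def base_plane_height(heights, bins=50, iterations=3):
    """Estimate the height of the flat 'table' plane by repeatedly
    zooming the histogram in on its most populated bin, then add a
    noise margin of 1% of the model's total height span."""
    lo, hi = min(heights), max(heights)
    for _ in range(iterations):
        width = (hi - lo) / bins
        if width == 0:                       # degenerate: all heights equal
            break
        counts = [0] * bins
        for h in heights:
            if lo <= h <= hi:
                i = min(int((h - lo) / width), bins - 1)
                counts[i] += 1
        k = counts.index(max(counts))        # densest bin = table surface
        lo, hi = lo + k * width, lo + (k + 1) * width
    plane = (lo + hi) / 2
    # Noise threshold: mean table height + 1% of the total span
    return plane + 0.01 * (max(heights) - min(heights))
```

Vertices above the returned height are kept as candidates for the material model; those below are treated as the supporting surface.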
After determining the threshold and which polygons lie above it, it was necessary to identify which of them were part of the scanned mass. While thresholding reduces the number of noise polygons, some objects resulting from errors in the scanning process still remain. Therefore, machine learning clustering algorithms are applied to group the vertices into clusters. The height dimension is eliminated and the cluster with the largest number of nodes is identified. Reasonably advanced clustering methods should be used for this purpose, as simple ones such as KMeans showed very poor results. In this experimental test, the DBSCAN algorithm gave good results. After determining the largest cluster (in the obtained results, the largest cluster was several orders of magnitude bigger than the second largest), only the polygons formed from its nodes were retained.
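In practice a library implementation (e.g., scikit-learn's DBSCAN) would typically be used; purely to illustrate the density-based idea, a minimal 2D version and the largest-cluster selection can be sketched as:

```python
from collections import Counter

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN on 2D points; returns cluster labels (-1 = noise)."""
    def neighbors(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps ** 2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1                   # provisionally noise
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster          # noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs_j = neighbors(j)
            if len(nbrs_j) >= min_pts:       # core point: keep expanding
                queue.extend(nbrs_j)
    return labels

def largest_cluster(points, eps, min_pts):
    """Return the points belonging to the most populated cluster."""
    labels = dbscan(points, eps, min_pts)
    counts = Counter(l for l in labels if l != -1)
    best = counts.most_common(1)[0][0]
    return [p for p, l in zip(points, labels) if l == best]
```

The `eps` and `min_pts` parameters are hypothetical tuning values here; in the experiment they would be chosen to match the vertex density of the scanned mesh.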
To determine the volume of the model, the center of mass was first determined by calculating the mean of all vertex positions included in the material model. Its coordinate on the height axis is then set to the same value as the surface plane. From this point, a tetrahedron is formed for each polygon of the material model, and the sum of the tetrahedron volumes gives the approximate volume of the model (as shown in Figure 10). It should be noted that the model’s volume is given in abstract units, as there is no reference scale in the scanned model. This does not affect the results: the camera model and position did not change, so all scanned results share the same proportions in model space.
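The tetrahedron summation can be illustrated with a short routine (a sketch of the general technique, not the authors' exact code; vertex coordinates, triangle index triples, and the apex point are assumed inputs):

```python
def mesh_volume(vertices, triangles, apex):
    """Approximate the volume between a triangulated surface and a fixed
    apex point as a sum of tetrahedron volumes: V = |det(a-d, b-d, c-d)| / 6
    for each surface triangle (a, b, c) and apex d."""
    def sub(p, q):
        return (p[0] - q[0], p[1] - q[1], p[2] - q[2])

    def det3(u, v, w):
        return (u[0] * (v[1] * w[2] - v[2] * w[1])
              - u[1] * (v[0] * w[2] - v[2] * w[0])
              + u[2] * (v[0] * w[1] - v[1] * w[0]))

    total = 0.0
    for i, j, k in triangles:
        a, b, c = vertices[i], vertices[j], vertices[k]
        total += abs(det3(sub(a, apex), sub(b, apex), sub(c, apex))) / 6.0
    return total
```

For example, a 2 × 2 square at height 1, split into two triangles with the apex at (1, 1, 0), forms a pyramid of volume (1/3) · 4 · 1 = 4/3, which the routine reproduces.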

3.7. Practical Determination of Reference Flour Volume

For the purpose of transformation into a self-explanatory unit and for comparison, an experimental determination of the volume was performed [10]. A cup with a height of 88 mm and a diameter of 72 mm was used. The mass of the cup filled to the top with flour, the mass of the empty cup, and the mass of the bulk flour poured from the cup were measured; the obtained values were 516 g, 327 g, and 188 g, respectively, as shown in Figure 11. The results are in line with expectations, since the difference (516 g − 327 g = 189 g) deviates from the directly measured 188 g by the 1 g measurement error.
Therefore, it can be concluded that the mass of the material is 188 g. Since the shape of the cup is that of a cylinder, the volume of the cylinder was calculated and associated with this mass of bulk material. The equation for the volume of a cylinder is
V = πr²h,
where V is the volume, π = 3.14, r is the radius of the base, and h is the cylinder height. The obtained value is 358,110.72 mm³. By proportion, the volume corresponding to 500 g is 952,422.13 mm³. This value is used as the reference value in the comparison method.
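The arithmetic above can be verified in a few lines; the values mirror the paper's (π rounded to 3.14, 188 g per cup, scaled proportionally to 500 g).

```python
# reproduce the reference-volume arithmetic from the paper
r, h = 72 / 2, 88            # cup radius and height in mm
pi_paper = 3.14              # the paper rounds pi to 3.14
v_cup = pi_paper * r**2 * h  # cup (cylinder) volume in mm^3, holding 188 g
v_500g = v_cup * 500 / 188   # proportional volume for 500 g of flour
```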
In this way, the approximate volume of 500 g of flour is obtained in mm³, against which the abstract values obtained after creation of the 3D model can be compared; the abstract model units are thus interpreted as mm³.

4. Results

4.1. Depth Camera

Regarding the repeatability of volume measurements using a depth camera and subsequent software processing, the obtained results are given in Table 4. Measurements were performed with the identical bulk flour mass shown in Table 3.
The mean value of the obtained volumes is 927,953.44 mm³, and the maximum relative error is 1.65%.
After software processing, the results of depth camera measurements for different mass values are given in Table 5. The used mass values are from Table 3.
Following the procedure previously described for the depth camera, volume measurements were also performed with the 3D scanner.

4.2. 3D Scanner

Regarding the repeatability of volume measurements with the scanner and subsequent software processing, the obtained results are given in Table 6. For comparison, measurements were performed with an identical bulk flour mass as in the camera measurements; the mass of the flour was 500 g.
Each measurement using the 3D scanner lasted approximately 50 s, as the measurement was stopped manually once the current scan was judged to have reached saturation in terms of the number of points detected by the scanner. Due to the nature of the scan and the geometry of the bulk flour itself, the measurement times oscillated around 50 s.
The mean value of the obtained volumes is 971,938.3732 mm³, and the maximum relative error is 1.552%.
After software processing, the results of 3D scanner measurements for different mass values are given in Table 7. The used mass values are from Table 3.

5. Discussion

The starting points for this research are the known masses of bulk flour. More precisely, the mass ratios are known and are compared with the corresponding volume ratios obtained. The masses were measured with a weighing scale; this measurement depends strongly on the method of pouring the flour, as well as on the accuracy of the scale. For volume estimation, this is the input phase, which serves as a reference value. The error is expressed as a percentage.

5.1. Repeatability Measurement

The repeatability of both 3D measurement cases was assessed using measurements of an identical pile of bulk flour. The mean volume value of measurements on a mass of 500 g is higher when measured using the scanner (971,938.3732 mm³ vs. 927,953.4395 mm³).
The analysis showed similar performance for both systems. The maximum relative error was 1.649% for the depth camera and 1.552% for the scanner. Although the depth camera achieved the lowest absolute deviation (544.08 mm³), its error distribution was wider, ranging up to 15,208 mm³. Conversely, the scanner measurements exhibited a narrower error range and a more uniform distribution, suggesting superior measurement stability.
These results indicate that the 3D scanner provides higher repeatability due to its consistent performance, making it more suitable for applications requiring reliable volume estimation. The depth camera, while more variable, demonstrated potential for high-precision measurements under optimal conditions. Future work should investigate algorithm optimization and sensor calibration to reduce variability in depth camera measurements.

5.2. Accuracy Measurement

The comparative mass-based volume measurements obtained using the depth camera and the 3D scanner reveal clear differences in accuracy and reliability between the two systems. As shown by the error trends, the depth camera demonstrates significantly lower relative errors across all tested mass levels compared to the scanner (Figure 12).
For the depth camera, relative errors ranged from 0% to 7.2%. A noticeable improvement in accuracy is observed as the flour mass increases, with the minimum error occurring at 400 g, where the error reaches zero. This indicates strong system calibration and reliable performance at higher volumes. Although some fluctuations are present at lower masses, the overall error values remain low, confirming the suitability of the depth camera for precise volume estimation.
In contrast, the 3D scanner exhibits consistently high relative errors, ranging from 15.6% to 20.4%, regardless of the flour mass. Unlike the depth camera, no decreasing trend is observed with increasing mass. This behavior suggests a systematic overestimation of volume by the scanner, likely due to reconstruction inaccuracies or calibration limitations. The persistence of large errors across all measurements indicates that the 3D scanner lacks robustness for quantitative volume analysis under the tested conditions.
The graphical representation further supports these findings, clearly illustrating the superior performance of the depth camera. While the depth camera shows a stable and improving accuracy trend, the scanner maintains high error levels without significant variation, reinforcing the presence of systematic bias.
Overall, these results demonstrate that the depth camera provides more accurate and reliable volume measurements compared to the 3D scanner. Therefore, the depth camera is more suitable for applications requiring precise volume quantification, whereas the 3D scanner would require further calibration and algorithm refinement before it can be reliably used for such purposes.

5.3. Development Path and Problems

The first question that arises is whether a 3D model created using a depth camera or a 3D scanner can be taken into consideration. For comparison purposes, 3D models must meet some basic eligibility requirements. The prerequisites for forming a model so that it can be taken into consideration are as follows: there is a reference point, the viewing angle is always the same (for the camera), and the object of interest rotates rather than the camera.
When creating the 3D models, the following problems occurred. The models differ in several parameters: some of them have a base while others do not; the scales differ; there is no reference object in the scene for scaling to a common scale; and the models are oriented differently with respect to the global coordinate system. All of this makes it impossible to apply the developed algorithm to models obtained from the multi-view camera system. To solve these problems, image-based models need to be formed again so that they are oriented along one axis of the global coordinate system, meaning that the base is normal to that axis. All models must also have a base, which then serves as a reference object for determining the scale and for scaling the models to a common scale.
After formation of the 3D model, manual refinements of the model and manual processing of the results were performed. The refinements of the model were related to filling in the gaps created due to poor external conditions such as the influence of light, rotation speed and the mode used to create the model itself. The influence of external effects was removed by using filters. This is a rather slow process, prone to frequent errors and is not repeatable. From this background, it is difficult to create a model (template) that will ensure the repeatability of the system.
In order to obtain results in the simplest and fastest way possible, the idea was to use software with built-in filters that would create a contrast between colors (specifically between white and black) with one click. This would isolate the model, which in this case is white, and separate it from the background. The resulting model would be a mesh of polygons, which then had to be filled with material.
The next problem was determining the amount of material needed to obtain a complete 3D solid model. One approach is to load the model into verified 3D printing software and let it report how much material is needed to print the model. In this way, the volumetric mass of the 3D model would be determined. This method proved to be better, faster, and more repeatable, but the accuracy of the results was called into question.
Based on all the identified problems and requirements, software for volumetric determination of the scanned surface was developed and implemented, as described in this paper.

5.4. Impact of Volumetric Determination Software on Energy Efficiency Control of Pneumatic Nozzles—Future Work

The developed software for volumetric determination of scanned surfaces plays a crucial role in enhancing the energy efficiency of pneumatic nozzle systems. By accurately analyzing the three-dimensional geometry and volume of the scanned surface, the software provides precise information about the size, shape, and distribution of the target area. This data enables intelligent and demand-based control of pneumatic nozzles [8,64].
Regarding the use of nozzles in industrial applications, especially in the case of the food (bakery) industry, there are two possibilities that can be taken into consideration:
  • The conveyor belt with products moves while the 3D measurement device is fixed—dynamic model;
  • The conveyor belt with products stops at the moment when it is necessary to perform a 3D measurement—static model.
The initial future research will be carried out based on the static model (the basics of which were developed in this work), after which the dynamic case will be analyzed. For the static case, the EinScan Pro 2X scanner will be used because, during testing, it showed higher repeatability and, in static mode, produced a more accurate 3D model. For the dynamic model, the D435 depth camera will be used because, during testing at minimum rotation speeds of the substrate, it showed higher accuracy, enabling faster adaptive modeling and creation of the necessary data.
The main objective of future research is to enable smart control of the nozzles in order to increase the energy efficiency of the compressed air system based on the estimated volume of powder materials.
Instead of operating the nozzles continuously or at fixed parameters, the control system can dynamically adjust airflow, pressure, and activation time based on the actual volume and surface characteristics. For example, larger volumes may require higher airflow or longer nozzle activation, while smaller or thinner regions can be treated with reduced air consumption. This adaptive control prevents unnecessary compressed air usage and significantly reduces energy waste.
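As a purely illustrative sketch of such demand-based control (none of the names, thresholds, or the proportional mapping come from the paper), the estimated volume could drive nozzle activation as follows:

```python
def nozzle_command(volume_mm3, target_mm3, max_flow=1.0):
    """Illustrative demand-based control: map the estimated excess powder
    volume to a normalized airflow setting and an on/off decision."""
    excess = volume_mm3 - target_mm3
    if excess <= 0:
        return {"active": False, "flow": 0.0}  # target reached: stop blowing
    # proportional airflow, clamped to the nozzle's maximum
    return {"active": True, "flow": min(max_flow, excess / target_mm3)}
```

A real controller would additionally bound the actuation time and incorporate the sensor-based presence detection discussed earlier.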
The software also enables spatial targeting of pneumatic jets. By mapping the scanned surface, the system can activate only the nozzles required for specific regions, rather than the entire nozzle array. This selective operation further improves energy efficiency by eliminating idle airflow in unused zones.
Moreover, real-time feedback from the volumetric analysis allows continuous optimization of nozzle parameters. The system can learn from previous operations and refine control strategies to achieve the desired process outcome with minimum energy input. This closed-loop control approach ensures consistent performance while minimizing compressor load.
Integration of volumetric data with industrial automation systems also supports predictive maintenance. Irregular surface patterns or deviations detected by the software can indicate nozzle misalignment or clogging, preventing inefficient operation and excessive air consumption.
In summary, the developed volumetric determination software directly contributes to energy-efficient pneumatic nozzle control by enabling
  • Demand-based airflow regulation;
  • Selective nozzle activation;
  • Optimized pressure settings;
  • Closed-loop adaptive control;
  • Reduced compressed air consumption.
This intelligent integration leads to lower energy usage, reduced operational costs, and improved sustainability of pneumatic systems.

6. Conclusions

This paper presents a method for volume estimation of powder materials using computer vision technologies. The process of obtaining a 3D model of the observed object by 3D scanning is widely used and researched, and is suitable as an integral element of the presented method. Although the influence of errors in measuring the mass of flour, as well as scale errors, cannot be ignored, the results show the possibility of applying this method to the volume estimation of powder materials. Although this method also consists of a preparatory phase that affects errors, its main part actually takes place from the moment when the mass of powder material is poured onto the surface. Depending on the industrial process, the initial mass is known, and the remaining mass after blowing must be estimated to determine when to stop the nozzles. This enables energy-efficient control by preventing unnecessary air use. Using the equipment of industrial factories together with modern scanners and cameras, the application of the proposed method for volume estimation is possible.
Successful scanning is influenced by factors such as the optical characteristics of the powder material, ambient lighting, camera geometry and position, set parameters and device calibration, post-processing, etc. The scanner provides more consistent and stable measurements, making it more reliable for repeatability. The depth camera can achieve very high precision in some cases, but its results show greater variability.
Both types of devices have shown high sensitivity to powder materials with a reflective surface (such as salt), to the extent that discontinuities, or cracks, are observed on the resulting 3D model. The reliable reconstruction of the resulting volume is also significantly affected by the performance of the computer on which post-processing is performed. By using identical post-processing software on the same computer, identical conditions were set for both devices. Nevertheless, the disadvantage of the method is its dependence on the software approximation of the bottom surface of the bulk flour which rests on the rotating platform plate.
Given the observed advantages and disadvantages, this method can be used for volume estimation of powder materials that, due to the nature of 3D measurement, do not reflect light and are preferably not white in color. Otherwise, additional software processing of the resulting voids in the model must be performed, which introduces a certain error in the final result. In order to obtain valid results in the volume estimation of powder material, it is necessary to perform measurements with appropriate equipment according to the procedure defined in this paper.

Author Contributions

Conceptualization, J.Š. and V.R.; methodology, J.Š.; software, L.K. and B.B.; validation, V.J., Ž.S. and J.Š.; formal analysis, V.R. and L.K.; investigation, V.J., Ž.S. and B.B.; resources, J.Š.; data curation, J.Š., V.R., L.K. and B.B.; writing—original draft preparation, J.Š.; writing—review and editing, J.Š. and V.R.; visualization, J.Š.; supervision, V.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Science Fund of the Republic of Serbia, project “Multimodal multilingual human-machine speech communication—AI SPEAK”, grant no. 7449; by the Ministry of Science, Technological Development and Innovation of the Republic of Serbia (contract no. 451-03-137/2025-03/200156).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article material, and further inquiries can be directed to the corresponding author.

Acknowledgments

During the preparation of this manuscript/study, the authors used ChatGPT based on the GPT-4.1 architecture for the purposes of generating text and graphics. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

The program code for system control (rotary table)
byte directionPin = 2; // Direction pin, for driver
byte stepPin = 3; // Step pin (PWM) for driver
int numberOfSteps = 16000; // Number of steps per revolution; the stepper motor natively has 200 steps, but 1/8 microstepping is used
byte ledPin = 13; // Arduino controller's integrated LED, used for visual indication
int pulseWidthMicros = 20; // Pulse width in microseconds for each microstep
int millisbetweenSteps = 250; // Delay in milliseconds between steps; chosen so that one rotation lasts 50 s
int start = 12; // Button that starts the simulation
const int potPin = A0; // Potentiometer input for manual speed control
int speedDelay = 1000; // Initial speed delay in manual mode, set via the potentiometer
boolean taster = false; // Auxiliary flag for simulation purposes
void setup() {
  Serial.begin(9600);
  Serial.println("Starting StepperTest");
  digitalWrite(ledPin, LOW);
  pinMode(directionPin, OUTPUT);
  pinMode(stepPin, OUTPUT);
  pinMode(ledPin, OUTPUT);
  pinMode(start, INPUT_PULLUP); // used internal pull-up
}
void loop() {
    //statostart=digitalRead(12);
    if(digitalRead(start) == LOW){
  taster = true;
    }
    if(!taster){ // !taster: manual mode, speed read from the potentiometer; taster: fixed rotation lasting 50 s (value entered in code)
    int potValue = analogRead(potPin); // Read potentiometer value
    speedDelay = map(potValue, 0, 1023, 10000, 3000); // Map potentiometer value to speed delay
    digitalWrite(stepPin, HIGH);
    delayMicroseconds(speedDelay);
    digitalWrite(stepPin, LOW);
    delayMicroseconds(speedDelay);
  }
  else{// One rotation is 50 s, value entered in code
    digitalWrite(directionPin, HIGH);
    for(int n = 0; n < numberOfSteps; n++) {
     digitalWrite(stepPin, HIGH);
     delayMicroseconds(pulseWidthMicros);
     digitalWrite(stepPin, LOW);
     delay(millisbetweenSteps);
     digitalWrite(ledPin, !digitalRead(ledPin));
   }
  }
 }

Appendix B

The pseudocode for custom volume estimation
function findGroupingValue(values,numberOfIterations):
   numOfBins=10
   for i in (1,numberOfIterations):
     s=min(values)
     e=max(values)
     binSize=(e-s)/numOfBins
     bins=prepareEmptyBins(numOfBins)
     for value in values:
        position=min(floor((value-s)/binSize),numOfBins-1) //clamp so max(values) falls in the last bin
        bins[position].add(value)
     binSizes=calculateBinSizes(bins)
     fullestBinPosition=positionOfMax(binSizes)
     //keep the fullest bin and its neighbors (clamped at the edges)
     values=bins[fullestBinPosition-1]+bins[fullestBinPosition]+bins[fullestBinPosition+1]
   return mean(values)

function calculateTetrahedronVolume(pointA,pointB,pointC,pointD):
   vec1=vector(pointA,pointB)
   vec2=vector(pointB,pointC)
   vec3=vector(pointC,pointD)
   matrix=[vec1,vec2,vec3]
   return abs(determinant(matrix))/6

function filterVertices(vertices):
   inputs=extractXY(vertices) //drop the height dimension
   labels=DBSCAN(eps=1.5).fit(inputs)
   mostCommonClass=labelWithMostVertices(labels) //the noise label is ignored
   filteredVertices=vertices whose label equals mostCommonClass
   return filteredVertices

function calculateVolume(polygons,numberOfIterations,offsetFactor):
   //polygons—scanned polygons of surface
   //numberOfIterations—controls precision of grouping algorithm calculation. Higher value takes more time but gives better results
   //offsetFactor—offsets thresholding by certain amount to remove noise. In experiments, value 0.02 was used
   vertices=extractVertices(polygons)
   low=min(vertices.z)
   high=max(vertices.z)
   range=high-low

   groupingValue=findGroupingValue(vertices.z,numberOfIterations)
   threshold=groupingValue+offsetFactor*range
   upperVertices=v in vertices if v.z>threshold
   filteredVertices=filterVertices(upperVertices)
   center=mean(filteredVertices)

   filteredPolygons=p in polygons if (vertex in p) in filteredVertices

   volume=0
   for p in filteredPolygons:
     volume=volume+ calculateTetrahedronVolume(p[0],p[1],p[2],center)

   return volume

References

  1. Dobrzański, L.A.; Dobrzański, L.B.; Dobrzańska-Danikiewicz, A.D. Overview of conventional technologies using the powders of metals, their alloys and ceramics in Industry 4.0 stage. J. Achiev. Mater. Manuf. Eng. 2020, 98, 56–85. [Google Scholar] [CrossRef]
  2. Dobson, S.D.; Starr, T.L. Powder characterization and part density for powder bed fusion of 17-4 PH stainless steel. Rapid Prototyp. J. 2021, 27, 53–58. [Google Scholar]
  3. Erkinov, A.; Xadjibayev, A. Use of rotor classifiers in the powder separation process in the food industry. Int. J. Artif. Intell. 2025, 1, 458–460. [Google Scholar]
  4. Sun, X.; Chen, M.; Liu, T.; Zhang, K.; Wei, H.; Zhu, Z.; Liao, W. Characterization, preparation, and reuse of metallic powders for laser powder bed fusion: A review. Int. J. Extrem. Manuf. 2023, 6, 012003. [Google Scholar]
  5. Suhag, R.; Kellil, A.; Razem, M. Factors Influencing Food Powder Flowability. Powders 2024, 3, 65–76. [Google Scholar] [CrossRef]
  6. Kabekkodu, S.N.; Dosen, A.; Blanton, T.N. PDF-5+: A comprehensive Powder Diffraction FileTM for materials characterization. Powder Diffr. 2024, 39, 47–59. [Google Scholar]
  7. Wang, P.; Yang, W. Pneumatic rotary nozzle structure optimization design and airflow characteristics analysis. Adv. Mech. Eng. 2023, 15, 16878132231195016. [Google Scholar] [CrossRef]
  8. Šešlija, D.; Ignjatović, I.; Dudić, S. Increasing the energy efficiency in compressed air systems. In Energy Efficiency—The Innovative Ways for Smart Energy, the Future towards Modern Utilities; IntechOpen: Rijeka, Croatia, 2012; pp. 151–174. [Google Scholar]
  9. Jiang, Z.; Wei, S.; Wang, F. Experimental and CFD Study of Parameters Affecting Glue Spray Atomization. Fluids 2025, 10, 250. [Google Scholar] [CrossRef]
  10. Akseli, I.; Hilden, J.; Katz, J.M.; Kelly, R.C.; Kramer, T.T.; Mao, C.; Osei-Yeboah, F.; Strong, J.C. Reproducibility of the measurement of bulk/tapped density of pharmaceutical powders between pharmaceutical laboratories. J. Pharm. Sci. 2019, 108, 1081–1084. [Google Scholar] [CrossRef]
  11. Felber, C.; Azouma, Y.O.; Reppich, M. Evaluation of analytical methods for the determination of the physicochemical properties of fermented, granulated, and roasted cassava pulp-gari. Food Sci. Nutr. 2017, 5, 46–53. [Google Scholar]
  12. García-Moreno, F.; Banhart, J. Influence of gas pressure and blowing agent content on the formation of aluminum alloy foam. Adv. Eng. Mater. 2021, 23, 2100242. [Google Scholar] [CrossRef]
  13. Zhang, S. High-speed 3D shape measurement with structured light methods: A review. Opt. Lasers Eng. 2018, 106, 119–131. [Google Scholar] [CrossRef]
  14. El Ghazouali, S.; Mhirit, Y.; Oukhrid, A.; Michelucci, U.; Nouira, H. FusionVision: A comprehensive approach of 3D object reconstruction and segmentation from RGB-D cameras using YOLO and fast segment anything. Sensors 2024, 24, 2889. [Google Scholar]
  15. Grimm, T.; Hantke, N.; Iusupova, A.; Sehrt, J.T. Surface analysis in additive manufacturing: A systematic literature review regarding powder bed fusion processes. Surf. Topogr. Metrol. Prop. 2025, 13, 013002. [Google Scholar] [CrossRef]
  16. Vodilka, A.; Kočiško, M.; Pollák, M.; Kaščak, J.; Török, J. Design of 3D Scanning Technology Using a Method with No External Reference Elements and Without Repositioning of the Device Relative to the Object. Appl. Sci. 2025, 15, 4533. [Google Scholar] [CrossRef]
  17. Kantaros, A.; Ganetsos, T.; Petrescu, F.I.T. Three-dimensional printing and 3D scanning: Emerging technologies exhibiting high potential in the field of cultural heritage. Appl. Sci. 2023, 13, 4777. [Google Scholar] [CrossRef]
  18. Haleem, A.; Javaid, M.; Singh, R.P. Exploring the potential of 3D scanning in Industry 4.0: An overview. Int. J. Cogn. Comput. Eng. 2022, 3, 161–171. [Google Scholar] [CrossRef]
  19. Montalti, A.; Ferretti, P.; Santi, G.M. A Cost-Effective Approach for Quality Control in Material Extrusion 3D Printing via 3D Scanning. 2023. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4825393 (accessed on 6 February 2026).
  20. Klimecka-Tatar, D.; Krynke, M. Reverse engineering tools—3D scanning—As support for precise quality control in automated special processes. Procedia Comput. Sci. 2025, 253, 1933–1942. [Google Scholar]
  21. Raza, S.F.; Amjad, M.; Ishfaq, K.; Ahmad, S.; Abdollahian, M. Effect of three-dimensional (3D) scanning factors on minimizing the scanning errors using a white LED light 3D scanner. Appl. Sci. 2023, 13, 3303. [Google Scholar] [CrossRef]
  22. Wang, J.; Yi, T.; Liang, X.; Ueda, T. Application of 3D laser scanning technology using laser radar system to error analysis in the curtain wall construction. Remote Sens. 2023, 15, 64. [Google Scholar]
  23. Mihić, M.; Sigmund, Z.; Završki, I.; Butković, L.L. An analysis of potential uses, limitations and barriers to implementation of 3D scan data for construction management-related use—Are the industry and the technical solutions mature enough for adoption? Buildings 2023, 13, 1184. [Google Scholar] [CrossRef]
  24. Ruiz, R.; Marín Torres, M.T.; Sánchez Allegue, P. Comparative analysis between the main 3D scanning techniques: Photogrammetry, terrestrial laser scanner, and structured light scanner in religious imagery: The case of the Holy Christ of the Blood. ACM J. Comput. Cult. Herit. JOCCH 2021, 15, 1–23. [Google Scholar] [CrossRef]
  25. Gautier, Q.K.; Garrison, T.G.; Rushton, F.; Bouck, N.; Lo, E.; Tueller, P.; Schurgers, C.; Kastner, R. Low-cost 3D scanning systems for cultural heritage documentation. J. Cult. Herit. Manag. Sustain. Dev. 2020, 10, 437–455. [Google Scholar] [CrossRef]
  26. Haleem, A.; Javaid, M. 3D scanning applications in medical field: A literature-based review. Clin. Epidemiol. Glob. Health 2019, 7, 199–210. [Google Scholar]
  27. Javaid, M.; Haleem, A.; Kumar, L. Current status and applications of 3D scanning in dentistry. Clin. Epidemiol. Glob. Health 2019, 7, 228–233. [Google Scholar] [CrossRef]
  28. He, G.; Ricca, J.M.; Dai, A.Z.; Mustahsan, V.M.; Cai, Y.; Bielski, M.R.; Kao, I.; Khan, F.A. A novel bone registration method using impression molding and structured-light 3D scanning technology. J. Orthop. Res. 2022, 40, 2340–2349. [Google Scholar] [CrossRef]
  29. Tian, Y.; Chen, C.; Xu, X.; Wang, J.; Hou, X.; Li, K.; Lu, X.; Shi, H.; Lee, E.S.; Jiang, H.B. A review of 3D printing in dentistry: Technologies, affecting factors, and applications. Scanning 2021, 2021, 9950131. [Google Scholar] [CrossRef]
  30. Wersényi, G.; Scheper, V.; Spagnol, S.; Eixelberger, T.; Wittenberg, T. Cost-effective 3D scanning and printing technologies for outer ear reconstruction: Current status. Head Face Med. 2023, 19, 46. [Google Scholar] [CrossRef] [PubMed]
  31. Javaid, M.; Haleem, A.; Singh, R.P.; Suman, R. Industrial perspectives of 3D scanning: Features, roles and its analytical applications. Sens. Int. 2021, 2, 100114. [Google Scholar] [CrossRef]
  32. Hegedűs-Kuti, J.; Szőlősi, J.; Varga, D.; Abonyi, J.; Andó, M.; Ruppert, T. 3D scanner-based identification of welding defects—Clustering the results of point cloud alignment. Sensors 2023, 23, 2503. [Google Scholar] [PubMed]
  33. Muminović, A.J.; Gierz, Ł.; Rebihić, H.; Smajić, J.; Pervan, N.; Hadžiabdić, V.; Trobradović, M.; Warguła, Ł.; Wieczorek, B.; Łykowski, W.; et al. Enhancing furniture manufacturing with 3D scanning. Appl. Sci. 2024, 14, 4112. [Google Scholar] [CrossRef]
  34. Muminović, A.J.; Smajić, J.; Šarić, I.; Pervan, N. 3D scanning in Industry 4.0. Basic Technol. Models Implement. Ind. 2023, 4, 231–240. [Google Scholar]
  35. Jędrych, M.; Gorzkiewicz, D.; Deja, M.; Chodnicki, M. Application of 3D scanning and computer simulation techniques to assess the shape accuracy of welded components. Int. J. Adv. Manuf. Technol. 2025, 138, 127–135. [Google Scholar] [CrossRef]
  36. Haroon, A.; Lakshman, S.A.; Mundy, M.; Li, B. Autonomous robotic 3D scanning for smart factory planning. Proc. SPIE 2024, 13038, 130380G. [Google Scholar]
  37. Fernandes, D.; Silva, A.; Névoa, R.; Simões, C.; Gonzalez, D.; Guevara, M.; Novais, P.; Monteiro, J.; Melo-Pinto, P. Point-cloud based 3D object detection and classification methods for self-driving applications: A survey and taxonomy. Inf. Fusion 2021, 68, 161–191. [Google Scholar] [CrossRef]
  38. Butzhammer, L.; Müller, A.M.; Hausotte, T. Calibration of 3D scan trajectories for an industrial computed tomography setup with 6-DOF object manipulator system using a single sphere. Meas. Sci. Technol. 2022, 34, 015403. [Google Scholar] [CrossRef]
39. da Silva Santos, K.R.; de Oliveira, W.R.; Villani, E.; Dttmann, A. 3D scanning method for robotized inspection of industrial sealed parts. Comput. Ind. 2023, 147, 103850.
40. Zong, Y.; Liang, J.; Pai, W.; Ye, M.; Ren, M.; Zhao, J.; Tang, Z.; Zhang, J. A high-efficiency and high-precision automatic 3D scanning system for industrial parts based on a scanning path planning algorithm. Opt. Lasers Eng. 2022, 158, 107176.
41. Wang, H.; Wang, Y.; Feng, Z.; Chen, C.; Zong, K. Application of 3D scanning technology in the construction of transmission line cross-spanning. In Proceedings of the 2022 IEEE 6th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Beijing, China, 3–5 October 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 2053–2060.
42. Rausch, C.; Lu, R.; Talebi, S.; Haas, C. Deploying 3D scanning based geometric digital twins during fabrication and assembly in offsite manufacturing. Int. J. Constr. Manag. 2021, 23, 565–578.
43. Matys, M.; Krajčovič, M.; Gabajová, G. Application of 3D scanning for the creation of 3D models suitable for immersive virtual reality. Zarządz. Przedsiębiorstwem 2023, 26, 12–18.
44. Bugeja, A.; Bonanno, M.; Garg, L. 3D scanning in the art & design industry. Mater. Today Proc. 2022, 63, 718–725.
45. Shahid, S.T.; Siddique, S.M.A.; Bhuiyan, M.H.K. Automatic contact-based 3D scanning using articulated robotic arm. arXiv 2024, arXiv:2411.07047.
46. Trebuňa, P.; Mizerák, M.; Rosocha, L. 3D scanning—Technology and reconstruction. Acta Simulatio 2018, 4, 1–6.
47. Bartol, K.; Bojanić, D.; Petković, T.; Pribanić, T. A review of body measurement using 3D scanning. IEEE Access 2021, 9, 67281–67301.
48. Liu, L.; Cai, H.; Tian, M.; Liu, D.; Cheng, Y.; Yin, W. Research on 3D reconstruction technology based on laser measurement. J. Braz. Soc. Mech. Sci. Eng. 2023, 45, 297.
49. Verykokou, S.; Ioannidis, C. An overview on image-based and scanner-based 3D modeling technologies. Sensors 2023, 23, 596.
50. Rustler, L.; Volprecht, V.; Hoffmann, M. Empirical comparison of four stereoscopic depth sensing cameras for robotics applications. arXiv 2025, arXiv:2501.07421.
51. Cutti, A.G.; Santi, M.G.; Hansen, A.H.; Fatone, S.; on behalf of the Residual Limb Shape Capture Group. Accuracy, repeatability, and reproducibility of a hand-held structured-light 3D scanner across multi-site settings in lower limb prosthetics. Sensors 2024, 24, 2350.
52. Leung, W.T.; Fu, S.C.; Chao, C.Y. Detachment of droplets by air jet impingement. Aerosol Sci. Technol. 2017, 51, 467–476.
53. Srndaljčević, V.; Bajči, B.; Šešlija, D.; Šulc, J.; Reljić, V.; Dudić, S.; Milenković, I. Image analysis as a method of quantifying the effectiveness of pneumatic nozzles. In Proceedings of the 3rd International Conference on Electrical, Electronic and Computing Engineering (IcETRAN 2016), Zlatibor, Serbia, 13–16 June 2016; ETRAN Society: Belgrade, Serbia, 2016; ISBN 978-86-7466-618-0.
54. Intel® RealSense™ D400 Series Product Family Datasheet. Available online: https://www.intel.com/content/www/us/en/content-details/841984/intel-realsense-d400-series-product-family-datasheet.html (accessed on 5 February 2026).
55. Grunnet-Jepsen, A.; Sweetser, J.; Khuong, T.; Dorodnicov, S.; Tong, D.; Mulla, O.; Eliyahu, H.; Rev, E.R. Intel® RealSense™ Self-Calibration for D400 Series Depth Cameras, p. 35. Available online: https://dev.realsenseai.com/docs/self-calibration-for-depth-cameras (accessed on 6 February 2026).
56. RecFusion. Available online: https://www.recfusion.net/ (accessed on 6 February 2026).
57. Curto, E.; Araujo, H. An experimental assessment of depth estimation in transparent and translucent scenes for Intel RealSense D415, SR305 and L515. Sensors 2022, 22, 7378.
58. Sonoda, T.; Sweetser, J.N.; Khuong, T.; Brook, S.; Grunnet-Jepsen, A. High-Speed Capture Mode of Intel® RealSense™ Depth Camera D435, p. 16. Available online: https://dev.realsenseai.com/docs/high-speed-capture-mode-of-intel-realsense-depth-camera-d435 (accessed on 6 February 2026).
59. Grunnet-Jepsen, A.; Sweetser, J.N.; Woodfill, J. Best-Known-Methods for Tuning Intel® RealSense™ Depth Cameras D400 Series for Best Performance, p. 11. Available online: https://dev.realsenseai.com/docs/tuning-depth-cameras-for-best-performance (accessed on 6 February 2026).
60. EinScan Pro 2X & HD Series User Manual. Available online: https://www.tomega.lv/wp-content/uploads/2023/05/Shining-3D-Einscan-PRO-2X-2020-Product-info.pdf (accessed on 5 February 2026).
61. EXScan Pro. Available online: https://support.einscan.com/en/support/solutions/articles/60001048840-the-latest-software-for-einscan-pro-2x-v2-pro-hd (accessed on 6 February 2026).
62. EinScan Pro 2X 2020 Specifications. Available online: https://visionminer.com/products/einscan-pro-2x (accessed on 24 May 2025).
63. Kovynev, M.; Zaslavsky, M. Review of photogrammetry techniques for 3D scanning tasks of buildings. In Proceedings of the 28th Conference of FRUCT Association, Moscow, Russia, 27–29 January 2021.
64. Slootmaekers, T.; Slaets, P.; Bartsoen, T.; Malfait, L.; Vanierschot, M. Energy saving opportunities of energy efficient air nozzles. AIP Conf. Proc. 2015, 1702, 190019.
Figure 1. CAD model of the device for automated quantification of the effectiveness of pneumatic nozzles: 1—conveyor belt, 2—blow-off station, 3—image capturing.
Figure 2. 2D measurement results of powder material blown out in the case of a 200 mm nozzle mounting height and 1 mm powder level.
Figure 3. Rotary table for driving powder material—wiring diagram with control electronics.
Figure 4. Camera calibration.
Figure 5. Depth camera measurement: (a) experimental setup; (b) description of the system components.
Figure 6. Scanner calibration.
Figure 7. 3D scanner measurement: (a) sheet with markers; (b) experimental setup; (c) description of the system components.
Figure 8. Flowchart of a single measurement.
Figure 9. Histogram of vertex positions.
Figure 10. Approximation of the model volume.
Figure 11. Mass values on a scale for (a) full cup; (b) empty cup; and (c) bulk material.
Figure 12. Graphical representation of the relationship between relative error and measured mass for (a) the depth camera, and (b) the 3D scanner.
Table 1. Area of blown powder in % at different pressure values and nozzle mounting heights.

| Pressure [bar] | Height [mm] | Powder Thickness [mm] | Area of Blown Powder [%] |
|---|---|---|---|
| 6 | 100 | 1 | 62 |
| 6 | 100 | 1.5 | 50 |
| 6 | 100 | 2 | 48 |
| 6 | 200 | 1 | 48 |
| 6 | 200 | 1.5 | 47 |
| 6 | 200 | 2 | 45 |
| 6 | 300 | 1 | 17 |
| 6 | 300 | 1.5 | 1 |
| 6 | 300 | 2 | 0 |
| 4 | 100 | 1 | 37 |
| 4 | 100 | 1.5 | 29 |
| 4 | 100 | 2 | 28 |
| 4 | 200 | 1 | 31 |
| 4 | 200 | 1.5 | 27 |
| 4 | 200 | 2 | 24 |
| 4 | 300 | 1 | 0 |
| 4 | 300 | 1.5 | 0 |
| 4 | 300 | 2 | 0 |
| 2 | 100 | 1 | 19 |
| 2 | 100 | 1.5 | 13 |
| 2 | 100 | 2 | 10 |
| 2 | 200 | 1 | 9 |
| 2 | 200 | 1.5 | 9 |
| 2 | 200 | 2 | 8 |
| 2 | 300 | 1 | 0 |
| 2 | 300 | 1.5 | 0 |
| 2 | 300 | 2 | 0 |
Table 2. Technical comparison of the devices.

| Feature | Shining 3D EinScan Pro 2X (2020) [62] | Intel RealSense D435 [54] |
|---|---|---|
| Primary purpose | Professional 3D scanning of objects (CAD/reverse engineering) | Robotic navigation, real-time depth sensing, AI applications |
| Scanning technology | Structured light (projected light/lasers) | Stereo vision (two IR cameras + wide IR projector) |
| Accuracy | Up to 0.1 mm (handheld rapid mode) | Error ≤ 2% at up to 2 m |
| Scanning speed | Up to 30 fps; up to 1,500,000 points/s (handheld rapid mode) | Up to 90 frames per second (fps) |
| Working distance | 300–500 mm | Wide: 20 cm to 10 m |
| Alignment modes | Marker alignment | / |
Table 3. Mass sample values for five different measurements of repeatability and accuracy parameters.

| Parameter Assessed | Set 1 | Set 2 | Set 3 | Set 4 | Set 5 |
|---|---|---|---|---|---|
| Repeatability: mass (g) | 500 | 500 | 500 | 500 | 500 |
| Accuracy: mass (g) | 450 | 400 | 300 | 150 | 100 |
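The reference volumes quoted in Tables 4–7 scale linearly with the flour masses listed in Table 3: each one is the 500 g reference volume (952,422.13 mm³) multiplied by the mass ratio, which amounts to assuming a constant bulk density of roughly 0.525 g/cm³. This density is inferred here from the tabulated values, not stated as an explicit parameter; a minimal sketch of the conversion under that assumption:

```python
# Reference volumes scale linearly with flour mass.
# Baseline taken from the tables: 500 g of flour corresponds to 952,422.13 mm^3,
# i.e., an implied bulk density of about 0.525 g/cm^3 (an inference, not a stated value).
REF_VOLUME_500G = 952_422.13  # mm^3

def reference_volume(mass_g: float) -> float:
    """Reference volume in mm^3 for a given flour mass in grams."""
    return round(REF_VOLUME_500G * mass_g / 500, 2)

# Matches the reference-volume column of Tables 5 and 7 for the accuracy masses.
volumes = [reference_volume(m) for m in (100, 150, 300, 400, 450)]
```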
Table 4. Repeatability of volume measurements with a depth camera.

| No. | Flour Mass (g) | Reference Volume (mm³) | Resulting Volume (mm³) | Absolute Deviation from the Mean (mm³) | Relative Error (%) |
|---|---|---|---|---|---|
| 1 | 500 | 952,422.13 | 928,497.4776 | 544.038 | 0.059 |
| 2 | 500 | 952,422.13 | 935,927.6295 | 7974.190 | 0.852 |
| 3 | 500 | 952,422.13 | 912,903.0382 | 15,050.401 | 1.649 |
| 4 | 500 | 952,422.13 | 919,277.5096 | 8675.930 | 0.944 |
| 5 | 500 | 952,422.13 | 943,161.5429 | 15,208.103 | 1.612 |
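The deviation and error columns of Table 4 can be reproduced from the five resulting volumes alone. A short sketch; note that reproducing the printed relative errors requires dividing each deviation by the individual measurement rather than by the mean, which appears to be the convention used:

```python
# Repeatability statistics for the five depth-camera runs (Table 4).
volumes = [928_497.4776, 935_927.6295, 912_903.0382, 919_277.5096, 943_161.5429]  # mm^3

mean_volume = sum(volumes) / len(volumes)  # about 927,953.44 mm^3
# Absolute deviation of each run from the mean, rounded as in the table.
deviations = [round(abs(v - mean_volume), 3) for v in volumes]
# Relative error taken with respect to each individual measurement, in percent.
rel_errors = [round(abs(v - mean_volume) / v * 100, 3) for v in volumes]
```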
Table 5. Volumes obtained by comparative mass measurement with a depth camera.

| No. | Flour Mass (g) | Reference Volume (mm³) | Resulting Volume (mm³) | Aspect Ratio Coefficient | Aspect Ratio (Obtained) | Absolute Error | Error in % |
|---|---|---|---|---|---|---|---|
| 1 | 100 | 190,484.43 | 173,218.5284 | 5 | 5.36 | 0.36 | 7.2 |
| 2 | 150 | 285,726.64 | 284,045.752 | 3.3333 | 3.27 | 0.07 | 2.1 |
| 3 | 300 | 571,453.28 | 534,778.8201 | 1.6667 | 1.74 | 0.07 | 4.2 |
| 4 | 400 | 761,937.70 | 742,087.9855 | 1.2500 | 1.25 | 0.00 | 0 |
| 5 | 450 | 857,179.92 | 801,792.6354 | 1.1111 | 1.16 | 0.05 | 4.5 |
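The aspect-ratio columns of Table 5 can likewise be reconstructed: the coefficient is the expected mass ratio 500 g / mass, and the obtained ratio divides a 500 g baseline volume by each measured volume. The sketch below assumes the baseline is the mean of the five 500 g depth-camera runs in Table 4; that assumption reproduces the printed values but is an inference, not a stated choice:

```python
# Aspect-ratio check for the depth-camera comparative measurements (Table 5).
BASELINE_500G = 927_953.4396  # mean 500 g volume from Table 4 (assumed baseline), mm^3

rows = [  # (flour mass in g, measured volume in mm^3)
    (100, 173_218.5284),
    (150, 284_045.752),
    (300, 534_778.8201),
    (400, 742_087.9855),
    (450, 801_792.6354),
]

results = []
for mass, volume in rows:
    coefficient = 500 / mass            # expected ratio relative to the 500 g sample
    obtained = BASELINE_500G / volume   # ratio actually measured
    abs_error = round(abs(obtained - coefficient), 2)
    pct_error = round(abs_error / coefficient * 100, 1)
    results.append((round(obtained, 2), abs_error, pct_error))
```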
Table 6. Repeatability of volume measurements with a scanner.

| No. | Flour Mass (g) | Reference Volume (mm³) | Resulting Volume (mm³) | Absolute Deviation from the Mean (mm³) | Relative Error (%) |
|---|---|---|---|---|---|
| 1 | 500 | 952,422.13 | 957,085.258 | 14,853.1152 | 1.552 |
| 2 | 500 | 952,422.13 | 984,978.225 | 13,039.8518 | 1.324 |
| 3 | 500 | 952,422.13 | 976,016.185 | 4077.8118 | 0.418 |
| 4 | 500 | 952,422.13 | 960,181.073 | 11,757.3002 | 1.224 |
| 5 | 500 | 952,422.13 | 981,431.125 | 9492.7518 | 0.967 |
Table 7. Volumes obtained by comparative mass measurement with a scanner.

| No. | Flour Mass (g) | Reference Volume (mm³) | Resulting Volume (mm³) | Aspect Ratio Coefficient | Aspect Ratio (Obtained) | Absolute Error | Error in % |
|---|---|---|---|---|---|---|---|
| 1 | 100 | 190,484.43 | 236,722.2054 | 5 | 4.11 | 0.89 | 17.8 |
| 2 | 150 | 285,726.64 | 345,796.1908 | 3.3333 | 2.81 | 0.52 | 15.6 |
| 3 | 300 | 571,453.28 | 732,144.7194 | 1.6667 | 1.33 | 0.34 | 20.4 |
| 4 | 400 | 761,937.70 | 922,106.2502 | 1.2500 | 1.05 | 0.20 | 16 |
| 5 | 450 | 857,179.92 | 1,086,361.694 | 1.1111 | 0.89 | 0.22 | 19.8 |
Share and Cite

Šulc, J.; Reljić, V.; Jurošević, V.; Krstanović, L.; Banjac, B.; Santoši, Ž. The Use of Computer Vision Methodologies to Estimate the Volume of Powdered Substance Shapes. Appl. Sci. 2026, 16, 2053. https://doi.org/10.3390/app16042053