Article

Hazelnut Kernel Percentage Calculation System with DCIoU and Neighborhood Relationship Algorithm

by Sultan Murat Yılmaz 1,2, Serap Çakar Kaman 1,* and Erkan Güler 3
1 Faculty of Computer and Information Sciences, Department of Computer Engineering, Sakarya University, 54187 Sakarya, Turkey
2 Distance Education Application and Research Center, Giresun University, 28200 Giresun, Turkey
3 Faculty of Engineering, Department of Computer Engineering, Giresun University, 28200 Giresun, Turkey
* Author to whom correspondence should be addressed.
Processes 2025, 13(8), 2414; https://doi.org/10.3390/pr13082414
Submission received: 24 June 2025 / Revised: 22 July 2025 / Accepted: 23 July 2025 / Published: 30 July 2025
(This article belongs to the Section Food Process Engineering)

Abstract

Hazelnut (Corylus avellana L.) is a significant global agricultural product due to its high economic and nutritional worth. The traditional methods used to measure the hazelnut kernel percentage for quality assessment are often time-consuming, expensive, and prone to human error. Inaccurate measurements can adversely impact the market value, shelf life, and industrial applications of hazelnuts. This research introduces a novel system for calculating hazelnut kernel percentage that combines a non-destructive X-ray imaging technique with deep learning methods to assess hazelnut quality more efficiently and reliably. An image dataset of hazelnut kernels was developed using X-ray technology, and defective areas are identified with the YOLOv7 architecture. Additionally, a novel bounding box regression technique called DCIoU and a Neighborhood Relationship Algorithm are introduced to enhance object detection capabilities and to improve the selection of the target box with greater precision, respectively. The performance of the proposed methods was evaluated on both the created hazelnut dataset and the COCO-128 dataset. The results indicate that the system can serve as a valuable tool for measuring hazelnut kernel percentages by accurately identifying defects in hazelnuts.

1. Introduction

Hazelnut (Corylus avellana L.) is recognized as one of the key agricultural commodities worldwide, occupying the third position in the global nut market with yearly production exceeding 863 thousand tons [1,2]. Furthermore, hazelnut fruit is distinguished by its high nutritional value and beneficial health properties [3,4]. With an average yearly production of 776,046 tons [5,6], Turkey leads the world in hazelnut production, which is strategically important for the nation’s agriculture and economy [7,8].
The quality of the product plays a crucial role in defining the economic and nutritional value of agricultural goods, influencing both consumer choices and buyer acceptance [9]. The high quality of fruit has made hazelnuts a sought-after commodity in both domestic and global markets [10,11]. Consequently, producers must adhere to quality standards, and industries must establish efficient evaluation systems to identify defects in hazelnuts [12].
Hazelnut quality is primarily determined by kernel percentage calculation (Equation (1)), typically performed during post-harvest sales in hazelnut purchasing centers [13]. In these centers, hazelnut kernel percentage calculation is performed mainly using traditional techniques, relying heavily on the expertise of hazelnut specialists [14]. However, since this approach depends on human judgment, it may result in subjective errors and complicate the identification of defects that are not easily seen by the naked eye (Figure 1). These circumstances adversely impact critical aspects such as storage duration, industrial applications, and equitable pricing of hazelnuts.
Although there are various methods for determining hazelnut quality, it has been stated in [11,15] that the most effective and widely used method is the healthy hazelnut kernel ratio (Equation (1)). According to [16], the mentioned kernel ratio has been identified as the key factor in evaluating quality based on the relationships among the different characteristics of hazelnuts. The healthy kernel ratio for hazelnuts is calculated by comparing the weight of acceptable hazelnut kernels with the weight of hazelnuts in shell, where the result is expressed as a percentage (Equation (1)). This process is referred to as the hazelnut kernel percentage measurement when assessing hazelnut quality. The determination of standards and compliance with these standards are of great importance in hazelnut kernel percentage measurement. In this regard, the Organization for Economic Co-operation and Development (OECD) and the United Nations Economic Commission for Europe (UNECE) have outlined minimum market and quality standards for hazelnuts in their reports [17,18]. In addition, hazelnut purchasing organizations, such as Turkish Grain Board and Fiskobirlik, might establish their own purchasing standards to account for variations in the annual growth cycles of hazelnuts.
$\text{Hazelnut Kernel Percentage (\%)} = \dfrac{\text{Kernel Weight}}{\text{Hazelnut Weight}} \times 100$
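As a brief illustration (the figures are hypothetical and not taken from the dataset): a 250 g in-shell sample that yields 130 g of acceptable kernels gives a kernel percentage of

$\dfrac{130\ \text{g}}{250\ \text{g}} \times 100 = 52\%.$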
For an accurate hazelnut kernel percentage calculation, it is essential to identify defects thoroughly and without mistakes. The percentage of defects in hazelnuts fluctuates annually from 1% to 15%, influenced by factors such as growing conditions and location [9]. More than 1 million tons of hazelnuts are produced per year according to the Food and Agriculture Organization Statistical Database (FAOSTAT), indicating that the amount of defective hazelnuts could range from 10,000 to 150,000 tons.
Defects in hazelnuts can occur in both the shell and the inner fruit [19,20,21]. Shell defects in hazelnuts are easily identifiable through visual inspection, whereas defects in the kernel may not be easily noticeable in certain instances. The final phase of the hazelnut kernel’s development involves the complete formation of the brown outer membrane. During this developmental process, external factors can cause damage, which may sometimes remain hidden beneath the membrane (Figure 1) [12,22]. When defects are not visible to the naked eye, hazelnuts can be evaluated by cutting them. However, flaws located between the membrane and the kernel might not be identified through this approach either [9]. Conventional methods for hazelnut kernel percentage measurement tend to be time-consuming, costly, and limited in their effectiveness, failing to identify lipid oxidation and internal kernel defects [23,24]. Moreover, traditional techniques are based on human involvement, making it difficult to standardize the error rate and control it effectively [25]. Additionally, most conventional methods involve destructive, time-consuming, and expertise-dependent procedures, making them inadequate in terms of both practicality and efficiency for large-scale analyses [26]. For these reasons, the use of non-destructive evaluation methods is becoming increasingly important, particularly for detecting internal defects in agricultural products that are difficult to identify through visual inspection [27]. Systems supported by non-destructive food evaluation technologies offer significant advantages over traditional approaches, such as high accuracy, repeatability, processing speed, and operator-independent decision-making, thereby providing a more reliable and efficient alternative [28]. In this context, the use of non-destructive methods has become an inevitable necessity for the accurate and standardized detection of defects in hazelnut kernels [29].
Non-destructive techniques are vital to maintain the commercial value of hazelnuts by enhancing the precision of quality assessment. Advancements in technology have become essential for food quality monitoring and have started to phase out conventional methods that are labor intensive and rely on human judgment [23,30]. Non-destructive methods provide a more efficient and standard quality assessment [24,29]. Optical (spectral), acoustic, ultrasonographic, radiographic (X-ray, computerized tomography), and electrical-electronic methods stand out among the non-destructive methods widely used in food processing [25,31]. These methods provide better features in the quality control of food products. In particular, X-ray imaging technique successfully detects damages on the outer and inner surfaces of fruits [32]. In calculating the percentage of hazelnut kernels, identifying visible and hidden defects on the surfaces of the kernels is crucial; it is essential to detect flaws that may not be visible to the human eye (Figure 2). It has also been noted that X-ray technology can be utilized in the food sector without negatively impacting human health [33]. The integration of non-destructive food processing technologies along with machine learning, computer vision, and artificial intelligence is on the rise [34,35,36]. These technologies are transforming the efficiency of processes by providing substantial accuracy, reliability, and stability in assessing food quality and safety [37]. The use of this integration to classify hazelnuts and evaluate quality will be highly beneficial.
Deep learning-based methods have been widely used in computer vision and object recognition fields in recent years, and successful results have been obtained [38,39,40,41]. The application of these methods to the hazelnut classification and object recognition processes provides significant advantages in terms of labor, time, and cost. In [42], 17 shelled hazelnut varieties were classified using a convolutional neural network (CNN) due to its high success in the field of computer vision. Another study used a mobile application built with the InceptionV3 and ResNet50 models to classify different types of hazelnuts [43]. Additionally, a system was created to identify unwanted hazelnuts during the selection process, using InceptionV3 and EfficientNet models [44]. The authors of [45] developed a dataset for three varieties of hazelnuts utilizing BigTransfer (BiT) models and conducted the classification procedure. Numerous studies are available focusing on the classification of hazelnut kernels and shelled hazelnuts by using RGB cameras that employ deep learning, machine learning, and computer vision techniques [46,47,48,49,50]. There are also works in which hazelnuts are classified according to their quality using non-destructive quality assessment methods [51,52,53,54]. Current studies mainly classify hazelnuts solely based on their general defects. However, what truly matters in the hazelnut kernel percentage calculation process is accurately identifying and classifying the specific areas where defects are present.
In existing literature, there are no studies that employ non-destructive quality assessment and deep learning techniques for estimating hazelnut kernel percentage using X-ray imaging. Our research utilizes X-ray imaging to identify defective regions within hazelnuts. Furthermore, there is a lack of any publicly available X-ray dataset on hazelnuts in academic or open data sources. In this regard, one of the novel aspects of our work is the development of a dataset of X-ray hazelnut kernels.
Recent advancements in deep learning have greatly contributed to the successful application of non-destructive food processing techniques [34,35]. In particular, bounding box-based deep learning approaches are widely preferred due to their advantages, such as high accuracy, low error rate, fast processing time, wide range of applications, and the ability to effectively use structural information [55,56]. These characteristics are especially crucial in tasks like object detection and localization. However, despite the benefits of bounding box-based approaches, several significant problems remain to be addressed [56,57]. These problems typically revolve around identifying suitable bounding boxes [58] and reducing the error rates in bounding box regression [59,60]. Accurate creation of target boxes and minimizing regression errors have a direct impact on both the precision of object localization and the overall effectiveness of the model [61]. In this regard, numerous methods have been introduced to enhance the precision of bounding box localization. For instance, the Object-Aware Multiple Instance Learning (OA-MIL) method has been developed to improve the accuracy of bounding box determination [62]. The authors of [63] provide a novel technique aimed at enhancing the accuracy of bounding box detection, whereas the work in [64] presents an RBP-Pose model to compute object pose estimation and surface projections more precisely. The Alpha-Refine (AR) technique, which is a versatile and responsive enhancement module, has been utilized to improve the precision of bounding box estimations [65]. Moreover, metrics such as Intersection over Union (IoU) [66], Generalized Intersection over Union (GIoU) [67], Distance Intersection over Union (DIoU) [68], and Complete Intersection over Union (CIoU) [68] stand out among the bounding box regression methods widely used in object detection. These approaches enhance the accuracy and reliability of object detection models and provide more effective solutions in non-destructive testing and other visual perception applications.
The main research trends in bounding box-based object detectors are focused on improving bounding box regression and developing new methods for accurate bounding box estimation. In this study, a hazelnut kernel percentage calculation system is presented using X-ray imaging and deep learning methods to determine hazelnut quality. While most of the existing literature employs general visual analysis methods based on RGB or surface images [42,43,44,45,46,47,48], this study utilizes X-ray imaging and deep learning to perform regional defect detection and segmentation on hazelnut kernels. Additionally, a novel bounding box regression method, called DCIoU, is introduced to address the object localization problem in bounding box-based deep learning methods. DCIoU provides more precise box regression in cases where conventional IoU-based methods fall short. Furthermore, a Neighborhood Relationship Algorithm is proposed and integrated into the YOLOv7 architecture to enable more effective target box selection during the object detection process. This algorithm enhances the decision-making mechanism by considering contextual relationships in the vicinity of the bounding boxes. The proposed methods were tested on an X-ray image dataset created for hazelnut defect detection, and the results were compared. In addition, experiments were conducted on the COCO-128 [69] dataset to evaluate the generalizability of the DCIoU method. In this respect, the study contributes to the literature by promoting the use of X-ray-assisted deep learning in non-destructive quality control of food products and by proposing new regression and selection strategies in bounding box-based object detection.
The main contributions of this article, in accordance with the information given above, are as follows:
  • An X-ray hazelnut image dataset has been created.
  • All hazelnut defects are detected in the hazelnut kernel percentage calculation.
  • A new bounding box regression method, DCIoU, has been developed for anchor-based deep learning methods.
  • A Neighborhood Relationship Algorithm that can make a more appropriate bounding box selection for bounding box-based deep learning methods has been developed.
  • A system model that minimizes errors that may occur while determining hazelnut kernel percentage has been proposed by performing the above-mentioned operations.

2. Materials and Methods

2.1. Hazelnut Quality Assessment System

In conventional methods for assessing hazelnut quality, an expert selects a random 250 g sample from the batch that will undergo kernel percentage calculation [13]. The selected hazelnuts are removed from their shells and processed into hazelnut kernels. Kernels that are deemed unhealthy (such as rotten, wrinkled, or abortive) are identified by visual inspection. However, specific criteria can vary depending on the guidelines set by the purchasing center. Hazelnut quality is determined by calculating the ratio of healthy kernel weight to total batch weight.
Traditional techniques can be quite time-consuming and have a significant margin of error. Assessments performed by the hazelnut expert face challenges such as errors introduced by human factors and the difficulty in identifying certain hazelnut defects, which can result in inaccurate measurements. In addition, a lack of standardization in assessments may lead to inconsistent results when different experts carry out measurements. These factors make conventional kernel percentage measurement techniques inadequate for determining quality. This study provides a system that detects hazelnut defects effectively and quickly to calculate kernel percentage (Figure 3).
As illustrated in Figure 3, the hazelnut quality assessment system comprises four distinct stages. Initially, in the random hazelnut selection process, samples are collected from the hazelnuts for which the kernel percentage is to be evaluated. The apparatus shown in Figure 4 is a manually operated tool used to obtain homogeneous samples from hazelnuts stored in bulk or bags. Thanks to the holes on its surface, it allows product collection from different points, ensuring a reliable sample for analysis. The apparatus shown in Figure 4 is used to select hazelnut samples, which are then placed in a container for processing. The gathered hazelnuts are mixed to achieve a uniform distribution. Subsequently, a random sample of 250 g is chosen for quality evaluation and undergoes the kernel percentage calculation process. In the hazelnut cracking step, the randomly selected hazelnuts are processed on a specialized hazelnut cracking machine, which results in shells and kernels. During this phase, kernels and shells are separated, and kernels are directed to the X-ray imaging stage to identify healthy hazelnuts.
In the “X-ray Imaging” phase, images of the sample hazelnuts are produced. Finally, in the classification and kernel percentage calculation process, the X-ray images are classified into non-defective, slightly defective, and defective hazelnuts using the proposed deep learning techniques. The classified hazelnuts are included in the kernel percentage calculation based on the acceptance criteria set by the hazelnut purchasing centers. In the final phase, the moisture content of the hazelnuts is measured with an appropriate moisture measurement device. If both the moisture level and the hazelnut kernel percentage meet the standards accepted by the hazelnut purchasing center, the hazelnuts are accepted.
At hazelnut purchasing centers, the implemented system eliminates frequent errors encountered in conventional kernel percentage calculation methods. This allows for a more precise and reliable assessment of key factors, including the storage durability of hazelnuts, the industrial process they may serve, and fair pricing.

2.2. Dataset Creation

In this research, the primary hazelnut varieties Foşa, Palaz, Çakıldak, Tombul, and Sivri, cultivated in Samsun, Ordu, and Giresun, which are the main production regions of Turkey [6], were used to create the dataset. From each city, a total of 75 kg of hazelnuts were collected in 5 kg batches per variety, including non-defective, slightly defective, and defective samples. Due to the overrepresentation of non-defective hazelnuts among the collected samples, a balanced subset was created to ensure healthier model training, including a similar number of samples for the defective, slightly defective, and non-defective classes. Accordingly, a total of 1319 X-ray images were included in the dataset (Figure 5).
The collected hazelnuts were processed into kernel form. Each hazelnut kernel was scanned using an X-ray device, producing images with a resolution of 640 × 640 pixels. These X-ray images were categorized into three classes—non-defective, slightly defective, and defective—and labeled accordingly to construct the dataset. A soft X-ray technique was employed during the imaging process to obtain more precise and suitable results. The X-ray imaging device operates within an adjustable voltage range of 40–150 kV and is optimized for low-dose applications. The system incorporates a digital flat-panel detector with an approximate pixel size of 100 µm. With a Detective Quantum Efficiency (DQE) of 70% and 16-bit depth, the detector enables the acquisition of high-contrast and sharp images even at low dose levels. Of the dataset, 70% of the images were allocated for training and 30% for testing. A 5-fold cross-validation was performed to determine the most suitable training and test sets.
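A minimal sketch of the 70/30 split and 5-fold cross-validation described above is given below; the file layout, random seeds, and the way candidate folds are compared are assumptions made for illustration rather than the authors' exact procedure.

```python
from pathlib import Path
from sklearn.model_selection import KFold, train_test_split

# assumed directory layout; the public dataset link is given in the Data Availability Statement
image_paths = sorted(Path("xray_hazelnut/images").glob("*.png"))

# 70% of the images for training, 30% for testing
train_files, test_files = train_test_split(image_paths, test_size=0.30, random_state=42)

# 5-fold cross-validation over the training portion to compare candidate splits
kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(kf.split(train_files)):
    fold_train = [train_files[i] for i in train_idx]
    fold_val = [train_files[i] for i in val_idx]
    # train YOLOv7 on fold_train, validate on fold_val, and keep the best-performing split
```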

2.3. Preprocessing Stage

In order to ignore the background outside the hazelnut, the Otsu filter was used on all hazelnut images, and areas outside the hazelnut were converted to white. The following filters were sequentially applied to the images in the dataset to facilitate learning: Contrast Limited Adaptive Histogram Equalization (CLAHE), Gaussian Blur, and Anisotropic Diffusion (Figure 6). These filters highlight defective regions and provide more accurate and effective results in bounding box detection. Moreover, the transformation of the outside regions to white significantly reduces the likelihood of incorrectly predicting boundary boxes, thus improving the performance of the utilized deep learning model. In this way, the developed Neighborhood Relationship Algorithm helps to detect bounding boxes more effectively. Thus, the hazelnut image to be used is prepared for the deep learning process.
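The preprocessing chain can be sketched with standard OpenCV calls as follows; the filter parameters (CLAHE clip limit, kernel sizes, diffusion settings) and the threshold polarity are assumptions, since the exact values are not reported, and cv2.ximgproc.anisotropicDiffusion requires the opencv-contrib package.

```python
import cv2
import numpy as np

def preprocess(path: str) -> np.ndarray:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)  # 640 x 640 X-ray image

    # 1) Otsu threshold: mask the hazelnut and paint everything outside it white
    #    (use THRESH_BINARY_INV instead if the kernel appears darker than the background)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    gray = np.where(mask == 0, 255, gray).astype(np.uint8)

    # 2) CLAHE to equalize local contrast and highlight defective regions
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    gray = clahe.apply(gray)

    # 3) Gaussian blur to suppress high-frequency noise
    gray = cv2.GaussianBlur(gray, (5, 5), 0)

    # 4) Anisotropic diffusion (edge-preserving smoothing); expects a 3-channel image
    bgr = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    return cv2.ximgproc.anisotropicDiffusion(bgr, 0.1, 20, 10)  # alpha, K, iterations
```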

2.4. Deep Learning Method

In the developed system, the YOLOv7 [70] deep learning method was used for the classification of hazelnuts, since it stands out as one of the methods with the highest performance among bounding box-based deep learning techniques [71]. It was also preferred because it integrates well with the developed Neighborhood Relationship Algorithm. The YOLO architecture divides the image into grids and draws boxes to classify each region. The probability of finding an object in each region is calculated, and a confidence score is determined for each box. The confidence score expresses, as a percentage, how closely the object found in each anchor matches the predicted object, and the classification process is carried out according to this score.

2.5. Bounding Box Regression

The primary role of object detection is to identify all relevant targets in a given image, categorize the detected objects, and generate bounding boxes for their localization. In bounding box-based detector methods, improving the bounding box regression is a crucial element that influences the efficacy of object recognition [72]. Currently, solutions are being sought to address the localization challenges associated with bounding box-based methods. Bounding box regression is applied to adjust the position and dimensions of the predicted bounding box, making it a vital part of the object detection workflow. The development of an appropriate loss function for bounding box regression has garnered significant attention. One of the most commonly employed approaches in both the literature and bounding box-based detectors is the IoU metric. However, the IoU metric can negatively impact bounding box regression, leading to slow convergence and inaccurate regression outcomes. When two objects do not overlap or when one fully encloses another, the IoU acts as a loss function with a zero derivative. This results in IoU not being able to accurately depict the spatial relationship between objects and limits its role in the optimization process [73]. Consequently, estimating target bounding boxes correctly becomes challenging. GIoU offers a partial resolution to some of the issues with the IoU metric in situations involving non-overlapping boxes, yet it fails to remedy the problems of slow convergence and erroneous regression. DIoU incorporates the normalized distance between the predicted bounding box and the actual target bounding box, thereby enabling faster convergence during training. CIoU further enhances performance by integrating three key geometric components into the regression loss calculation. Additionally, Focal EIoU [74] is a loss function designed to enhance both localization accuracy and convergence speed. Recent innovations like SIoU [75] and LCornerIoU [76] strive to further improve the precision and effectiveness of bounding box regression. Despite the proliferation of IoU-based bounding box regression loss functions, a notable disparity still exists between the predicted outcomes and the actual bounding boxes during the optimization phase.

2.5.1. Intersection over Union (IoU)

IoU represents a basic metric used to evaluate the overlap region between the ground-truth box ($b^g$) and the target box ($b^p$). As a distance metric, IoU stands out as a loss function with properties such as non-negativity, identity, and symmetry. Moreover, since it is a scale-independent metric, the IoU value does not change even if the dimensions of the objects change, as long as the spatial overlap ratio between them remains the same. The mathematical formula for the IoU loss is given in Equation (3).

$IoU = \dfrac{|b^g \cap b^p|}{|b^g \cup b^p|} = \dfrac{I}{U}$

$L_{IoU} = 1 - IoU$

In order to maintain generality in the derivation process and minimize the occurrence of negative signs, each bounding box occurring in an image can be modeled as a four-dimensional vector (Equation (4)).

$b = (x, y, w, h)$

The derivative of $L_{IoU}$ with respect to the box parameters $b$ can be calculated as in Equation (5).

$\dfrac{\partial L_{IoU}}{\partial b} = \dfrac{\partial \left( 1 - \frac{I}{U} \right)}{\partial b}$

When $b^g$ and $b^p$ do not overlap, the IoU value is zero, in which case the relative position relationship (i.e., adjacency, distance) between the predicted bounding box and the actual bounding box cannot be determined directly. In these cases, since the IoU value is zero, the result of Equation (5) is also zero, leading to the vanishing gradient problem. In addition, the IoU loss becomes stationary at a certain point, making it difficult to advance the optimization process. In order to overcome this problem, various variants based on the IoU loss have been developed. Methods such as GIoU, DIoU, and CIoU improve on the IoU metric by adding penalty terms ($P$), and can be represented in the functional form of Equation (6).

$L_{IoU\text{-}based} = 1 - IoU + P(b_i, \tilde{b}_i)$
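For concreteness, a minimal PyTorch sketch of the IoU computation and the generic "1 − IoU + penalty" loss of Equation (6) is shown below; it assumes boxes given as (center x, center y, width, height) and is an illustration rather than the implementation used in YOLOv7.

```python
import torch

def box_iou_xywh(bg: torch.Tensor, bp: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """IoU of boxes given as (center_x, center_y, w, h); last dimension has size 4."""
    # convert to corner coordinates (x1, y1, x2, y2)
    g1, g2 = bg[..., :2] - bg[..., 2:] / 2, bg[..., :2] + bg[..., 2:] / 2
    p1, p2 = bp[..., :2] - bp[..., 2:] / 2, bp[..., :2] + bp[..., 2:] / 2

    # intersection width/height, clamped at zero for non-overlapping boxes
    wh = (torch.min(g2, p2) - torch.max(g1, p1)).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    union = bg[..., 2] * bg[..., 3] + bp[..., 2] * bp[..., 3] - inter
    return inter / (union + eps)

def iou_based_loss(bg, bp, penalty=None):
    """Generic form of Equation (6): L = 1 - IoU + P(b, b~)."""
    p = penalty(bg, bp) if penalty is not None else 0.0
    return 1.0 - box_iou_xywh(bg, bp) + p
```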

2.5.2. Generalized Intersection over Union (GIoU)

The GIoU loss function was developed to overcome the limitations of IoU when there is no overlap between the boxes. This method aims to speed up the learning process by adding a penalty term based on the smallest enclosing box ($b^e$) covering the predicted and ground-truth boxes. However, when one box fully covers the other, the $b^e$ box remains fixed, so GIoU behaves the same as IoU and the learning process may be negatively affected. The penalty term for GIoU is given in Equation (7).

$P_{GIoU} = \dfrac{|b^e \setminus (b^g \cup b^p)|}{|b^e|}$

$GIoU = IoU - P_{GIoU}$

$L_{GIoU} = 1 - GIoU$

2.5.3. Distance Intersection over Union (DIoU) and Complete Intersection over Union (CIoU)

According to DIoU, the key quantities in determining the overlap between two bounding boxes are the distance between the centers of the two boxes ($\rho(b_c^g, b_c^p)$) and the diagonal length ($b_d^e$) of the smallest rectangle covering the two boxes. This metric takes into account not only the overlap ratio of the boxes but also their spatial alignment. The DIoU penalty term is given in Equation (10).

$P_{DIoU} = \dfrac{\rho^2(b_c^g, b_c^p)}{(b_d^e)^2}$

$DIoU = IoU - P_{DIoU}$

$L_{DIoU} = 1 - DIoU$
It can be observed that the DIoU loss tries to reduce the distance between the centers of two bounding boxes. This feature is designed to optimize not only the overlap ratio of the boxes but also their spatial proximity. However, when the center points of two boxes overlap, the penalty term becomes zero, and the DIoU loss turns into the IoU loss.
CIoU works similarly to DIoU but goes a step further by additionally taking into account an aspect-ratio term ($a v$) based on the widths and heights of the boxes. The CIoU penalty term is given in Equation (13).

$P_{CIoU} = \dfrac{\rho^2(b_c^g, b_c^p)}{(b_d^e)^2} + a v$

$v = \dfrac{4}{\pi^2} \left( \arctan \dfrac{w^g}{h^g} - \arctan \dfrac{w^p}{h^p} \right)^2$

$a = \begin{cases} 0, & \text{if } IoU < 0.5 \\ \dfrac{v}{(1 - IoU) + v}, & \text{if } IoU \geq 0.5 \end{cases}$

$CIoU = IoU - P_{CIoU}$

$L_{CIoU} = 1 - CIoU$
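A hedged sketch of the DIoU/CIoU penalties defined in Equations (10)–(17) is given below, reusing box_iou_xywh from the earlier sketch; variable names and the epsilon terms are illustrative rather than taken from any reference implementation.

```python
import math
import torch

def diou_ciou_loss(bg: torch.Tensor, bp: torch.Tensor, use_ciou: bool = True,
                   eps: float = 1e-7) -> torch.Tensor:
    iou = box_iou_xywh(bg, bp)

    # squared distance between box centers: rho^2(b_c^g, b_c^p)
    center_dist2 = (bg[..., 0] - bp[..., 0]) ** 2 + (bg[..., 1] - bp[..., 1]) ** 2

    # squared diagonal of the smallest enclosing box b^e
    g1, g2 = bg[..., :2] - bg[..., 2:] / 2, bg[..., :2] + bg[..., 2:] / 2
    p1, p2 = bp[..., :2] - bp[..., 2:] / 2, bp[..., :2] + bp[..., 2:] / 2
    e_wh = torch.max(g2, p2) - torch.min(g1, p1)
    diag2 = (e_wh ** 2).sum(dim=-1) + eps

    penalty = center_dist2 / diag2                       # P_DIoU
    if use_ciou:                                         # add the aspect-ratio term a * v
        v = (4 / math.pi ** 2) * (torch.atan(bg[..., 2] / bg[..., 3])
                                  - torch.atan(bp[..., 2] / bp[..., 3])) ** 2
        a = torch.where(iou >= 0.5, v / ((1 - iou) + v + eps), torch.zeros_like(v))
        penalty = penalty + a * v                        # P_CIoU
    return 1.0 - (iou - penalty)                         # L_DIoU / L_CIoU = 1 - (IoU - P)
```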

2.6. The Proposed Methods

In this study, a novel system is proposed to improve the accuracy of hazelnut kernel percentage estimation from X-ray images (see Section 2.1). The proposed system integrates two innovative methods aimed at enhancing detection accuracy and the overall performance of the model. The first method, Distance Center Intersection over Union (DCIoU), is an improved version of traditional IoU-based loss functions. This approach reduces regression losses between predicted and ground-truth bounding boxes, enabling more precise convergence during training. The second method, the Neighborhood Relationship Algorithm, facilitates the selection of predicted bounding boxes and is also effectively utilized during the segmentation stage. The combined application of these two methods significantly improves the accuracy and robustness of the system for detecting defects and evaluating hazelnut kernel percentage in X-ray images.

2.6.1. Distance Center Intersection over Union (DCIoU)

Bounding box regression is calculated by object detectors between the ground-truth ($b^g$) and target ($b^p$) boxes. Figure 7 illustrates the $b^g$, $b^p$, and $b^e$ (enclosing box) boxes, where $b_c^g$, $b_c^p$, and $b_c^e$ denote the ground-truth box center, the prediction box center, and the enclosing box center, respectively. Bounding boxes are rectangular and are mathematically expressed as $(x, y, w, h)$ (Equation (4)), where $x$ and $y$ are the coordinates of the box and $w$ and $h$ are its width and height.
In CIoU, when $IoU < 0.5$ (Equation (15)), the aspect-ratio penalty term has no effect and CIoU behaves like DIoU. IoU, GIoU, DIoU, and CIoU are effective bounding box regression methods widely used in the literature. Despite their advantages, these approaches still exhibit limitations in certain situations (Figure 8). In this study, a new method called DCIoU is proposed to improve bounding box regression. Equation (20) is used for DCIoU.
$P_{DCIoU} = \dfrac{\left( \rho^2(b_c^g, b_c^e),\ \rho^2(b_c^p, b_c^e) \right)_{\max}}{\rho^2(b_c^g, b_c^e) + \rho^2(b_c^p, b_c^e) + \rho^2(b_c^g, b_c^p)}$

$P_{DCIoU} = \dfrac{\left( \rho^2(b_d^g, b_d^p) \right)_{\min}}{\left( \rho^2(b_d^g, b_d^p) \right)_{\max}}\, \lambda \quad (\text{if the centers are equal},\ \lambda = 0.01)$

$DCIoU = (IoU + P_{DCIoU})_{\text{normalize}}$

$L_{DCIoU} = 1 - DCIoU$
In previously developed IoU methods, the overlap ratio of the two boxes or the differences between their widths and heights were used to solve the regression problem between bounding boxes. These approaches can sometimes render the bounding box regression ineffective and lead to an exploding gradient problem. To mitigate this issue, the center of the smallest enclosing box of the two boxes ($b_c^e$) is incorporated into the procedure.
In the DCIoU method, as described in Algorithm 1, the first step is to determine whether the two boxes are identical. If the two boxes are equal, then DCIoU is set to 1. Following this, the centers of the boxes are checked; if they coincide, the ratio of the squared diagonal lengths is used so that DCIoU remains more informative than IoU (Equation (19)). In all other cases, in addition to the existing IoU value, the DCIoU method adds the ratio of the larger of the distances $\rho(b_c^g, b_c^e)$ and $\rho(b_c^p, b_c^e)$ to the sum of the distances $\rho(b_c^g, b_c^e)$, $\rho(b_c^p, b_c^e)$, and $\rho(b_c^g, b_c^p)$. The distances between points are calculated using the Euclidean distance formula. The DCIoU value specified in Equation (18) can range over $(0, 1.5]$; this range is then normalized to fall within $(0, 1)$ so that it can be used in all object detection frameworks (Equation (20)).
Algorithm 1 DCIoU Algorithm
Input: Prediction box $b^p$ and ground-truth box $b^g$ coordinates: $b^g = (x^g, y^g, h^g, w^g)$, $b^p = (x^p, y^p, h^p, w^p)$
Output: DCIoU
 1: Standardize the coordinates of the boxes to indicate the upper-left and lower-right corners: $b^g = (x_1^g, y_1^g, x_2^g, y_2^g)$, $b^p = (x_1^p, y_1^p, x_2^p, y_2^p)$, $b^e = (x_1^e, y_1^e, x_2^e, y_2^e)$ (coordinates of the smallest box enclosing $b^g$ and $b^p$)
 2: Calculate the areas of $b^g$ and $b^p$: $A^g = (x_2^g - x_1^g) \times (y_2^g - y_1^g)$, $A^p = (x_2^p - x_1^p) \times (y_2^p - y_1^p)$
 3: Calculate the intersection and union areas of $b^g$ and $b^p$: $W_I = \min(x_2^g, x_2^p) - \max(x_1^g, x_1^p)$, $H_I = \min(y_2^g, y_2^p) - \max(y_1^g, y_1^p)$, $Inter = W_I \times H_I$, $Union = A^g + A^p - Inter$
 4: $IoU = Inter / Union$
 5: if $b^g = b^p$: return $DCIoU = 1.0$
 6: Calculate the box centers: $box\_center\_x = (x_1 + x_2)/2$, $box\_center\_y = (y_1 + y_2)/2$, $b_c = (box\_center\_x, box\_center\_y)$
 7: Calculate the diagonal lengths of the boxes: $b_d^g = \lVert (x_1^g, y_1^g) - (x_2^g, y_2^g) \rVert$, $b_d^p = \lVert (x_1^p, y_1^p) - (x_2^p, y_2^p) \rVert$
 8: if $box\_center^g = box\_center^p$: return $DCIoU = IoU + \dfrac{(\rho^2(b_d^g, b_d^p))_{\min}}{(\rho^2(b_d^g, b_d^p))_{\max}} \times \lambda$
 9: Calculate DCIoU: $DCIoU = IoU + \dfrac{\left( \rho^2(b_c^g, b_c^e),\ \rho^2(b_c^p, b_c^e) \right)_{\max}}{\rho^2(b_c^g, b_c^e) + \rho^2(b_c^p, b_c^e) + \rho^2(b_c^g, b_c^p)}$
10: Normalize the $DCIoU$ value and finish: return $normalized\_DCIoU$
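A plain-Python sketch of Algorithm 1 is given below for two axis-aligned boxes in corner coordinates $(x_1, y_1, x_2, y_2)$; the function and variable names are illustrative, and dividing by 1.5 is one assumed way of realizing the $(0, 1.5] \rightarrow (0, 1)$ normalization described above.

```python
def dciou(bg, bp, lam: float = 0.01, eps: float = 1e-9) -> float:
    (x1g, y1g, x2g, y2g), (x1p, y1p, x2p, y2p) = bg, bp

    # intersection / union -> IoU (steps 2-4)
    wi = max(0.0, min(x2g, x2p) - max(x1g, x1p))
    hi = max(0.0, min(y2g, y2p) - max(y1g, y1p))
    inter = wi * hi
    union = (x2g - x1g) * (y2g - y1g) + (x2p - x1p) * (y2p - y1p) - inter
    iou = inter / (union + eps)

    if bg == bp:                                   # step 5: identical boxes
        return 1.0

    # centers of the two boxes and of the smallest enclosing box b^e (steps 1 and 6)
    cg = ((x1g + x2g) / 2, (y1g + y2g) / 2)
    cp = ((x1p + x2p) / 2, (y1p + y2p) / 2)
    ce = ((min(x1g, x1p) + max(x2g, x2p)) / 2, (min(y1g, y1p) + max(y2g, y2p)) / 2)

    d2 = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2   # squared Euclidean distance

    if cg == cp:                                   # step 8: equal centers, diagonal ratio
        dg2 = d2((x1g, y1g), (x2g, y2g))
        dp2 = d2((x1p, y1p), (x2p, y2p))
        score = iou + (min(dg2, dp2) / (max(dg2, dp2) + eps)) * lam
    else:                                          # step 9: center-distance penalty term
        num = max(d2(cg, ce), d2(cp, ce))
        den = d2(cg, ce) + d2(cp, ce) + d2(cg, cp) + eps
        score = iou + num / den

    return score / 1.5                             # step 10: normalize (0, 1.5] -> (0, 1]
```

The corresponding training loss is then $L_{DCIoU} = 1 - DCIoU$.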
The dynamic penalty term of DCIoU optimizes the center alignment of the boxes, providing a more accurate and reliable measurement in cases where IoU and its derivatives are insufficient. This approach evaluates both the shape compatibility of the bounding boxes and the distances between their centers, helping the model to converge more accurately and to achieve a lower loss value. In particular, as shown in Figure 8a, when the bounding boxes have the same centers and aspect ratio, and also when one box completely covers the other, it causes GIoU, DIoU, and CIoU to regress to the IoU value. A similar situation is also seen in Figure 8d. Analyzing Figure 8b reveals that CIoU regresses to DIoU because the boxes maintain the same aspect ratio. On the other hand, in Figure 8c, overlapping centers of the boxes cause the DIoU to drop to the IoU level. However, the loss value derived from the proposed method in all situations is greater than that of other methods, which assists in reducing regression errors and minimizing the loss function, particularly in challenging cases.

2.6.2. Neighborhood Relationship Algorithm

Most of the improvements in existing bounding box-based detectors focus on estimating the most suitable target boxes and reducing regression errors. However, determining the appropriate bounding boxes as well as selecting the most suitable among multiple detected boxes is critical to object detection performance.
YOLOv7 detects objects of different sizes by using multi-scale feature maps. From a $640 \times 640$ input image, the YOLO model produces three scaled outputs with grid sizes of $80 \times 80$, $40 \times 40$, and $20 \times 20$, respectively. With three anchor boxes per grid cell, the predictions from these three scales contain a total of $(80 \times 80 + 40 \times 40 + 20 \times 20) \times 3 = 25{,}200$ tensor elements. Each tensor element contains information structured as $[x, y, w, h, obj_{skr}, cls_1, cls_2, \ldots, cls_n]$. The parameters $x$, $y$, $w$, $h$ represent the location and size of the bounding box. The value $obj_{skr}$ expresses the probability that an object is present within the relevant bounding box and takes a value between 0 and 1. The $cls$ values form a probability distribution, where each $cls_i$ corresponds to the probability that the bounding box belongs to a certain class and varies between 0 and 1.
Selecting the most appropriate bounding boxes from the tensors generated by YOLO model is crucial to enhancing the accuracy and efficiency of the model. In this regard, the Neighborhood Relationship Algorithm (Algorithm 2) is introduced, which leverages the K-Means algorithm to achieve a more precise determination of bounding boxes during object estimation. The objective of this proposed approach is to perform both image segmentation and to select the most suitable bounding box. The Neighborhood Relationship Algorithm not only accurately identifies the target bounding boxes, but also facilitates a more effective box selection by integrating it into the Non-Maximum Suppression (NMS) process.
NMS is a method used to prevent duplicate predictions in object detection. First, predictions with confidence scores below a specified threshold are eliminated. IoU values of the remaining boxes are calculated, and those that exceed the specified threshold are removed. This process is repeated for all boxes, leaving only the most reliable predictions for each object.
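For reference, a compact sketch of this standard NMS procedure is shown below (the modified selection described next replaces the plain IoU comparison); the thresholds are typical values, not those used in the paper.

```python
import torch
from torchvision.ops import box_iou

def nms(boxes: torch.Tensor, scores: torch.Tensor,
        score_thr: float = 0.25, iou_thr: float = 0.45) -> torch.Tensor:
    """Return indices of boxes (x1, y1, x2, y2) kept after non-maximum suppression."""
    idx = torch.nonzero(scores > score_thr).flatten()     # 1) drop low-confidence boxes
    idx = idx[scores[idx].argsort(descending=True)]       # sort the rest by confidence

    kept = []
    while idx.numel() > 0:
        best = idx[0]
        kept.append(int(best))
        if idx.numel() == 1:
            break
        ious = box_iou(boxes[best].unsqueeze(0), boxes[idx[1:]]).squeeze(0)
        idx = idx[1:][ious <= iou_thr]                     # 2) suppress overlapping boxes
    return torch.tensor(kept, dtype=torch.long)
```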
The developed Neighborhood Relationship Algorithm uses the K-Means algorithm to determine the boundary boxes more consistently and to take into account the spatial relationships. While NMS keeps only the highest scoring box, this algorithm analyzes the distance, direction, and distribution of neighboring boxes to make more balanced selections. Thus, in scenes with dense objects, incorrectly eliminated boxes are preserved, and object losses are reduced.
The Neighborhood Relationship Algorithm initially transforms the images obtained from the preprocessing phase into two-dimensional (2D) matrices in tensor format, assigning pixel values to these matrices (Figure 9b). A primary objective of the preprocessing phase is to foster a more balanced clustering procedure by achieving homogeneity in the image histogram. This approach helps to produce more reliable clustering outcomes by mitigating intensity discrepancies. Clustering is executed based on the pixel intensities using the K-Means algorithm on the matrix where the pixel values have been assigned. Following the clustering process, each element of the matrix is assigned the cluster values to which it belongs. Subsequently, starting from the first element of the matrix that has been assigned cluster values, its neighbors are identified, as illustrated in Figure 9a.
Algorithm 2 Neighborhood Relationship Algorithm
Input: The image obtained during the preprocessing stage
Output: Tensor
 1: Conversion of Images to Tensor Format
   • Convert the preprocessed images into two-dimensional (2D) matrices in tensor format.
 2: K-Means Clustering Process
   • Cluster the matrix with the K-Means method according to pixel values (k = 9).
   • Assign to each pixel the value of the cluster it belongs to.
 3: Start Neighborhood Check
   • Start from the first element of the matrix.
   • For the current element, check all of its neighbors (top, bottom, right, left, and diagonals).
   • For each neighboring element:
     - Check whether it belongs to the same cluster.
     - If it belongs to the same cluster, add it to the list and check its neighbors as well.
   • Create a control matrix so that processed elements are not checked again, and assign the value −1 to checked elements in this matrix.
   • Store the pixel coordinates and cluster values belonging to the same cluster in the tensor (Figure 9d).
 4: Process All Elements
   • Repeat step 3 until all elements of the matrix have been checked.
 5: Create bounding boxes
Throughout the control process, a new control matrix is generated in a specified size to enhance the algorithm’s performance. This mechanism prevents the need to reprocess matrix elements that have already been examined (Figure 9c). As illustrated in Figure 9d, each element in the matrix was systematically scanned, and the coordinates of element groups associated with each cluster were recorded in a tensor. This tensor served as the primary data source for creating bounding boxes. However, to reduce the adverse effects of ambient noise that may affect X-ray images during the bounding box formation, a threshold value was established to disregard very small clusters. This approach mitigates the risk of erroneous bounding boxes arising from ambient noise and enhances the imaging performance of the system in a more efficient and reliable way. Finally, target bounding box points are derived from the boundary values of each cluster point in the tensor (Figure 10). The method developed significantly improves the precision and efficiency of boundary box determination.
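A hedged sketch of this stage is shown below: K-Means on pixel intensities (k = 9), grouping of spatially neighboring same-cluster pixels, a small-region threshold against noise, and bounding boxes from each group's extreme coordinates. Here scipy.ndimage.label with an 8-connected structuring element stands in for the explicit neighborhood check and control matrix, and the parameter values (for example, min_pixels) are assumptions.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def neighborhood_boxes(gray: np.ndarray, k: int = 9, min_pixels: int = 50):
    """gray: 2D preprocessed image. Returns a list of (x1, y1, x2, y2, cluster_id)."""
    # 1) cluster pixel intensities and write the cluster label into each cell
    labels = KMeans(n_clusters=k, n_init=10, random_state=0) \
        .fit_predict(gray.reshape(-1, 1)).reshape(gray.shape)

    boxes = []
    connectivity = np.ones((3, 3), dtype=int)             # 8-connectivity (incl. diagonals)
    for c in range(k):
        # 2) group same-cluster pixels that are spatial neighbors
        regions, n_regions = ndimage.label(labels == c, structure=connectivity)
        for r in range(1, n_regions + 1):
            ys, xs = np.nonzero(regions == r)
            if ys.size < min_pixels:                       # ignore tiny, noise-like clusters
                continue
            # 3) bounding box from the extreme coordinates of the region
            boxes.append((int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()), c))
    return boxes
```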
The predicted bounding boxes generated from the Neighborhood Relationship Algorithm are utilized in the NMS algorithm within YOLO. NMS ranks these predicted boxes based on their confidence score values, discarding all boxes that fall below a specified threshold. A similarity assessment is performed between the boxes that meet the threshold criteria and those identified by the Neighborhood Relationship Algorithm using the DCIoU method. Through this evaluation, the boxes that surpass the IoU threshold are defined as the final outputs of the NMS algorithm. The segmentation procedure concludes by marking the pixel values associated with the cluster label within the bounding boxes derived from the Neighborhood Relationship Algorithm in the final output. This method minimizes the occurrence of overlapping predicted boxes, a common issue in object detection, while ensuring the selection of the most suitable bounding boxes. Consequently, the model predictions are optimized, resulting in more consistent and reliable results.

3. Results and Discussion

3.1. Tests for the DCIoU Method

The evaluation of the proposed DCIoU method was conducted in two phases. The initial phase involved simulation tests referenced in [68,77]. The subsequent phase assessed the effectiveness of the IoU metrics based on YOLOv7 training results from both the X-ray hazelnut dataset and the COCO-128 dataset.
In order to assess the simulation experiment proposed by Zheng et al. [68], the performances of the loss functions IoU, GIoU, DIoU, CIoU, and DCIoU were compared. For this experiment, real bounding boxes were generated, centered at (0.5, 0.5) with an area of 1/32. These boxes were created with seven distinct aspect ratios: 1:4, 1:3, 1:2, 1:1, 2:1, 3:1, and 4:1. Subsequently, 5000 anchor points were evenly distributed in a circular region centered at (0.5, 0.5) with a radius of 0.5 units. At each anchor point, 49 anchor boxes were placed, using seven different scales (1/32, 1/24, 3/64, 1/16, 1/12, 3/32, and 1/8) alongside the seven aspect ratios mentioned above. Consequently, a total of 1,715,000 regression samples were generated (7 × 7 × 7 × 5000), with each anchor box matched to the corresponding ground-truth bounding box (Figure 11a). In the experiment, the Adam optimization method with a learning rate of 0.01 was used, and the model was trained for 120 epochs. The experimental results are shown in Figure 11b. It is observed that the IoU loss experiences the slowest decline and optimization halts after a certain point. Although the GIoU loss demonstrates an improved convergence compared to the IoU, it remains at high values. DIoU and CIoU losses show a faster decrease trend and perform better compared to IoU and GIoU. The DCIoU function shows the fastest convergence, achieving lower loss values considerably earlier than the other methods. These findings demonstrate that DCIoU stands out as the most effective loss function, especially in applications where bounding box regression is critical, such as object detection. Previous IoU-based methods, including IoU, GIoU, DIoU, and CIoU, generally calculate regression losses based on overlap ratios or dimension differences between bounding boxes. However, as shown in Figure 8, these methods may fail to overcome regression errors under various overlap scenarios, leading to stagnation or slow convergence during model training. To overcome these limitations, the proposed DCIoU incorporates not only the distance between the centers of the boxes but also the minimum enclosing area and its centroid in the function computation. Consequently, DCIoU provides a more stable learning process and achieves higher accuracy in object detection.
In another simulation experiment [77], the position of the prediction box is updated iteratively using gradient descent with various loss functions, including IoU, GIoU, DIoU, CIoU, and DCIoU. The prediction box at iteration $t$ is denoted as $B^t = (x^t, y^t, w^t, h^t)$ and is obtained from the previous prediction box $B^{t-1} = (x^{t-1}, y^{t-1}, w^{t-1}, h^{t-1})$ by stepping along the gradient of the regression loss function with respect to $B^{t-1}$. The purpose of the simulation is to analyze the convergence behaviors of the various regression methods and to evaluate their convergence times and accuracy. In this regard, the simulation was carried out for a total of 200 iterations.
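A hedged sketch of this iterative box-regression simulation is shown below: a prediction box is moved by gradient descent on a chosen IoU-based loss toward a fixed ground-truth box; the learning rate, box initialization, and optimizer details are assumptions for illustration.

```python
import torch

def simulate(loss_fn, b_init, b_gt, iters: int = 200, lr: float = 0.01):
    """loss_fn(bg, bp) -> scalar loss; boxes are (x, y, w, h) tensors."""
    bp = torch.tensor(b_init, dtype=torch.float32, requires_grad=True)   # prediction box B^0
    bg = torch.tensor(b_gt, dtype=torch.float32)                         # fixed ground-truth box
    opt = torch.optim.SGD([bp], lr=lr)                                   # plain gradient descent
    history = []
    for t in range(iters):
        opt.zero_grad()
        loss = loss_fn(bg, bp)      # e.g. iou_based_loss or the DIoU/CIoU sketch above
        loss.backward()             # gradient of the regression loss w.r.t. B^{t-1}
        opt.step()                  # B^t = B^{t-1} - lr * dL/dB
        history.append(loss.item())
    return bp.detach(), history
```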
The results presented in Figure 12 show that the predicted boxes gradually converge to the target boxes as the number of iterations of the IoU, GIoU, DIoU, CIoU, and DCIoU metrics increases. Each metric was observed to have different convergence dynamics and the DCIoU metric provided higher accuracy of positioning compared to the other metrics at the end of 200 iterations. This observation indicates that DCIoU facilitates a more efficient convergence during the optimization process and can be regarded as a more reliable regression metric in applications such as object detection.
The DCIoU method offers several advantages over the CIoU, DIoU, and GIoU methods. Primarily, it enables more accurate alignment of object boxes by assessing the distance between box centers in a more detailed and discrete manner. While CIoU and DIoU methods rely on width and height distance calculations, DCIoU enhances the loss function optimization by employing direct distance measurements between box centers. Furthermore, it exerts a stronger influence on box dimensions and proportions, thereby managing regression losses more effectively. It also improves the overall model performance by more accurately evaluating the non-intersection cases that GIoU addresses.
DCIoU facilitates faster and potentially more efficient optimization during the model training process by better accounting for differences in box centers and sizes. As a result, the DCIoU method tends to offer improved alignment and size estimation of object boxes compared to other metrics, based on the results observed in this study.
The X-ray hazelnut dataset and the COCO-128 dataset were trained using the latest PyTorch (version 2.7.1) implementations of YOLOv7. Throughout all experiments, the CSPDarkNet network architecture served as the backbone network of YOLOv7. Training was conducted with standard parameters and followed the suggested training protocols. The X-ray hazelnut dataset and the COCO-128 dataset were trained for 150 and 200 iterations, respectively.
As presented in Table 1, precision refers to the proportion of true positives among all instances predicted as positive by the model, while recall indicates the proportion of actual positive instances that were correctly identified. mAP@50 (mean average precision at an IoU threshold of 0.5) represents the average accuracy when the predicted bounding boxes overlap with the ground-truth boxes by at least 50%. In contrast, mAP@50-95 evaluates the model’s overall performance more comprehensively by calculating the mean average precision across multiple IoU thresholds ranging from 0.5 to 0.95.
Table 1 presents the precision, recall, mAP@50, and mAP@50-95 results of the trainings performed on the COCO-128 and X-ray hazelnut datasets using YOLOv7. The obtained results indicate that the DCIoU method exhibits higher accuracy and overall performance compared to the other metrics in both datasets. In the X-ray hazelnut dataset, DCIoU reaches the highest accuracy rates with 0.9448 precision, 0.8860 recall, and 0.9373 mAP@50 values, while it exhibits a competitive performance with 0.7901 mAP@50-95 value. In the COCO-128 dataset, DCIoU achieves the best performance among all metrics by reaching high values such as 0.9348 precision, 0.8816 recall, 0.9632 mAP@50, and 0.7860 mAP@50-95. These findings reveal that DCIoU provides a significant performance over other methods in terms of accuracy, recall rate, and overall optimization success in the object detection process. The notable improvement in mAP@50-95 values suggests that DCIoU achieves better optimization in aligning boxes and positioning objects. These results suggest that DCIoU, within the scope of this study, demonstrated favorable performance on both the X-ray hazelnut and COCO-128 datasets, and has the potential to improve object detection accuracy when compared to other IoU-based metrics.

3.2. Neighborhood Relationship Algorithm Test

The Neighborhood Relationship Algorithm enhances the prediction performance of the model by identifying the most suitable bounding boxes for object detection. This method enables the selection of the most accurate and reliable predictions made by the model. Furthermore, the tensor structure created as a result of this algorithm facilitates an efficient segmentation process within the target box.
The classification of hazelnut images into non-defective, slightly defective, and defective categories was conducted on the X-ray dataset with YOLOv7, involving object recognition and segmentation operations. In Figure 13, three different approaches are compared: defect detection using only YOLOv7, improved defect detection with the introduced DCIoU and Neighborhood Relationship Algorithm, and the stage where segmentation is applied in conjunction with the first two methods.
In the first column of Figure 13, only the YOLOv7 model is employed, which enables defect detection in specific areas, but struggles to accurately define the boundaries of some defects. This limitation arises because YOLOv7 focuses only on object recognition. The middle column of Figure 13 shows an improvement in detection accuracy through the proposed DCIoU and Neighborhood Relationship Algorithm, which results in a more precise identification of defect areas. This method outperforms the traditional YOLOv7 model and provides higher accuracy rates, particularly for the identification of minor defects. In the last column of Figure 13, it is evident that the segmentation process inherent to the algorithm developed clarifies the defective areas, showing the exact locations of the defects with greater accuracy. Through the segmentation process, more detailed information is gained about not only the presence of defects but also their size and shape.
The predicted values on the images indicate that the developed method (middle and last columns of Figure 13) achieves higher accuracy rates compared to YOLOv7. The segmentation-supported approach ensures a more reliable and sensitive defect detection process in quality control applications. The findings demonstrate that the introduced DCIoU method and the Neighborhood Relationship Algorithm show exceptional performance in both object recognition and segmentation accuracy.

4. Conclusions

Conventional techniques for determining hazelnut kernel percentage are labor intensive and prone to human error. The longevity of hazelnuts in storage, the industrial processes in which they can be used, and fair pricing are all directly impacted by an accurate assessment of hazelnut quality. In this study, the importance of sophisticated deep learning-based methods in hazelnut kernel percentage calculation procedures for defect detection and segmentation is emphasized. A system that provides a point of view for calculating hazelnut kernel percentage has been suggested. The enhanced YOLOv7 deep learning technique and X-ray imaging, one of the non-destructive food processing techniques, have been utilized in the suggested system to identify hazelnut defects. The DCIoU method, designed for bounding box-based deep learning techniques, has led to more precise and efficient detections. Additionally, the Neighborhood Relationship Algorithm has been employed to choose the most appropriate bounding box from those produced by the model, allowing for segmentation within the identified region. The application of the DCIoU method has demonstrated an improvement in the performance of the deep learning technique. A new perspective on NMS processes has been introduced through the Neighborhood Relationship Algorithm, along with a more effective filtering method, demonstrating that segmentation tasks can also be integrated into object detection. The results show that the suggested system can serve as a more efficient and reliable alternative for the calculation of the hazelnut kernel percentage. The integration of non-destructive methods with deep learning and artificial intelligence applications has significantly enhanced the effectiveness of food quality assessment processes. Considering that defect rates can vary between 1% and 15% depending on environmental and production-related factors, the proposed approach offers a reliable solution for standardizing quality assessment. As presented in Table 1, the proposed system achieved a precision of 94.44%, a recall of 88.60%, an mAP@50 of 93.73%, and an mAP@50–95 of 79.04%. These results indicate that the system can successfully detect a large proportion of potential defects. Moreover, the proposed system is highly efficient in terms of processing time. Once the sample for hazelnut kernel percentage calculation is prepared, real-time defect detection can be performed through X-ray imaging, thereby significantly accelerating the hazelnut kernel percentage estimation process.
Future studies may enable the development of fully digital systems designed for 3D modeling of hazelnut kernels, kernel weight estimation, and internal ratio measurement. These systems are expected to provide greater precision and efficiency in industrial quality control processes. The use of portable X-ray devices could significantly enhance the applicability of such systems across different production environments. Testing the developed techniques on various industrial datasets would allow evaluation of their applicability, and the model’s suitability for real-time large-scale production lines could be analyzed. Model performance may be further improved through advanced hyperparameter optimization and comparative analyses using different deep learning architectures. Moreover, the DCIoU loss function and the Neighborhood Relationship Algorithm proposed in this study, due to their accuracy-enhancing structures, hold promise for future systems. On the other hand, the requirement for X-ray imaging capabilities and high-performance computing resources remains a key limitation that may increase the initial investment cost and limit industrial-scale adoption. However, the system is expected to offset this cost over long-term usage.

Author Contributions

Conceptualization, S.M.Y., S.Ç.K. and E.G.; methodology, S.M.Y. and S.Ç.K.; software, S.M.Y.; validation, S.M.Y., S.Ç.K. and E.G.; formal analysis, S.M.Y., S.Ç.K. and E.G.; investigation, S.M.Y.; resources, S.M.Y.; data curation, S.M.Y.; writing—original draft preparation, S.M.Y.; writing—review and editing, E.G.; visualization, S.M.Y.; supervision, S.Ç.K. and E.G.; project administration, S.M.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The dataset used in this study was created by the researchers and has been made publicly available. It can be accessed via the following link: https://drive.google.com/drive/folders/1LdklaSNOFdhw4W8MgsiW-bgdbnRNT_lZ?usp=sharing (accessed on 22 July 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
OECD: Organization for Economic Co-operation and Development
UNECE: United Nations Economic Commission for Europe
BiT: BigTransfer
OA-MIL: Object-Aware Multiple Instance Learning
AR: Alpha-Refine
IoU: Intersection over Union
GIoU: Generalized Intersection over Union
DIoU: Distance Intersection over Union
CIoU: Complete Intersection over Union
DCIoU: Distance Center Intersection over Union
CLAHE: Contrast Limited Adaptive Histogram Equalization
NMS: Non-Maximum Suppression

References

  1. Król, K.; Gantner, M. Morphological Traits and Chemical Composition of Hazelnut from Different Geographical Origins: A Review. Agriculture 2020, 10, 375. [Google Scholar] [CrossRef]
  2. Romero-Aroca, A.; Rovira, M.; Cristofori, V.; Silvestri, C. Hazelnut Kernel Size and Industrial Aptitude. Agriculture 2021, 11, 1115. [Google Scholar] [CrossRef]
  3. Cabo, S.; Aires, A.; Carvalho, R.; Vilela, A.; Pascual-Seva, N.; Silva, A.P.; Gonçalves, B. Kaolin, Ascophyllum nodosum and salicylic acid mitigate effects of summer stress improving hazelnut quality. J. Sci. Food Agric. 2021, 101, 459–475. [Google Scholar] [CrossRef] [PubMed]
  4. Poșta, D.S.; Radulov, I.; Cocan, I.; Berbecea, A.A.; Alexa, E.; Hotea, I.; Iordănescu, O.A.; Băla, M.; Cântar, I.C.; Rózsa, S.; et al. Hazelnuts (Corylus avellana L.) Spontaneous Flora West Part Romania: A Source of Nutrients for Locals. Agronomy 2022, 12, 214. [Google Scholar] [CrossRef]
  5. FAOSTAT. Food and Agricultural Organization of the United Nations. 2023. Available online: http://www.fao.org/faostat/en/#data/QC (accessed on 10 April 2024).
  6. Pacchiarelli, A.; Silvestri, C.; Muganu, M.; Cristofori, V. Influence of the Plant Training System on Yield and Nut Traits of European Hazelnut (Corylus avellana L.) Cultiv. Nocchione. Agronomy 2025, 15, 345. [Google Scholar] [CrossRef]
  7. Ayyildiz, E.; Yildiz, A.; Taşkın, A.; Ozkan, C. An interval valued Pythagorean fuzzy AHP integrated quality function deployment methodology for hazelnut production in Turkey. Expert Syst. Appl. 2023, 231, 120708. [Google Scholar] [CrossRef]
  8. Günay, H.F.; Uyğun, U.; Yardımcıoğlu, F. Evaluation of Fiscal Support on Hazelnut Production in Point of Efficiency and Farmer Content. Sak. Univ. J. Econ. 2020, 9, 299–332. [Google Scholar]
  9. Valeriano, T.; Fischer, K.; Ginaldi, F.; Giustarini, L.; Castello, G.; Bregaglio, S. Rotten Hazelnuts Prediction via Simulation Modeling—A Case Study on the Turkish Hazelnut Sector. Front. Plant Sci. 2022, 13, 766493. [Google Scholar] [CrossRef] [PubMed]
  10. Bak, T.; Karadeniz, T. Effects of Branch Number on Quality Traits and Yield Properties of European Hazelnut (Corylus avellana L.). Agriculture 2021, 11, 437. [Google Scholar] [CrossRef]
  11. Karakaya, O.; Yaman, M.; Balta, F.; Yilmaz, M.; Balta, M.F. Assessment of genetic diversity revealed by morphological traits and ISSR markers in hazelnut germplasm (Corylus avellana L.) of the Eastern Black Sea Region of Turkey. Genet. Resour. Crop Evol. 2023, 70, 525–537. [Google Scholar] [CrossRef]
  12. Spataro, F.; Rosso, F.; Genova, G.; Caligiani, A. Untargeted UHPLC-HRMS as a new tool for the detection of rotten defect markers in hazelnuts of different origins. Microchem. J. 2024, 197, 109743. [Google Scholar] [CrossRef]
  13. Turan, A. Determination of Some Traits on Arrival of Hazelnut at Purchasing Points: The Case of Giresun. Akad. Ziraat Derg. 2023, 12, 99–108. [Google Scholar] [CrossRef]
  14. Ferrão, A.C.; Guiné, R.P.F.; Ramalhosa, E.; Lopes, A.; Rodrigues, C.; Martins, H.; Gonçalves, R.; Correia, P.M.R. Chemical and Physical Properties of Some Hazelnut Varieties Grown in Portugal. Agronomy 2021, 11, 1476. [Google Scholar] [CrossRef]
  15. Bostan, S.Z.; Karakaya, O. Morphological, chemical, and molecular characterization of a new late-leafing and high fruit quality hazelnut (Corylus avellana L.) Genotype. Genet. Resour. Crop Evol. 2024, 71, 5113–5126. [Google Scholar] [CrossRef]
  16. İşbakan, H.; Bostan, S.Z. Relationships between plant morphological traits, nut yield and quality traits in hazelnut. Ordu Univ. J. Sci. Technol. 2020, 10, 32–45. [Google Scholar]
  17. OECD. Organisation for Economic Co-Operation and Development. 2011. Available online: https://www.oecd.org/en/publications/inshell-hazelnuts-and-hazelnut-kernels_9789264166721-en-fr.html (accessed on 10 April 2024).
  18. UNECE. United Nations Economic Commission for Europe. 2010. Available online: https://unece.org/fileadmin/DAM/trade/agr/standard/dry/dry_e/04HazelnutKernels_e.pdf (accessed on 10 April 2024).
  19. Bostan, S.Z. Nut and kernel defects in hazelnut. Akad. Ziraat Derg. 2019, 8, 157–166. [Google Scholar] [CrossRef]
  20. Gavilán-CuiCui, G.; Padilla-Contreras, D.; Manterola-Barroso, C.; Morina, F.; Meriño-Gergichevich, C. Antioxidant Performance in Hazelnut (Corylus avellana L.) Cultivar Shells Is Substantially Influenced by Season and Locality. Agronomy 2024, 14, 1412. [Google Scholar] [CrossRef]
  21. Kan, E.; Akgün, M.; Turan, A. Effect of Brown Marmorated Stink Bug [Halyomorpha halys Stal (Hemiptera: Pentatomidae)] on Physical Traits of Hazelnuts. Black Sea J. Sci. 2024, 14, 1654–1664. [Google Scholar] [CrossRef]
  22. Silvestri, C.; Bacchetta, L.; Bellincontro, A.; Cristofori, V. Advances in cultivar choice, hazelnut orchard management, and nut storage to enhance product quality and safety: An overview. J. Sci. Food Agric. 2020, 101, 27–43. [Google Scholar] [CrossRef] [PubMed]
  23. Pannico, A.; Schouten, R.E.; Basile, B.; Romano, R.; Woltering, E.J.; Cirillo, C. Non-destructive detection of flawed hazelnut kernels and lipid oxidation assessment using NIR spectroscopy. J. Food Eng. 2015, 160, 42–48. [Google Scholar] [CrossRef]
  24. Mahanti, N.K.; Pandiselvam, R.; Kothakota, A.; Ishwarya, S.P.; Chakraborty, S.K.; Kumar, M.; Cozzolino, D. Emerging non-destructive imaging techniques for fruit damage detection: Image processing and analysis. Trends Food Sci. Technol. 2022, 120, 418–438. [Google Scholar] [CrossRef]
  25. Abasi, S.; Minaei, S.; Jamshidi, B.; Fathi, D. Dedicated non-destructive devices for food quality measurement: A review. Trends Food Sci. Technol. 2018, 78, 197–205. [Google Scholar] [CrossRef]
  26. He, Y.; Xiao, Q.; Bai, X.; Zhou, L.; Liu, F.; Zhang, C. A Recent progress of nondestructive techniques for fruits damage inspection: A review. Crit. Rev. Food Sci. Nutr. 2021, 62, 5476–5494. [Google Scholar] [CrossRef] [PubMed]
  27. Gupta, M.; Khan, M.A.; Butola, R.; Singari, R.M. Advances in applications of Non-Destructive Testing (NDT): A review. Adv. Mater. Process. Technol. 2021, 8, 2286–2307. [Google Scholar] [CrossRef]
  28. Li, L.; Jia, X.; Fan, K. Recent advance in nondestructive imaging technology for detecting quality of fruits and vegetables: A review. Crit. Rev. Food Sci. Nutr. 2024, 1–19. [Google Scholar] [CrossRef] [PubMed]
  29. Ropodi, A.I.; Panagou, E.Z.; Nychas, G.-J.E. Data mining derived from food analyses using non-invasive/non-destructive analytical techniques; determination of food authenticity, quality & safety in tandem with computer science disciplines. Trends Food Sci. Technol. 2016, 50, 11–25. [Google Scholar] [CrossRef]
  30. Torres-Cobos, B.; Tres, A.; Vichi, S.; Guardiola, F.; Rovira, M.; Romero, A.; Baeten, V.; Fernández-Pierna, J.A. Comparative analysis of spectroscopic methods for rapid authentication of hazelnut cultivar and origin. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2025, 326, 125367. [Google Scholar] [CrossRef] [PubMed]
  31. Shakiba, N.; Gerdes, A.; Holz, N.; Wenck, S.; Bachmann, R.; Schneider, T.; Seifert, S.; Fischer, M.; Hackl, T. Determination of the geographical origin of hazelnuts (Corylus avellana L.) by near-infrared spectroscopy (NIR) and a low-level fusion with nuclear magnetic resonance (NMR). Microchem. J. 2022, 174, 107066. [Google Scholar] [CrossRef]
  32. Xue, Q.; Miao, P.; Miao, K.; Yu, Y.; Li, Z. X-ray-based machine vision technique for detection of internal defects of sterculia seeds. J. Food Sci. 2022, 87, 3386–3395. [Google Scholar] [CrossRef] [PubMed]
  33. Zehi, Z.B.; Afshari, A.; Noori, S.M.A.; Jannat, B.; Hashemi, M. The Effects of X-Ray Irradiation on Safety and Nutritional Value of Food: A Systematic Review Article. Curr. Pharm. Biotechnol. 2020, 21, 919–926. [Google Scholar] [CrossRef] [PubMed]
  34. Kayakuş, M.; Kabas, Ö.; Ünal, İ.; Paçacı, S.; Dinca, M.N. Non-destructive prediction of hazelnut and hazelnut kernel deformation energy using machine learning techniques. Int. J. Food Prop. 2024, 27, 326–340. [Google Scholar] [CrossRef]
  35. El-Mesery, H.S.; Mao, H.; Abomohra, A.E.-F. Applications of Non-destructive Technologies for Agricultural and Food Products Quality Inspection. Sensors 2019, 19, 846. [Google Scholar] [CrossRef] [PubMed]
  36. Adak, M.F. Identification of Plant Species by Deep Learning and Providing as A Mobile Application. Sak. Univ. J. Comput. Inf. Sci. 2020, 3, 231–238. [Google Scholar] [CrossRef]
  37. Zhu, L.; Spachos, P.; Pensini, E.; Plataniotis, K.N. Deep learning and machine vision for food processing: A survey. Curr. Res. Food Sci. 2021, 4, 233–249. [Google Scholar] [CrossRef] [PubMed]
  38. Güney, E.; Bayılmış, C.; Çakar, S.; Erol, E.; Atmaca, Ö. Autonomous control of shore robotic charging systems based on computer vision. Expert Syst. Appl. 2024, 238, 122116. [Google Scholar] [CrossRef]
  39. Selamet, F.; Cakar, S.; Kotan, M. Automatic Detection and Classification of Defective Areas on Metal Parts by Using Adaptive Fusion of Faster R-CNN and Shape From Shading. IEEE Access 2022, 10, 126030–126038. [Google Scholar] [CrossRef]
  40. Cerezci, F.; Çakar, S.; Oz, M.A.; Oz, C.; Tasci, T.; Hizal, S.; Altay, C. Online metallic surface defect detection using deep learning. Emerg. Mater. Res. 2020, 4, 1266–1273. [Google Scholar] [CrossRef]
  41. Oztel, I.; Yolcu Oztel, G.; Sahin, V.H. Deep Learning-Based Skin Diseases Classification using Smartphones. Adv. Intell. Syst. 2023, 5, 2300211. [Google Scholar] [CrossRef]
  42. Taner, A.; Öztekin, Y.B.; Duran, H. Performance Analysis of Deep Learning CNN Models for Variety Classification in Hazelnut. Sustainability 2021, 13, 6527. [Google Scholar] [CrossRef]
  43. Gencturk, B.; Arsoy, S.; Taspinar, Y.S.; Cinar, İ.; Kursun, R.; Yasin, E.Y.; Koklu, M. Detection of hazelnut varieties and development of mobile application with CNN data fusion feature reduction-based models. Eur. Food Res. Technol. 2024, 250, 97–110. [Google Scholar] [CrossRef]
  44. Ünal, Z.; Aktaş, H. Classification of hazelnut kernels with deep learning. Postharvest Biol. Technol. 2023, 197, 112225. [Google Scholar] [CrossRef]
  45. Dönmez, E.; Kılıçarslan, S.; Diker, A. Classification of hazelnut varieties based on bigtransfer deep learning model. Eur. Food Res. Technol. 2024, 250, 1433–1442. [Google Scholar] [CrossRef]
  46. Keles, O.; Taner, A. Classification of hazelnut varieties by using artificial neural network and discriminant analysis. Span. J. Agric. Res. 2021, 19, e0211. [Google Scholar] [CrossRef]
  47. Aydin, S.; Aldara, D. Microservices-based databank for Turkish hazelnut cultivars using IoT and semantic web Technologies. Concurr. Comput. Pract. Exp. 2024, 36, e8062. [Google Scholar] [CrossRef]
  48. Shojaeian, A.; Bagherpour, H.; Bagherpour, R.; Parian, J.A.; Fatehi, F.; Taghinezhad, E. The Potential Application of Innovative Methods in Neural Networks for Surface Crack Recognition of Unshelled Hazelnut. J. Food Process. Preserv. 2023, 2023, 2177724. [Google Scholar] [CrossRef]
  49. Dönmez, E.; Ünal, Y.; Kayhan, H. Bacterial Disease Detection of Cherry Plant Using Deep Features. Sak. Univ. J. Comput. Inf. Sci. 2024, 7, 1–10. [Google Scholar] [CrossRef]
  50. Kayaalp, K.; Metlek, S. Classification of Robust and Rotten Apples by Deep Learning Algorithm. Sak. Univ. J. Comput. Inf. Sci. 2020, 3, 112–120. [Google Scholar] [CrossRef]
  51. Sasso, D.; Lodato, F.; Sabatini, A.; Pennazza, G.; Vollero, L.; Santonico, M.; Merone, M. Hazelnut mapping detection system using optical and radar remote sensing: Benchmarking machine learning algorithms. Artif. Intell. Agric. 2024, 12, 97–108. [Google Scholar] [CrossRef]
  52. Maier, G.; Shevchyk, A.; Flitter, M.; Gruna, R.; Längle, T.; Hanebeck, U.D.; Beyerer, J. Motion-based visual inspection of optically indiscernible defects on the example of hazelnuts. Comput. Electron. Agric. 2021, 185, 106147. [Google Scholar] [CrossRef]
  53. Khosa, I.; Pasero, E. Feature extraction in X-ray images for hazelnuts classification. In Proceedings of the 2014 International Joint Conference on Neural Networks (IJCNN), Beijing, China, 4 September 2014. [Google Scholar] [CrossRef]
  54. Solak, S.; Altınışık, U. Detection and classification of hazelnut fruit by using image processing techniques and clustering methods. Sak. Univ. J. Sci. 2018, 22, 56–65. [Google Scholar] [CrossRef]
  55. Kaur, R.; Singh, S. A comprehensive review of object detection with deep learning. Digit. Signal Process. 2023, 132, 103812. [Google Scholar] [CrossRef]
  56. Kaur, J.; Singh, W. Tools, techniques, datasets and application areas for object detection in an image: A review. Multimed. Tools Appl. 2022, 81, 38297–38351. [Google Scholar] [CrossRef] [PubMed]
  57. Qian, X.; Lin, S.; Cheng, G.; Yao, X.; Ren, H.; Wang, W. Object Detection in Remote Sensing Images Based on Improved Bounding Box Regression and Multi-Level Features Fusion. Remote Sens. 2020, 12, 143. [Google Scholar] [CrossRef]
  58. Zhang, X.; Li, H.; Meng, F.; Song, Z.; Xu, L. Segmenting Beyond the Bounding Box for Instance Segmentation. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 704–714. [Google Scholar] [CrossRef]
  59. Shen, Y.; Zhang, F.; Liu, D.; Pu, W.; Zhang, Q. Manhattan-distance IOU loss for fast and accurate bounding box regression and object detection. Neurocomputing 2022, 500, 99–114. [Google Scholar] [CrossRef]
  60. Ravi, N.; Naqvi, S.; El-Sharkawy, M. BIoU: An Improved Bounding Box Regression for Object Detection. J. Low Power Electron. Appl. 2022, 12, 51. [Google Scholar] [CrossRef]
  61. Yuan, D.; Shu, X.; Fan, N.; Chang, X.; Liu, Q.; He, Z. Accurate bounding-box regression with distance-IoU loss for visual tracking. J. Vis. Commun. Image Represent. 2022, 83, 103428. [Google Scholar] [CrossRef]
  62. Liu, C.; Wang, K.; Lu, H.; Cao, Z.; Zhang, Z. Robust Object Detection with Inaccurate Bounding Boxes. In Proceedings of the Computer Vision—ECCV 2022, Tel Aviv, Israel, 23–27 October 2022; ECCV Lecture Notes in Computer Science 2022. Springer: Cham, Switzerland, 2022. [Google Scholar] [CrossRef]
  63. Vo, X.-T.; Jo, K.-H. Accurate Bounding Box Prediction for Single-Shot Object Detection. IEEE Trans. Ind. Inform. 2022, 8, 5961–5971. [Google Scholar] [CrossRef]
  64. Zhang, R.; Di, Y.; Lou, Z.; Manhardt, F.; Tombari, F.; Ji, X. RBP-Pose: Residual Bounding Box Projection for Category-Level Pose Estimation. In Proceedings of the Computer Vision—ECCV 2022, Tel Aviv, Israel, 23–27 October 2022; ECCV Lecture Notes in Computer Science 2022. Springer: Cham, Switzerland, 2022. [Google Scholar] [CrossRef]
  65. Yan, B.; Zhang, X.; Wang, D.; Lu, H.; Yang, X. Alpha-Refine: Boosting Tracking Performance by Precise Bounding Box Estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021. [Google Scholar]
  66. Yu, J.; Jiang, Y.; Wang, Z.; Cao, Z.; Huang, T. UnitBox: An Advanced Object Detection Network. In Proceedings of the MM ’16: ACM Multimedia Conference, Amsterdam, The Netherlands, 15–19 October 2016. [Google Scholar]
  67. Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I.; Savarese, S. Generalized intersection over union: A metric and a loss for bounding box regression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 658–666. [Google Scholar]
  68. Zheng, Z.; Wang, P.; Liu, W.; Li, J.; Ye, R.; Ren, D. Distance-IoU loss: Faster and better learning for bounding box regression. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–8 February 2020; Volume 34, pp. 12993–13000. [Google Scholar]
  69. Ultralytics. COCO-128 Data Set. Available online: https://docs.ultralytics.com/tr/datasets/detect/coco128/ (accessed on 22 July 2025).
  70. Wong, K.Y. Official YOLOv7. 2023. Available online: https://github.com/WongKinYiu/yolov7 (accessed on 22 July 2025).
  71. Oztel, I.; Yolcu Oztel, G.; Akgun, D. A hybrid LBP-DCNN based feature extraction method in YOLO: An application for masked face and social distance detection. Multimed. Tools Appl. 2023, 82, 1565–1583. [Google Scholar] [CrossRef] [PubMed]
  72. Sun, D.; Yang, Y.; Li, M.; Yang, J.; Meng, B.; Bai, R. A Scale Balanced Loss for Bounding Box Regression. IEEE Access 2020, 8, 108438–108448. [Google Scholar] [CrossRef]
  73. Sun, Y.; Wang, J.; Wang, H.; Zhang, S.; You, Y.; Yu, Z. Fused-IoU Loss: Efficient Learning for Accurate Bounding Box Regression. IEEE Access 2024, 12, 37363–37377. [Google Scholar] [CrossRef]
  74. Zhang, Y.F.; Ren, W.; Zhang, Z.; Jia, Z.; Wang, L.; Tan, T. Focal and efficient IOU loss for accurate bounding box regression. Neurocomputing 2022, 506, 146–157. [Google Scholar] [CrossRef]
  75. Du, S.; Zhang, B.; Zhang, P. Scale-Sensitive IOU Loss: An Improved Regression Loss Function in Remote Sensing Object Detection. IEEE Access 2021, 9, 141258–141272. [Google Scholar] [CrossRef]
  76. Wang, Q.; Cheng, J. LCornerIoU: An Improved IoU-based Loss Function for Accurate Bounding Box Regression. In Proceedings of the 2021 International Conference on Intelligent Computing, Automation and Systems (ICICAS), Chongqing, China, 29–31 December 2021. [Google Scholar]
  77. Su, K.; Cao, L.; Zhao, B.; Li, N.; Wu, D.; Han, X. N-IoU: Better IoU-based bounding box regression loss for object detection. Neural Comput. Appl. 2024, 36, 3049–3063. [Google Scholar] [CrossRef]
Figure 1. Unprocessed and processed hazelnut kernels.
Figure 2. Unprocessed hazelnut kernel/processed hazelnut kernel/X-ray image of hazelnut. Red squares represent defective areas in the hazelnut kernels.
Figure 3. Hazelnut quality assessment system.
Figure 4. Hazelnut selection device.
Figure 5. Hazelnut X-ray images.
Figure 6. Kernel X-ray image and the corresponding outputs of the Otsu, CLAHE, Gaussian Blur, and Anisotropic Diffusion filters.
Figure 7. Geometrical representation of the ground-truth box b_g, prediction box b_p, and the enclosing box b_e. The centers of each box are denoted as b_c^g, b_c^p, and b_c^e, respectively. The diagonal distances of the boxes are shown as b_d^g, b_d^p, and b_d^e. The width and height of the ground-truth box are represented by g_w and g_h, while those of the predicted box are denoted by p_w and p_h.
Figure 8. Comparative illustrations showing the behavior of IoU-based loss functions (IoU, GIoU, DIoU, CIoU, and DCIoU) in different spatial scenarios. The ground truth bounding box is shown in red, the predicted bounding box in green, and the minimum enclosing rectangle covering both boxes is depicted with a dashed black line. The centers of the boxes are represented by black dots. (a) The ground truth and predicted boxes are square-shaped with overlapping centers, but differ in size. (b) The ground truth and predicted boxes have different sizes and different centers. (c) The box centers overlap, heights are equal, but widths differ. (d) The box centers overlap, and the aspect ratios are swapped (one is oriented more horizontally, the other more vertically).
Figure 9. Neighborhood Relationship Algorithm parameters: (a) Movement directions. (b) Clustering image matrix. (c) Validation matrix. (d) Output tensor data.
Figure 10. The process of creating a bounding box using the Neighborhood Relationship Algorithm: (a) Input image. (b) The image resulting from the preprocessing. (c) The boxes generated by the Neighborhood Relationship Algorithm without applying the threshold value. (d) The boxes generated by the Neighborhood Relationship Algorithm by applying the threshold value.
Figure 11. (a) Bounding box representation. (b) Regression loss curves.
Figure 12. Bounding box regression of IoU, GIoU, DIoU, CIoU, and DCIoU. Red, blue, and green boxes are the target box, the initial prediction box, and the regression value of each epoch, respectively.
Figure 13. Defect detection and segmentation results on different hazelnut kernel X-ray images. Each row represents a different hazelnut kernel sample, and from left to right: (a,d,g,j,m) show defect detection results using only YOLOv7, (b,e,h,k,n) show detection results using YOLOv7 + DCIoU + Neighborhood Relationship Algorithm, (c,f,i,l,o) present segmentation results using YOLOv7 + DCIoU + Neighborhood Relationship Algorithm. Red boxes indicate defective regions, yellow boxes represent slightly defective regions, and green boxes indicate non-defective hazelnut regions.
Table 1. Training results of IoU metrics on X-ray hazelnut and COCO-128 datasets using YOLOv7.

X-Ray Hazelnut dataset:
Loss     Precision   Recall   mAP@50   mAP@50–95
IoU      0.8872      0.8746   0.9175   0.7815
GIoU     0.9135      0.8665   0.9213   0.7886
DIoU     0.9248      0.8765   0.9253   0.7893
CIoU     0.9316      0.8790   0.9268   0.7904
DCIoU    0.9448      0.8860   0.9373   0.7901

COCO-128 dataset:
Loss     Precision   Recall   mAP@50   mAP@50–95
IoU      0.8924      0.8694   0.9428   0.7551
GIoU     0.8988      0.8714   0.9475   0.7576
DIoU     0.9213      0.8725   0.9512   0.7610
CIoU     0.9240      0.8780   0.9573   0.7742
DCIoU    0.9348      0.8816   0.9632   0.7860
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
