Article

Food Waste Detection in Canteen Plates Using YOLOv11

ADiT-LAB, Instituto Politécnico de Viana do Castelo, Rua Escola Industrial e Comercial Nun’Álvares, 4900-347 Viana do Castelo, Portugal
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(13), 7137; https://doi.org/10.3390/app15137137
Submission received: 3 May 2025 / Revised: 11 June 2025 / Accepted: 17 June 2025 / Published: 25 June 2025
(This article belongs to the Special Issue Artificial Intelligence and Numerical Simulation in Food Engineering)

Abstract

This work presents a Computer Vision (CV) platform for Food Waste (FW) detection in canteen plates, exploring a research gap in automated FW detection using CV models. A machine learning methodology was followed, starting with the creation of a custom dataset of images of canteen plates before and after lunch or dinner; data augmentation techniques were applied to enhance the model’s robustness. Subsequently, a CV model was developed using YOLOv11 to classify the percentage of FW on a plate, distinguishing between edible food items and non-edible discarded material. To evaluate the performance of the model, we used a real dataset as well as three benchmarking datasets of food plates in which waste could be detected. On the real dataset, the system achieved a mean average precision (mAP) of 0.343, a precision of 0.62, and a recall of 0.322 on the test set, and it demonstrated high accuracy in classifying waste under the traditional evaluation metrics on the benchmarking datasets. Given these promising results and the provision of open-source code on a GitHub repository, the platform can be readily utilized by the research community and educational institutions to monitor FW in student meals and proactively implement reduction strategies.

1. Introduction

Food Waste (FW) constitutes one of humanity’s most significant unsolved problems, affecting both developed and developing countries. In developed countries, most FW is generated during the consumption stage, when the food has already reached the final consumer, whereas in developing countries most FW is generated at the production and post-harvest stages, due to the lack of modern technology [1].
On the other hand, in line with sustainability and health goals, schools play a very important role in educating people to become responsible future consumers. FW can also be a major problem in schools, since it may mean that students (especially children) are losing important nutritional benefits from the wasted food [2]. These nutritional losses are especially concerning given the current major problems of obesity and overweight, whose rates are rapidly increasing in every developed country [3]. Schools can also benefit from reducing FW because it will consequently reduce the costs of preparing that food and serving it to their consumers [4]. However, accurately and efficiently measuring FW in a school canteen can be challenging.
In recent years, there has been a significant rise in Artificial Intelligence (AI), particularly in Computer Vision (CV) through open-source frameworks for object detection. YOLO [5] is one such framework and has been applied and explored in numerous food detection studies. However, a review of recent research papers and CV implementations reveals a notable gap concerning the practical applicability of CV (e.g., through YOLO [5]) in FW detection in canteen plates.
In this context, the objective is to create and use a CV approach, namely YOLO version 11, for Food Waste detection in canteen plates. A machine learning methodology was followed, starting with the creation of a custom dataset of images of canteen plates before and after lunch or dinner. Data Augmentation (DA) techniques were applied to enhance the model’s robustness. A model was then created to classify the percentage of Food Waste on a plate based on the categories of what is food and what is garbage. The model was evaluated with a real dataset and three benchmark datasets of food plates in which waste could be detected, and promising results were achieved both in the calculation of Food Waste and in the detection of food items on canteen plates.
The main innovation of the work does not lie in proposing new CV models or techniques, but rather in the practical and contextualized application of already-established models (such as YOLOv11) for the detection and quantification of food waste in canteen plates. As a contribution, to foster further research in this context (e.g., school canteens), the dataset, code, benchmark, and pretrained models are available in an open-source GitHub repository (https://github.com/Xurape/Food-Waste-Detection-using-YOLOv11, accessed on 16 June 2025).
The paper is organized as follows: in Section 2, the related work is presented, starting with the introduction and implementations of the YOLO Framework in food detection, detection of food in plates, and FW detection. Section 3 presents the methods and methodology of the work, and Section 4 the experiments and results. Finally, the conclusions and future work are presented in Section 5, and the references that supported the work are in the last section.

2. Related Work

2.1. YOLO: You Only Look Once

YOLO was first introduced to the CV community by Joseph Redmon in an article published in 2015 that reassessed the object detection paradigm [6]. The framework has evolved at an extremely fast pace [7] and is currently in its 11th version. Over the last years, successive versions of YOLO have been presented [7,8,9]: YOLOv3 added an objectness score to bounding box predictions and made predictions at three distinct levels of detail to improve small-object detection performance [8]; YOLOv4 improved detection accuracy without impacting inference time [10]; YOLOv5 changed the implementation approach, being built on PyTorch 1.6.0 [11]; YOLOv6 adopted a neural network framework similar to Google TensorFlow [11]; YOLOv7 improved fast, real-time object detection accuracy [12]; YOLOv8 directly predicts the center of an object instead of the offset from a known anchor box [11]; YOLOv9 introduced Programmable Gradient Information (PGI) and the Generalized Efficient Layer Aggregation Network (GELAN), both of which significantly improve feature extraction, gradient flow, and network efficiency [13]; and YOLOv10 improved the use of Deep Convolutional Neural Networks (DCNNs) and introduced a new Multidimensional Feature Fusion (MFD) engine [14,15].
Figure 1 presents the model key architecture in YOLOv11 [16], which incorporates Spatial Pyramid Pooling-Fast (SPPF), Cross Stage Partial with Spatial Attention (C2PSA), and Cross Stage Partial with kernel size 2 (C3k2) modules as core components. These elements collectively enhance both the efficiency and accuracy of the model.
YOLOv11 [17] is the latest version of YOLO, which provides a more capable and adaptable model that pushes the boundaries of CV. The current model supports CV tasks like posture estimation and improved instance segmentation, representing a significant advancement in real-time object detection technology [16].
In this project, which aims to identify FW on plates in school canteens, we chose to use YOLOv11. This version was selected due to its contemporaneity at the time of development, its comprehensive documentation directly applicable to the project’s requirements, and superior benchmarking results compared to other versions, as detailed in Section 3.2.3. YOLOv11 proved to be essential due to its significant improvements in detecting smaller objects, which was crucial for identifying small food portions or leftovers on the plates and segmenting them using the instance segmentation technique. Furthermore, its ability to adapt to varying lighting conditions and dynamic environments, such as those found in school canteens, ensured a more effective and accurate solution. The extensive documentation also played a key role, as it facilitated the necessary implementation and adjustments, leading to successful outcomes in FW identification and providing efficient, high-quality results.

2.2. Implementations in the Food Detection Context

In recent years, food detection has been studied and applied with some AI techniques, particularly Deep Learning (DL) [18]. The number of implementations of CV using YOLO has significantly increased, with several papers being presented in the categories of food detection, food detection on plates, and FW detection. Table 1 presents an analysis of the main works that use CV techniques, highlighting their contributions to the advancement of technology in their respective fields. In parallel, some works have been presented in the area of food (and object) detection using segmentation [19].
As presented in Table 1, in recent years several works have been developed applying DL (e.g., PyTorch >= 2.0.0 or TensorFlow >= 2.11.0) and/or CV (e.g., YOLO), mainly in the areas of food detection and food detection in plates, and more recently in FW detection. However, these FW detection applications have not been applied to FW in plates, as explored in this work. The following sections provide an analysis of each of the works listed in Table 1.

2.2.1. Food Detection

Food detection has been an important field of research, especially for applications in dietary monitoring and quality control systems. Studies in this category focus on accurately recognizing foods in images, which can be challenging due to the wide variation in food appearance, even within the same category.
Martinel et al. (2016) [20] proposed an innovative architecture, called WISeR (Wide-Slice Residual Networks for Food Recognition), which uses residual networks and slice convolution layers to address food structure, such as the layered appearance of some dishes. This method significantly improved the accuracy of food recognition, outperforming other approaches.
Goswami et al. (2017) [21] applied Deep Neural Networks (DNN) to classify food dishes. The study showed that DNN outperformed traditional machine learning methods, offering better performance in identifying food dishes, taking into account the unique visual characteristics of each dish, such as shape, color and texture.
Ciocca et al. (2020) [22] focused on detecting the state of food (such as raw, processed, cut, or mixed food). The use of features extracted by DNN, combined with Support Vector Machines (SVM), showed that the detection of food states, in addition to food categories, is a feasible and efficient task, outperforming manually made features.

2.2.2. Food in Plates Detection

Detecting food on plates is a crucial area for dietary monitoring, portion control and even reducing FW. The aim here is to identify not only the types of food but also their arrangement on a plate, which can be complicated by overlapping foods and variations in presentation mode.
Ciocca et al. (2015) [23] developed a system for estimating leftover food on plates. Using food detection and analysis of remaining portions, the system was able to help estimate the amount of food left on the plate, providing useful information for users in dietary monitoring applications. The research demonstrated that accurate detection of food on plates is crucial for estimating calories and portions.
Ciocca et al. (2016) [24] introduced a new dataset containing images of canteen trays, with multiple foods arranged in various ways. These studies used convolutional neural networks to identify foods on the trays and classify the foods. The results showed good accuracy in identifying food and trays, with a focus on dynamic environments such as canteens.
Pouladzadeh et al. (2017) [25] proposed a mobile food recognition system capable of identifying multiple foods in a meal (such as a plate with several foods). The solution uses DL to extract features from food images and identify the food in question, allowing users to monitor their food intake and estimate their calories accurately.
These works reinforce the importance of having an efficient recognition system to detect food on plates, whether for dietary monitoring or portion control purposes, with promising results in terms of accuracy and applicability in real-world environments.

2.2.3. Food Waste Detection

FW detection is an emerging area with great potential to reduce the amount of FW, especially in industrial and food production environments. The idea is to identify damaged or leftover food so that it can be properly disposed of or reused.
Sousa et al. (2019) [26] proposed a DL approach for detecting FW and automatically sorting waste from food trays. Using Faster Region-Based Convolutional Neural Networks (Faster R-CNN), the results showed a significant improvement in the accuracy of FW detection and sorting, which can be applied in industrial processes to optimize recycling and reduce waste.
Dhelia et al. (2024) [28] used YOLOv8 to detect damaged food in the food industry. The research showed how the integration of AI, specifically YOLOv8, can help quickly identify damaged or unfit-to-eat food, which can optimize quality control processes and reduce waste on production lines.
Fan et al. (2023) [27] focused on FW detection using DA techniques and YOLOv4. The innovation of the study was the use of Generative Adversarial Networks (GAN) to generate FW images, which helped to improve the performance of the YOLO model to detect waste more efficiently and accurately.
These studies highlight the growing use of advanced detection technologies, such as convolutional neural networks and YOLO, to improve FW detection, not only in industrial environments but also in waste management systems, with the aim of optimizing processes and reducing environmental impact.

3. Methods and Methodology

3.1. Context

FW is a significant problem, especially in schools, because it can negatively impact students’ nutrition. To address this issue, we are motivated to create a platform that uses CV, using YOLO to detect FW on canteen plates. This platform aims to help canteens, especially school canteens, reduce FW and provide information on the lost nutritional value, allowing them to make informed decisions about menu planning and student education. By focusing on the detection of FW on canteen plates, we provide a foundation for canteens to promote better food and nutritional practices.
In this work, we consider the following items and objects as garbage: forks, knives, spoons, cups, chips, bread, board, and garbage (a generic class that contains, for example, napkins and bones). To determine the FW on plates, we first consider the area of the plate and detect the area of each individual item. Then, for each item, we detect its category (garbage, food, or waste), analyze its area in pixels, and apply the calculation in Formula (6), which is described in Section 3.3.

3.2. Methodology

In this work, we follow a typical machine learning methodology, as illustrated in Figure 2; in the next sections, we describe the process for each component of the flow.

3.2.1. Data Capturing and Dataset

To make this platform, we started by creating a dataset [29] in the Roboflow platform [30], beginning with 175 different images of plates containing a wide variety of foods (as shown in Table 2).
The data collection was carried out using photographs taken with a standard smartphone (Samsung Galaxy S21, manufactured by Samsung Electronics, headquartered in Suwon, South Korea), with images that have a resolution of 2400 × 1080 pixels and are stored in a specific directory within the platform.
Given the sufficient quality of the images, no additional pre-processing or enhancement was performed, ensuring the dataset reflects realistic conditions.
Figure 3 illustrates some examples of the dataset used in this work. These are real images captured in a school canteen environment and were structured to include specific scenarios, for instance, plates at the end of a meal as seen in Figure 3a,b, plates mid-way through a meal in Figure 3c, and plates at the beginning of a meal in Figure 3d,e.
There are 175 base images and a total of 860 images after image augmentation, including zooming, cropping, rotating, and flipping. It is worth mentioning that we deliberately did not generate many images with magnification, as this might cause the model to lose accuracy.
For each image in the dataset, a process called “labeling” was carried out using Roboflow’s annotation system [31]. This process allows the dataset’s editors to indicate that an area represented in an image is either food or garbage. For example, consider a plate with 25% rice, 25% egg, and 50% beef. We select an area containing rice grains and assign it the class “rice”. The area can be selected with a tool called “Smart Polygon”, which helps select the correct area using AI and refine it by adding or removing pixels.
During model training, these classes inform the system that the specific selected area with that shape, color, and texture represents “rice”, which is categorized as food. Everything not intended to be detected as FW is labeled as “garbage”.

3.2.2. Generating Data Augmentation

DA is somewhat similar to human imagination and dreaming. Humans imagine different scenarios based on experience and dream about various possible events, both real and hypothetical. Imagination helps us better understand our world, and a similar principle applies to AI. DA methods, such as generative adversarial networks [32] and neural style transfer [33], can ‘imagine’ alterations to images, enabling the model to develop a better understanding [34]. In this dataset, we used simple DA techniques such as the following:
  • Flipping (Horizontal);
  • Rotating Clockwise, Counter-Clockwise, Upside Down.
An example of the rotation augmentation can be seen in Figure 4, where Figure 4a represents the pre-processed image, Figure 4b is the clockwise rotation, Figure 4c is the counter-clockwise rotation, and Figure 4d shows the upside-down rotation.
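To make these simple augmentations concrete, the following minimal Python sketch reproduces them offline using Pillow. The folder paths are placeholders; in this work the augmented images were actually generated by Roboflow’s built-in augmentation pipeline.

```python
# Minimal sketch of the augmentations listed above: horizontal flip and
# 90/180/270-degree rotations. Directory names are hypothetical placeholders.
from pathlib import Path
from PIL import Image, ImageOps

SRC = Path("dataset/train/images")      # assumed source folder
DST = Path("dataset/train/augmented")   # assumed output folder
DST.mkdir(parents=True, exist_ok=True)

for img_path in SRC.glob("*.jpg"):
    img = Image.open(img_path)
    variants = {
        "flip": ImageOps.mirror(img),            # horizontal flip
        "rot90": img.rotate(-90, expand=True),   # clockwise rotation
        "rot270": img.rotate(90, expand=True),   # counter-clockwise rotation
        "rot180": img.rotate(180),               # upside-down rotation
    }
    for suffix, out in variants.items():
        out.save(DST / f"{img_path.stem}_{suffix}.jpg")
```

Note that when geometric augmentations are applied outside Roboflow, the polygon annotations must be transformed in the same way; otherwise, images and labels become misaligned.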

3.2.3. YOLO Benchmarking

We consider benchmarking as a fundamental practice for evaluating and comparing the performance of different models [35], algorithms, or versions of a system. In object detection tasks, such as identifying FW on canteen plates, selecting the appropriate model can have a significant impact on the accuracy, processing time, and efficiency of the system. The goal of this benchmarking is to provide a fair and detailed comparison between different versions of the model, allowing for the identification of the most suitable solution for the given problem.
In order to select the version of YOLO that best suits the project, benchmarking was performed between three versions of the YOLO model, one of the most widely used models for object detection tasks. The versions analyzed were YOLOv5, YOLOv8, and YOLOv11, each with specific characteristics relative to the others. The metrics used for this comparison were the following:
  • Precision: Quantifies how many of the positive predictions made by the model are correct, calculated as true positives/(true positives + false positives) [36]. It indicates the model’s ability to avoid false alarms.
  • Recall: Measures how many real positive objects the model was able to detect, calculated as true positives/(true positives + false negatives) [36]. It indicates the model’s ability to find all relevant instances.
  • mAP50 (mean average precision at 50% Intersection over Union (IoU)): Represents the average precision of the model for detections with at least 50% overlap (Intersection over Union, IoU) between the predicted and real bounding boxes [37]. This is a common metric for object detection performance, particularly for general detection accuracy.
  • mAP50-95 (mean average precision across multiple IoU thresholds): This metric provides a more rigorous evaluation by averaging the precision of the model at several IoU thresholds, ranging from 50% to 95% (e.g., at 0.05 increments). It defines the model’s performance across various levels of localization accuracy.
These metrics can be calculated using the following Equations (1)–(5) [38].
Precision = TP / (TP + FP)    (1)
Recall = TP / (TP + FN)    (2)
mAP = (1/N) Σ_{i=1}^{N} AP_i    (3)
mAP50 = (1/N) Σ_{i=1}^{N} AP_i^{50}    (4)
mAP50-95 = (1/N) Σ_{i=1}^{N} (1/10) Σ_{j=1}^{10} AP_i^{50+5j}    (5)
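As a minimal illustration of how Equations (1)–(5) are computed (in practice, these values are reported directly by the Ultralytics training and validation routines), a small Python sketch is shown below; the example counts are hypothetical.

```python
# Sketch of Equations (1)-(5): precision/recall from detection counts and
# mAP as the mean of per-class average-precision (AP) values.
from typing import Sequence

def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp) if (tp + fp) else 0.0        # Eq. (1)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn) if (tp + fn) else 0.0        # Eq. (2)

def mean_ap(ap_per_class: Sequence[float]) -> float:
    # Eq. (3)/(4): average AP over the N classes at a fixed IoU threshold (e.g., 0.5).
    return sum(ap_per_class) / len(ap_per_class)

def mean_ap_50_95(ap_per_class_per_iou: Sequence[Sequence[float]]) -> float:
    # Eq. (5): for each class, average AP over the 10 IoU thresholds 0.50-0.95,
    # then average over the classes.
    per_class = [sum(aps) / len(aps) for aps in ap_per_class_per_iou]
    return sum(per_class) / len(per_class)

# Hypothetical counts for a single class:
print(precision(tp=62, fp=38), recall(tp=62, fn=130))
```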
The YOLOv5 version was evaluated on our dataset [29]; the results indicate reasonable precision (0.72) but very low recall (0.06). The loss graph shows a steady reduction throughout the training, and the model managed to stabilize at a reasonable loss value. The mAP metrics indicate that version 5 obtained a mAP50 of 0.061 and a mAP50-95 of 0.038, reflecting a relatively low generalization ability (as presented in Figure 5).
YOLOv8 showed a slight improvement over YOLOv5, with more stable loss behavior during training. The precision was 0.50 and the recall was 0.23, with the recall showing an improvement compared to the previous version. The mAP50 0.233 and mAP50-95 0.185 indicate that the model has a slightly superior ability to detect objects correctly, but there is still a gap compared to the YOLOv11 version (as presented in Figure 6).
The YOLOv11 version demonstrated the best performance among the tested versions, with the lowest loss rate across all metrics, reflecting efficient training and better data fitting. YOLOv11 surpassed the previous versions in terms of recall with a value of 0.33 and nearly matched YOLOv5 in precision with a value of 0.71, with the best score in mAP50 at 0.335 and mAP50-95 at 0.292. This indicates an improved ability to detect objects, especially in cases of lower confidence in predictions, which is crucial for applications such as FW detection (as presented in Figure 7).
Table 3 presents a comparison of the training and validation results for the three YOLO versions used in the benchmark (YOLOv5, YOLOv8, and YOLOv11), showing the precision, recall, mAP50, and mAP50-95 metrics. In Appendix A, we present the other general metrics extracted from the model training.
When comparing the three versions of YOLO, we can observe that although YOLOv5 provided reasonable performance, YOLOv8 introduced incremental improvements in terms of stability and detection metrics, especially in precision and recall. However, YOLOv11 showed superior performance, with better results in both precision and recall metrics as well as mAP metrics, reflecting a more efficient and robust model for the task of FW detection. These results indicate that, for more challenging detection problems, such as detecting objects in dynamic environments such as school canteens, the use of more advanced versions of YOLO, such as YOLOv11, may be the best choice, as it provides greater accuracy and better overall performance.

3.2.4. Model Training

To train our model, we chose to use YOLOv11 due to its higher accuracy and efficiency. For this training, we used the Roboflow notebook, publicly available on GitHub [39], and a free instance on Google Colab [40] with an NVIDIA Tesla T4 [41].
The training process began by loading the notebook into Google Colab and modifying the Roboflow API keys and required values to enable retrieval of our dataset from the Roboflow Workspace. We then trained the model, selecting the yolov11l-seg model [42] and splitting the dataset into 60% for training, 23% for validation, and 17% for testing; the best hyperparameter settings were 30 epochs at a resolution of 640 px, a batch size of 4, and an lr0 of 0.01.
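The following sketch summarizes this training step under the settings reported above. The Roboflow workspace and project names follow the public dataset reference [29], while the API key, dataset version number, export format string, and weight file name are placeholders, and the exact calls may differ slightly from the Roboflow notebook used.

```python
# Sketch of dataset retrieval from Roboflow and YOLOv11 segmentation training
# with the reported hyperparameters (30 epochs, 640 px, batch 4, lr0 0.01).
from roboflow import Roboflow
from ultralytics import YOLO

rf = Roboflow(api_key="YOUR_API_KEY")  # placeholder API key
project = rf.workspace("project-eyfif").project("proj3-food-waste-detection")
dataset = project.version(1).download("yolov11")  # version and format assumed

model = YOLO("yolo11l-seg.pt")  # pretrained large segmentation weights
model.train(
    data=f"{dataset.location}/data.yaml",
    epochs=30,
    imgsz=640,
    batch=4,
    lr0=0.01,
)
metrics = model.val()  # precision, recall, mAP50, mAP50-95 on the validation split
```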

3.2.5. Model Architecture

Aiming for both object detection and instance segmentation, we utilized YOLOv11, specifically its yolov11l-seg variant. This variant is particularly crucial for our application as it extends beyond traditional bounding box prediction to perform instance segmentation. Based on a Convolutional Neural Network (CNN) architecture like the one illustrated in Figure 8, for each detected object the model not only predicts its bounding box and class but also generates a precise, pixel-level binary mask delineating its exact shape and area. This capability is fundamental for accurately calculating the area of food and waste items.
Figure 8 provides an example of a CNN architecture for image classification, showcasing an input image passing through convolution, Rectified Linear Unit (ReLU), and pooling layers before a fully connected layer leads to output classes (e.g., “Food” or “Garbage”).
The system employs a multi-class classification approach for detected items. Specific items such as bones, napkins, and grains of rice are categorized under a “garbage” class, which serves as a generic classification for these discarded materials. Overall, the model classifies each detected item into one of three categories: “garbage”, “food”, or ”waste”. This multi-class classification, combined with the pixel-level binary masks generated by the yolov11l-seg variant, allows for precise identification and area calculation of various items on the plates.

3.3. Proposed System

We propose a platform for automatic monitoring of FW in a canteen, based on robust CV techniques for the automatic recognition of food items and the calculation of the waste percentage. One of the main advantages of the platform lies in the controlled environment of a canteen. Here, a weekly menu is provided, and both the trays and the plates have the same standardized format, simplifying the detection process. The platform is able to identify and categorize food items and estimate the percentage of FW on each plate. Following a machine learning methodology, Figure 9 illustrates the user interaction and data processing involved in the platform. Specifically, the process involves the following key steps in a canteen context environment:
  • After finishing their meal, the user proceeds to the designated area and places their tray within the detection zone, which is clearly marked in the space.
  • Once the tray reaches the detection station, an image of the tray is captured using a camera positioned above it. The image is then sent to the server application.
  • The image is processed and the food recognition phase begins. First, the system detects the plate in the image to isolate the area of interest. Then, the segmentation module separates the food items from one another. After segmentation, a pixelation step refines the detection by analyzing food areas at a detailed pixel level, improving measurement accuracy. Finally, the identified regions are classified using the YOLOv11 model, which assigns each object a label and confidence score.
  • After processing the image, the calculation of FW begins, and the percentage of waste for the meal is displayed. Here, FoodArea refers to the area occupied by leftover food, PlateArea is the total area of the plate, and GarbageArea represents the space occupied by inedible or discarded items.
  • Calculation: The final step involves calculating the percentage of FW, using the following formula:
    Waste (%) = FoodArea / (PlateArea − GarbageArea) × 100    (6)
  • Finally, the user can remove the tray from the detection area, allowing the process to be initiated for another user. Figure 9 illustrates the flow of image processing in our FW monitoring system, demonstrating the step-by-step process from image acquisition to waste calculation.
Below is a description of the stages shown in the diagram:
  • Input image: The process begins with the input image of a tray containing food. This image is captured by a camera positioned above the plate to provide a clear view of the meal, including the plate and its contents.
  • Plate detection: The second step focuses on detecting the plate. The system automatically identifies the boundaries of the plate within the input image. This is a crucial step in isolating the food items from the surrounding elements, such as the tray or other objects on the table. By recognizing the plate, the system narrows the area of interest for subsequent processing.
  • Segmentation, pixelation, and identification with YOLOv11: After detecting the plate, the system uses the YOLOv11 model to perform an integrated process of food segmentation, pixel-level analysis, and item identification. First, segmentation visually separates the different components of the meal. Then, the pixelation step refines this separation by analyzing the food areas with a higher level of precision, which is essential for reliable area measurement. Finally, the identification stage classifies the detected objects (such as rice, fork, meat), assigning each one a category and a confidence score. This integrated process allows for accurate distinction between consumed and discarded food, providing a solid foundation for the prediction and waste calculation stages.
  • Prediction: Based on the information obtained in the previous step, the system performs the final prediction of the food items and calculates the estimated percentage of waste. Detected areas are highlighted and labeled with their respective classes and confidence levels.
In the context of waste calculation, to determine the FW on plates, we consider the area of the plate and detect the area of each individual item. Then, for each item, we detect its category (garbage, food, or waste) and analyze its area in pixels using Formula (6). The system then calculates the percentage of Waste (W) based on the proportion of food left on the plate compared to the total food originally served.
This entire workflow ensures precise identification and quantification of FW in a school canteen environment, providing valuable data to reduce waste and improve food consumption patterns.
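A minimal sketch of this calculation is shown below, assuming a trained yolov11l-seg model whose predicted instance masks provide the pixel areas used in Formula (6). The class names, the grouping into garbage classes, and the weight file path are assumptions based on the labels described above, not the exact implementation in the repository.

```python
# Sketch of Formula (6) applied to a YOLOv11-seg prediction: pixel areas are taken
# from the predicted instance masks, items are grouped into plate / food / garbage
# by class name, and the waste percentage is computed.
from ultralytics import YOLO

# Garbage classes as described in Section 3.1 (singular names assumed).
GARBAGE = {"garbage", "fork", "knife", "spoon", "cup", "bread", "board", "chips"}

def waste_percentage(image_path: str, model: YOLO) -> float:
    result = model(image_path)[0]
    if result.masks is None:
        return 0.0

    plate_area = food_area = garbage_area = 0.0
    names = result.names
    for cls_id, mask in zip(result.boxes.cls.tolist(), result.masks.data):
        # Areas are measured in mask pixels at the model's mask resolution;
        # since all terms share the same scale, the ratio is unaffected.
        area = float(mask.sum())
        label = names[int(cls_id)]
        if label == "plate":
            plate_area += area
        elif label in GARBAGE:
            garbage_area += area
        else:  # any food class (e.g., rice, french fries, steak)
            food_area += area

    denom = plate_area - garbage_area
    return 100.0 * food_area / denom if denom > 0 else 0.0

model = YOLO("best.pt")  # trained yolov11l-seg weights (path assumed)
print(f"Estimated waste: {waste_percentage('tray.jpg', model):.1f}%")
```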

4. Experiments and Results

4.1. Systems Architecture

The platform architecture we will examine encompasses the training specifications and the deployment specifications. To train and validate the model, we used an NVIDIA Tesla T4 [41], one of Google Colab’s [40] free-tier GPUs, which can be accessed by anyone at no additional cost. To access this GPU, with or without a notebook open, we go to the top-left corner, click on “Runtime” and then “Change runtime type”, and choose the desired GPU, in this case the T4. This allows us to connect to a “hosted runtime”, a cloud instance provided by Google with the chosen GPU. Finally, to connect to it, we just need to click on the “Connect” button in the top-right corner.
To deploy the trained model and therefore allow users to test it with their own images, we created a simple integration between a FastAPI web server [43] and a Svelte app [44]. The FastAPI server acts as the back-end, where all processing occurs, and the Svelte app is where the process is shown to the end user. For instance, if the user uploads an image to the front-end (Svelte app), the back-end (FastAPI) processes it and then returns the detection to the front-end, following the flow shown in Figure 10.
To process the image, we use YOLOv11 in a dedicated endpoint on the FastAPI server, which receives the image file, performs the detection, and returns the image encoded in base64 so it can be easily displayed in the Svelte app without the need to transfer files between services.
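A minimal sketch of such an endpoint is shown below, assuming the Ultralytics Python API and a trained weights file; the route name, model path, and response fields are illustrative rather than the exact implementation in the repository.

```python
# Sketch of a FastAPI endpoint that receives an image, runs YOLOv11 inference,
# and returns the annotated image as a base64 string for the Svelte front-end.
import base64
import cv2
import numpy as np
from fastapi import FastAPI, File, UploadFile
from ultralytics import YOLO

app = FastAPI()
model = YOLO("best.pt")  # trained yolov11l-seg weights (path assumed)

@app.post("/detect")
async def detect(file: UploadFile = File(...)):
    raw = await file.read()
    image = cv2.imdecode(np.frombuffer(raw, np.uint8), cv2.IMREAD_COLOR)

    result = model(image)[0]
    annotated = result.plot()                 # BGR image with boxes/masks drawn
    ok, buffer = cv2.imencode(".jpg", annotated)

    return {
        "detections": result.boxes.cls.tolist(),  # class ids of detected items
        "image": base64.b64encode(buffer.tobytes()).decode("utf-8"),
    }
```

The front-end can then render the returned base64 string directly, for example through a data:image/jpeg;base64 URL in an image element.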
To deploy a demo of the app online, we use a server with the specifications listed in Table 4. It is important to note that the specifications include no GPU; although a GPU would provide a significant boost to detection speed, the system can run on a CPU alone, with the downside of lower performance.

4.2. Exploration and Results

In the previous sections, we presented the context, the dataset, the calculation formula, the YOLO benchmark focusing on version 11, and the platform architecture. Based on this structure, we developed a web platform that allows the exploration of this work in detecting food waste on plates. We tested the platform using 50 images of plates after meals and evaluated the model. Some of these images, previously shown from the dataset, are illustrated in the following figure.
Figure 11 presents visual examples of the detection performed by the platform on different food plates. In images Figure 11a,b,d, multiple objects on the plate are detected, with clustering applied to each identified item, such as cutlery, food, and waste. These images illustrate the platform’s ability to segment and correctly classify various elements present on the plates after the meal. In contrast, image Figure 11c shows only the result of clustering applied to the food, without individual object detection, serving as a comparative baseline of the platform’s operation under different analysis modes. As an illustration, for the four images presented in Figure 11, Table 5 presents the results regarding food waste percentage and the wasted food.
In Figure 11a, the platform correctly identified the cutlery (knife and forks) and a cup; however, it incorrectly detected a plate with traces of soup, which likely led to an inaccurate waste percentage calculation due to the misidentification. In Figure 11b, a plate with food remnants and utensils is observed, where the platform successfully detected, in addition to the cutlery and cup, the presence of garbage beneath the plate, demonstrating the model’s ability to correctly distinguish waste. In Figure 11c, the platform confidently identified the presence of french fries and rice, as well as a plate and a partially visible knife, highlighting its accuracy in detecting uneaten food. Finally, in Figure 11d, the detection covered a wide range of elements, including rice, garbage, potatoes, cutlery, and additional items such as a cup and a spoon, reinforcing the platform’s robustness even in more complex scenarios with multiple items on the plate.
The platform output illustrated in Figure 12 presents the detected image, the clustered image, and the waste calculation result, respectively. Additionally, the interface includes a section that describes how the calculation was performed. All images were processed by the proposed platform, as exemplified in Figure 12, where object detection and clustering outputs are generated, followed by the calculation of the corresponding waste percentage. The resulting data from this process are presented in Table 5, aiming to demonstrate the platform’s effectiveness across different real-world scenarios.
Additionally, we tested the platform with food plate images from the following datasets: TossIt Plates Computer Vision Project [45], Finding Defects Computer Vision Project [46], and FoodWasteDetectionV2 Computer Vision Project [47].
Some examples of these images are illustrated in the following figure. Three images from each dataset are presented, organized in the same order as the datasets mentioned above and grouped by row. As an illustration, for the nine images presented in Figure 13, Table 6 presents the results regarding food waste percentage and the wasted food.
The individual analysis of each image further highlights the model’s performance in various scenarios. Figure 13a showed a low waste percentage, as it contained only small food remnants. Figure 13b,d,f detected 0% waste. In Figure 13b, the plate contained only a napkin, which was correctly classified as garbage. In Figure 13d, the plate showed no visible food leftovers. Finally, in Figure 13f the plate contained only bones, which were also accurately identified as inedible waste. These cases demonstrate the platform’s ability to effectively distinguish between food, garbage, and non-food objects.
In contrast, Figure 13c,e,g,i presented low to moderate waste percentages, reflecting the presence of small amounts of uneaten food, which the platform was able to detect with precision. Lastly, Figure 13h stood out with a 100% waste percentage, as the plate still contained the full meal, clearly indicating a case of complete non-consumption.
The platform output provides a visual representation of the platform’s processing workflow. On the left side, the interface displays the detected image with annotated objects and their respective classifications, along with a clustered version of the same image to highlight the spatial distribution of the identified items. On the right side, the interface presents a dedicated section that explains the waste calculation formula used by the platform. This section outlines the core principles of the computation, including which elements are considered (e.g., food area, inedible items such as garbage and utensils) and how the waste percentage is derived based on pixel areas. This combination of visual and analytical feedback allows users to understand both the qualitative and quantitative aspects of the food waste detection process.
Similarly, for the nine images presented in Figure 13, which come from external datasets, the same processing pipeline was applied using the proposed platform. As illustrated in Figure 14, the platform performed object detection, clustering, and subsequently calculated the estimated food waste percentage for each image. The results of this analysis are shown in Table 6, providing further evidence of the platform’s applicability and robustness when tested on data sources different from the original dataset.
With the exception of some instances of misclassification, the images were generally processed correctly by the platform, and the obtained results largely reflect the actual food waste present on each plate, demonstrating the reliability and consistency of our approach. This close alignment between automatic detection and real-world conditions is a key aspect of the platform’s effectiveness. To determine whether an image was correctly classified or misclassified, we validated it by observation: if the plate is full of food, the system is expected to return 100% FW; if it is half empty, 50%; and so on.
As a research work focusing on the applicability of CV models, particularly YOLOv11, this project naturally presents limitations associated with the base dataset and the performance of the YOLOv11 model. However, we applied the project in the most realistic context possible, and we consider the results promising. Additionally, we’ve paved the way for possible research improvements since the dataset, code, and instructions are all available on the GitHub repository.

5. Conclusions and Future Work

This work focuses on the development of an innovative system for the automatic detection of food waste on school canteen plates, using Computer Vision (CV) techniques based on the YOLOv11 model. Through the creation of a custom dataset, the use of three benchmarking datasets, the application of DA techniques, and benchmarking across different YOLO versions, the effectiveness of YOLOv11 was demonstrated in accurately identifying food and waste in controlled environments such as school canteens.
The promising performance metrics obtained—mAP of 0.343, precision of 0.62, and recall of 0.322—as well as other evaluation metrics extracted from the model, confirm the system’s potential. It can serve not only to reduce food waste but also as an educational tool and a decision-support resource for nutritionists and school administrators. The open-source availability of the code and trained models reinforces the project’s commitment to sustainability, scientific transparency, and the encouragement of future research.
This work presents a CV platform for FW detection in canteen plates, exploring a research gap in automated FW detection using CV models. The contributions are the following: it provides a recent literature review of the YOLO framework applications for food detection, food in plates detection, and food waste detection, and it provides an open-source GitHub repository, with an implementation of YOLOv11 for food waste detection in canteen plates, which can be freely utilized and explored for research purposes.
Beyond its contribution to food waste mitigation, the system aims to generate valuable insights into consumption and rejection patterns of specific food items, providing a foundation for concrete actions to improve the nutritional quality of school meals.
As future work, we propose expanding the dataset with a larger and more diverse set of images, including different types of plates, food items, and lighting conditions, in order to improve the model’s generalization, as well as exploring images with noise. Additionally, we aim to optimize the system’s accuracy through fine-tuning of the model’s hyperparameters. It will also be interesting to explore other pretrained CV architectures (e.g., MobileNet, ResNet, and EfficientNet, among others) and compare them with the one implemented in this work. Furthermore, we will complement the study with large language models, in order to compare the performance of the waste-percentage calculation on canteen plates and to generate messages promoting sustainable eating, such as encouraging the consumption of vegetables. A practical application to explore involves implementing the system in a real school environment using low-cost devices such as the Raspberry Pi. This solution would enable the automatic capture of images at the end of meals and their subsequent processing, providing an accessible and scalable alternative for continuous food waste monitoring. The use of a Raspberry Pi also facilitates integration with existing canteen systems, promoting automated analysis. Finally, it would be valuable to conduct longitudinal studies in schools to evaluate the impact of the system on reducing food waste and improving students’ eating habits.

Author Contributions

Conceptualization, methodology, supervision, investigation, formal analysis, and writing (original draft preparation), J.R.; software development and visualization, J.F. and P.C.; project administration, J.R., J.F. and P.C.; resources and validation, J.R. All authors contributed equally to the formal analysis, investigation, and writing (review and editing). All authors have read and agreed to the published version of the manuscript.

Funding

This work has been supported in terms of publication fees by ADiT-LAB—Applied Digital Transformation Laboratory, Instituto Politécnico de Viana do Castelo with the grant reference: BII_01_2025_ADiT-Lab.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article. In terms of privacy, camera placement, and data storage, the information used in this work was anonymised, and only images of plates were captured, without any association with a user. The data (images) were stored in a storage unit at the authors’ institution. However, since the code and data will be available on the GitHub repository, we recommend that the usage of the platform and the storage of the data remain anonymous and in accordance with the General Data Protection Regulation.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI | Artificial Intelligence
CNN | Convolutional Neural Network
CV | Computer Vision
DA | Data Augmentation
DCNNs | Deep Convolutional Neural Networks
DNN | Deep Neural Networks
FW | Food Waste
GAN | Generative Adversarial Networks
GELAN | Generalized Efficient Layer Aggregation Network
IoU | Intersection over Union
MFD | Multidimensional Feature Fusion
PGI | Programmable Gradient Information
ReLU | Rectified Linear Unit
SVM | Support Vector Machines

Appendix A. Computer Vision Model General Metrics

Figure A1. The model’s metrics table.

References

  1. European Communities; Directorate-General for Environment. Preparatory Study on Food Waste Across EU 27—Final Report; Publications Office: Luxembourg, 2011. [Google Scholar] [CrossRef]
  2. Derqui, B.; Fernandez, V.; Fayos, T. Towards more sustainable food systems. Addressing food waste at school canteens. Appetite 2018, 129, 1–11. [Google Scholar] [CrossRef]
  3. Belot, M.; James, J. Healthy school meals and educational outcomes. J. Health Econ. 2011, 30, 489–504. [Google Scholar] [CrossRef] [PubMed]
  4. Cohen, J.F.W.; Richardson, S.A.; Cluggish, S.A.; Parker, E.; Catalano, P.J.; Rimm, E.B. Effects of Choice Architecture and Chef-Enhanced Meals on the Selection and Consumption of Healthier School Foods: A Randomized Clinical Trial. JAMA Pediatr. 2015, 169, 431–437. [Google Scholar] [CrossRef] [PubMed]
  5. What Is YOLO? The Ultimate Guide [2025]—blog.roboflow.com. Available online: https://blog.roboflow.com/guide-to-yolo-models/ (accessed on 11 March 2025).
  6. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar] [CrossRef]
  7. Hussain, M. YOLO-v1 to YOLO-v8, the Rise of YOLO and Its Complementary Nature toward Digital Manufacturing and Industrial Defect Detection. Machines 2023, 11, 677. [Google Scholar] [CrossRef]
  8. A Guide to the YOLO Family of Computer Vision Models—dataphoenix.info. Available online: https://dataphoenix.info/a-guide-to-the-yolo-family-of-computer-vision-models/ (accessed on 11 March 2025).
  9. Alkandary, K.; Yildiz, A.S.; Meng, H. A Comparative Study of YOLO Series (v3–v10) with DeepSORT and StrongSORT: A Real-Time Tracking Performance Study. Electronics 2025, 14, 876. [Google Scholar] [CrossRef]
  10. Evolution of YOLO: A Timeline of Versions and Advancements in Object Detection-YOLOvX by Wiserli!—yolovx.com. Available online: https://yolovx.com/evolution-of-yolo-a-timeline-of-versions-and-advancements-in-object-detection/ (accessed on 11 March 2025).
  11. Boesch, G. YOLOv8: A Complete Guide [2025 Update]-viso.ai—viso.ai. Available online: https://viso.ai/deep-learning/yolov8-guide/ (accessed on 11 March 2025).
  12. Boesch, G. YOLO Explained: From v1 to Present-viso.ai—viso.ai. Available online: https://viso.ai/computer-vision/yolo-explained (accessed on 11 March 2025).
  13. Wang, C.Y.; Yeh, I.H.; Liao, H.Y.M. YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information. arXiv 2024, arXiv:2402.13616. [Google Scholar]
  14. Mehta, P.; Vaghela, R.; Pansuriya, N.; Sarda, J.; Bhatt, N.; Bhoi, A.K.; Srinivasu, P.N. Benchmarking YOLO Variants for Enhanced Blood Cell Detection. Int. J. Imaging Syst. Technol. 2025, 35, e70037. [Google Scholar] [CrossRef]
  15. Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J.; Ding, G. YOLOv10: Real-Time End-to-End Object Detection. arXiv 2024, arXiv:2405.14458. [Google Scholar]
  16. Khanam, R.; Hussain, M. YOLOv11: An Overview of the Key Architectural Enhancements. arXiv 2024, arXiv:2410.17725. [Google Scholar]
  17. Boesch, G. YOLO11: A New Iteration of “You Only Look Once”—viso.ai. Available online: https://viso.ai/computer-vision/yolov11/ (accessed on 11 March 2025).
  18. Zhang, Y.; Deng, L.; Zhu, H.; Wang, W.; Ren, Z.; Zhou, Q.; Lu, S.; Sun, S.; Zhu, Z.; Gorriz, J.M.; et al. Deep learning in food category recognition. Inf. Fusion 2023, 98, 101859. [Google Scholar] [CrossRef]
  19. Lan, X.; Lyu, J.; Jiang, H.; Dong, K.; Niu, Z.; Zhang, Y.; Xue, J. FoodSAM: Any Food Segmentation. IEEE Trans. Multimed. 2024, 27, 2795–2808. [Google Scholar] [CrossRef]
  20. Martinel, N.; Foresti, G.L.; Micheloni, C. Wide-Slice Residual Networks for Food Recognition. arXiv 2016, arXiv:1612.06543. [Google Scholar]
  21. Goswami, A.; Liu, H. Deep Dish: Deep Learning for Classifying Food Dishes; Stanford University: Stanford, CA, USA, 2017. [Google Scholar]
  22. Ciocca, G.; Micali, G.; Napoletano, P. State Recognition of Food Images Using Deep Features. IEEE Access 2020, 8, 32003–32017. [Google Scholar] [CrossRef]
  23. Ciocca, G.; Napoletano, P.; Schettini, R. Food Recognition and Leftover Estimation for Daily Diet Monitoring; Springer: Cham, Switzerland, 2015; Volume 9281, pp. 334–341. [Google Scholar] [CrossRef]
  24. Ciocca, G.; Napoletano, P.; Schettini, R. Food Recognition: A New Dataset, Experiments, and Results. IEEE J. Biomed. Health Inform. 2016, 21, 588–598. [Google Scholar] [CrossRef] [PubMed]
  25. Pouladzadeh, P.; Shirmohammadi, S. Mobile Multi-Food Recognition Using Deep Learning. ACM Trans. Multimed. Comput. Commun. Appl. (TOMM) 2017, 13, 1–21. [Google Scholar] [CrossRef]
  26. Sousa, J.; Rebelo, A.; Cardoso, J.S. Automation of Waste Sorting with Deep Learning. In Proceedings of the 2019 XV Workshop de Visão Computacional (WVC), São Bernardo do Campo, Brazil, 9–11 September 2019; pp. 43–48. [Google Scholar] [CrossRef]
  27. Fan, J.; Cui, L.; Fei, S. Waste Detection System Based on Data Augmentation and YOLO_EC. Sensors 2023, 23, 3646. [Google Scholar] [CrossRef] [PubMed]
  28. Dhelia, A.; Chordia, S.; B, K. YOLO-Based Food Damage Detection: An Automated Approach for Quality Control in Food Industry. In Proceedings of the 2024 8th International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), Kirtipur, Nepal, 3–5 October 2024; pp. 1444–1449. [Google Scholar] [CrossRef]
  29. Team, R. Food Waste Detection Instance Segmentation Dataset and Pre-Trained Model by Project-Eyfif—universe.roboflow.com. 2025. Available online: https://universe.roboflow.com/project-eyfif/proj3-food-waste-detection/ (accessed on 2 February 2025).
  30. Team, R. Roboflow: Computer Vision Tools for Developers and Enterprises—roboflow.com. 2025. Available online: https://roboflow.com/ (accessed on 2 February 2025).
  31. Team, R. Roboflow Annotate: Label Images Faster Than Ever—roboflow.com. 2025. Available online: https://roboflow.com/annotate (accessed on 2 February 2025).
  32. Biswas, A.; Nasim, M.A.A.; Imran, A.; Sejuty, A.T.; Fairooz, F.; Puppala, S.; Talukder, S. Generative Adversarial Networks for Data Augmentation. arXiv 2023, arXiv:2306.02019. [Google Scholar]
  33. Singh, A.; Jaiswal, V.; Joshi, G.; Sanjeeve, A.; Gite, S.; Kotecha, K. Neural Style Transfer: A Critical Review. IEEE Access 2021, 9, 131583–131613. [Google Scholar] [CrossRef]
  34. Shorten, C.; Khoshgoftaar, T.M. A survey on image data augmentation for Deep Learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
  35. Passos, C.A.; Haddad, R.B. Benchmarking: A tool for the improvement of production management. IFAC Proc. Vol. 2013, 46, 577–581. [Google Scholar] [CrossRef]
  36. Talha, M.M.; Khan, H.U.; Iqbal, S.; Alghobiri, M.; Iqbal, T.; Fayyaz, M. Deep learning in news recommender systems: A comprehensive survey, challenges and future trends. Neurocomputing 2023, 562, 126881. [Google Scholar] [CrossRef]
  37. Im, Y.J.; Chang, T.W.; Park, S.; Kim, H.S. Object detection models and active learning for improvement of e-waste collection management systems in Korea. Waste Manag. 2025, 194, 379–389. [Google Scholar] [CrossRef]
  38. Park, S.S.; Tran, V.T.; Lee, D.E. Application of Various YOLO Models for Computer Vision-Based Real-Time Pothole Detection. Appl. Sci. 2021, 11, 11229. [Google Scholar] [CrossRef]
  39. Notebooks/Notebooks/Train-Yolo11-Instance-Segmentation-on-Custom-Dataset.Ipynb at Main · Roboflow/Notebooks. Available online: https://github.com/roboflow/notebooks/blob/main/notebooks/train-yolo11-instance-segmentation-on-custom-dataset.ipynb?ref=blog.roboflow.com (accessed on 29 April 2025).
  40. Google Colab. Available online: https://colab.google/ (accessed on 4 February 2025).
  41. NVIDIA T4 Tensor Core GPUs for Accelerating Inference—nvidia.com. Available online: https://www.nvidia.com/en-us/data-center/tesla-t4/ (accessed on 4 February 2025).
  42. Ultralytics. YOLO11—docs.ultralytics.com. Available online: https://docs.ultralytics.com/pt/models/yolo11/#supported-tasks-and-modes (accessed on 29 April 2025).
  43. FastAPI—fastapi.tiangolo.com. Available online: https://fastapi.tiangolo.com/ (accessed on 5 February 2025).
  44. Svelte: Web Development for the Rest of US—svelte.dev. Available online: https://svelte.dev/ (accessed on 5 February 2025).
  45. Tossit. Tossit Plates Dataset. 2024. Available online: https://universe.roboflow.com/tossit/tossit-plates (accessed on 29 April 2025).
  46. Seoyoung. Finding Defects Dataset. 2024. Available online: https://universe.roboflow.com/seoyoung-tzzvn/finding-defects (accessed on 29 April 2025).
  47. Koo. FoodWasteDetectionV2 Dataset. 2024. Available online: https://universe.roboflow.com/koo-a3lwf/foodwastedetectionv2 (accessed on 30 April 2025).
Figure 1. Key architectural modules in YOLOv11.
Figure 2. Flowchart of Food Waste detection using YOLOv11.
Figure 3. Some images from the custom dataset.
Figure 4. Example of the rotating DA in the dataset.
Figure 5. Training and validation loss and metrics for YOLOv5.
Figure 6. Training and validation loss and metrics for YOLOv8.
Figure 7. Training and validation loss and metrics for YOLOv11.
Figure 8. An example of CNN architecture for image classification.
Figure 9. Flow to calculate the waste percentage using the YOLO model.
Figure 10. General app flow implementation.
Figure 11. Exploration results for some image detections.
Figure 12. Platform interface showing object detection (a) and waste calculation output (b).
Figure 13. Exploration results for some image detections from other datasets.
Figure 14. Platform interface showing object detection (a) and waste calculation output (b).
Table 1. Implementations of CV in food detection and FW.
Paper | Year | Category | CV Type
Martinel N. et al. [20] | 2016 | Food detection | CNN
Goswami A. et al. [21] | 2017 | Food detection | CNN
Ciocca G. et al. [22] | 2020 | Food detection | CNN
Ciocca G. et al. [23] | 2015 | Food detection in plates | CNN
Ciocca G. et al. [24] | 2016 | Food detection in plates | CNN
Pouladzadeh P. et al. [25] | 2017 | Food detection in plates | CNN
Sousa J. et al. [26] | 2019 | FW detection | Faster R-CNN
Fan J. et al. [27] | 2023 | FW detection | YOLOv4
Dhelia A. et al. [28] | 2024 | FW detection | YOLOv8
Table 2. Database analytics.
Type | Data
Number of Images | 175
Number of Annotations | 1265 (avg. 7.2 per image)
Average Image Size | 2.36 MP
Median Image Ratio | 1200 × 2000
Table 3. Benchmarking results from all YOLO versions.
YOLO Version | Precision | Recall | mAP50 | mAP50-95
YOLO v5 | 0.714 | 0.0596 | 0.0615 | 0.0397
YOLO v8 | 0.417 | 0.237 | 0.239 | 0.191
YOLO v11 | 0.62 | 0.322 | 0.343 | 0.302
Table 4. Server specifications.
Hardware | Specification
OS | Ubuntu 24.04
CPU | Intel Xeon E5-2650 (10 cores)
CPU Sockets | 2
RAM | 24 GB DDR3
GPU | None
Disk | 128 GB HDD SAS
Virtualization | Virtual Machine (KVM)
Manufacturer | Hewlett Packard Enterprise, Palo Alto, CA, USA
Table 5. Waste percentage detected on the sample images from Figure 11.
Image | Waste Percentage (%) | Wasted Food
Figure 11a | 0.00 | None
Figure 11b | 0.00 | None
Figure 11c | 100.00 | Rice, French Fries, Breaded
Figure 11d | 42.84 | Rice, Baked Potatoes
Table 6. Waste percentage detected on the sample images from Figure 13.
Image | Waste Percentage (%) | Wasted Food
Figure 13a | 1.70 | Baked Potatoes
Figure 13b | 0.00 | None
Figure 13c | 29.26 | Scrambled Eggs
Figure 13d | 0.00 | None
Figure 13e | 22.10 | Rice
Figure 13f | 0.00 | None
Figure 13g | 11.15 | Steak, French Fries
Figure 13h | 100.00 | Pasta, Beef
Figure 13i | 23.14 | Steak, French Fries
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
