mid-DeepLabv3+: A Novel Approach for Image Semantic Segmentation Applied to African Food Dietary Assessments
Abstract
1. Introduction
- We propose a novel segmentation model called mid-DeepLabv3+. Our model is inspired by the well-known semantic segmentation architecture DeepLabv3+ [9], with three key modifications. First, our backbone is a reduced version of ResNet50 from which we exclude the last convolution block. This modification reduces the number of parameters of the model but also degrades its performance. To recover this loss, we second introduce an additional middle layer in the decoder path, and third a SimAM [10] attention mechanism. The new middle layer reintroduces more general extracted features that may have been lost along the encoder path. These enhancements improve the model’s segmentation performance while making it twice as light as the DeepLabv3+ model with the ResNet50 backbone. The mid-DeepLabv3+ code is shared on GitHub (https://github.com/babanthierry94/mid-DeepLabv3plus (accessed on 26 December 2023)).
- Public food segmentation datasets are rare, and building a new dataset remains a tedious task, but one that enables research in the field to advance. We present the first dataset for food image segmentation focusing on African cuisine. This dataset, named CamerFood10, includes images of the ten most consumed Cameroonian food classes. It comprises a total of 1241 images, divided into a training set of 1032 images and a validation set of 209 images. The CamerFood10 dataset is publicly available (https://drive.google.com/drive/u/1/folders/1MugfmVehtIjjyqtphs-4u0GksuHy3Vjz (accessed on 26 December 2023)).
2. Related Work
2.1. Food Image Dataset
| Year | Name | Usage | No. of Classes | No. of Images | Collection Method | Origin |
|---|---|---|---|---|---|---|
| 2012 | UECFOOD100 [13] | D | 100 | 9060 | Web | As, Eu |
| 2014 | UECFOOD256 [14] | D | 256 | 31,397 | Web | As, Eu |
| 2015 | Food201-Segmented [22] | S | 201 | 12,525 | Food-101 [21] | USA |
| 2017 | School Lunch Dataset [23] | D | 21 | 3940 | Normalized | As |
| 2017 | UNIMIB2016 [18] | S | 73 | 1027 | Normalized | Italy |
| 2019 | SUEC Food [24] | S | 256 | 31,995 | UECFOOD256 | As, Eu |
| 2020 | MyFood [15] | S | 9 | 1250 | Web | Brazil |
| 2020 | Food50Seg [25] | S | 50 | 5000 | Web | China |
| 2021 | MyFoodRepo-273 [26] | S | 273 | 24,119 | Users | Eu |
| 2021 | FoodSeg103 [16] | S | 103 | 9490 | Web | As, Eu |
| 2021 | UECFoodPixComplete [17] | S | 102 | 10,000 | UECFOOD100 | As, Eu |
| 2022 | ChineseDiabetesFood187 [20] | S | 187 | 4052 | Users | China |
| 2023 | CamerFood10 (ours) | S | 10 | 1241 | Web | Cameroon |
2.2. Segmentation Model for Food Image
3. Proposed Method
3.1. CamerFood10 Dataset
3.2. Proposed Model Architecture
3.2.1. Backbone and Feature Extraction
3.2.2. Encoder
3.2.3. Decoder
3.2.4. Attention Module
Algorithm 1: TensorFlow-like implementation of SimAM
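SimAM is parameter-free: for each neuron it computes an energy from the squared deviation of that activation from its channel's spatial mean, then gates the feature map with a sigmoid of the inverse energy, so distinctive neurons are amplified. A minimal NumPy sketch of this computation (the NHWC layout and the regularizer value `e_lambda` are assumptions; the paper's original listing is TensorFlow-based):

```python
import numpy as np

def simam(x, e_lambda=1e-4):
    """Parameter-free SimAM attention over an NHWC feature map."""
    _, h, w, _ = x.shape
    n = h * w - 1
    # squared deviation of each activation from its channel's spatial mean
    mu = x.mean(axis=(1, 2), keepdims=True)
    d = (x - mu) ** 2
    # spatial variance per channel, normalized by n = H*W - 1
    v = d.sum(axis=(1, 2), keepdims=True) / n
    # inverse energy: activations far from the mean get larger values
    e_inv = d / (4.0 * (v + e_lambda)) + 0.5
    # sigmoid gate rescales the original features; no learned weights anywhere
    return x * (1.0 / (1.0 + np.exp(-e_inv)))
```

Because the gate lies in (0, 1), the module never increases an activation's magnitude; it only attenuates less distinctive ones, which is why it adds zero parameters to the model.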
4. Experiments and Results
4.1. Compared Approaches
4.1.1. U-Net
4.1.2. GourmetNet
4.1.3. FCN-8
4.1.4. ResUnet
4.1.5. DeepLabv3+
4.2. Evaluation Metrics
4.2.1. Mean Pixel Accuracy (mPA)
4.2.2. Mean Intersection Over Union (mIoU)
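Both metrics follow from the per-class confusion matrix: mPA averages, over classes, the fraction of each class's ground-truth pixels that are correctly labeled, while mIoU averages the intersection-over-union between predicted and ground-truth pixel sets. A small NumPy sketch (function names are illustrative, not from the paper; it assumes every class appears at least once in the ground truth):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    # y_true, y_pred: flat integer label arrays of equal length
    m = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(m, (y_true, y_pred), 1)  # m[i, j] = pixels of class i predicted as j
    return m

def mean_pixel_accuracy(m):
    # per-class accuracy: correct pixels / ground-truth pixels of that class
    return np.mean(np.diag(m) / m.sum(axis=1))

def mean_iou(m):
    tp = np.diag(m)
    # union = ground-truth pixels + predicted pixels - intersection, per class
    union = m.sum(axis=1) + m.sum(axis=0) - tp
    return np.mean(tp / union)
```

mIoU penalizes false positives as well as false negatives, which is why it is usually lower than mPA on the same predictions.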
4.3. Experimental Environment
4.4. Results
4.4.1. mid-DeepLabv3+ Performance Analysis
4.4.2. Performance with Different Attention Mechanisms
4.4.3. Comparison with Other CNN Benchmark Models
4.4.4. Evaluation on Another Dataset
4.4.5. CamerFood10 Class Performance Analysis
5. Conclusions and Discussion
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. World Health Organization. Noncommunicable Diseases: Progress Monitor 2022; World Health Organization: Geneva, Switzerland, 2022.
2. Iriti, M.; Varoni, E.M.; Vitalini, S. Healthy diets and modifiable risk factors for non-communicable diseases—The European perspective. Foods 2020, 9, 940.
3. Min, W.; Jiang, S.; Liu, L.; Rui, Y.; Jain, R. A survey on food computing. ACM Comput. Surv. 2019, 52, 1–36.
4. Wang, W.; Min, W.; Li, T.; Dong, X.; Li, H.; Jiang, S. A review on vision-based analysis for automatic dietary assessment. Trends Food Sci. Technol. 2022, 122, 223–237.
5. Subhi, M.A.; Ali, S.H.; Mohammed, M.A. Vision-based approaches for automatic food recognition and dietary assessment: A survey. IEEE Access 2019, 7, 35370–35381.
6. Tay, W.; Kaur, B.; Quek, R.; Lim, J.; Henry, C.J. Current developments in digital quantitative volume estimation for the optimisation of dietary assessment. Nutrients 2020, 12, 1167.
7. Tahir, G.A.; Loo, C.K. A comprehensive survey of image-based food recognition and volume estimation methods for dietary assessment. Healthcare 2021, 9, 1676.
8. Lu, Y.; Stathopoulou, T.; Vasiloglou, M.F.; Pinault, L.F.; Kiley, C.; Spanakis, E.K.; Mougiakakou, S. goFOODTM: An artificial intelligence system for dietary assessment. Sensors 2020, 20, 4283.
9. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818.
10. Yang, L.; Zhang, R.Y.; Li, L.; Xie, X. SimAM: A simple, parameter-free attention module for convolutional neural networks. In Proceedings of the International Conference on Machine Learning (PMLR), Virtual Event, 18–24 July 2021; pp. 11863–11874.
11. Konstantakopoulos, F.S.; Georga, E.I.; Fotiadis, D.I. A Review of Image-based Food Recognition and Volume Estimation Artificial Intelligence Systems. IEEE Rev. Biomed. Eng. 2023; online ahead of print.
12. Park, D.; Lee, J.; Lee, J.; Lee, K. Deep learning based food instance segmentation using synthetic data. In Proceedings of the 18th International Conference on Ubiquitous Robots (UR), Gangneung, Republic of Korea, 12–14 July 2021; pp. 499–505.
13. Matsuda, Y.; Hoashi, H.; Yanai, K. Recognition of multiple-food images by detecting candidate regions. In Proceedings of the IEEE International Conference on Multimedia and Expo, Melbourne, VIC, Australia, 9–13 July 2012; pp. 25–30.
14. Kawano, Y.; Yanai, K. Automatic Expansion of a Food Image Dataset Leveraging Existing Categories with Domain Adaptation. In Proceedings of the Computer Vision-ECCV 2014 Workshops, Zurich, Switzerland, 6–7 September 2014; Proceedings, Part III 13. Springer: Berlin/Heidelberg, Germany, 2015; pp. 3–17.
15. Freitas, C.N.; Cordeiro, F.R.; Macario, V. MyFood: A food segmentation and classification system to aid nutritional monitoring. In Proceedings of the 33rd SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Virtual Event, 7–10 September 2020; pp. 234–239.
16. Wu, X.; Fu, X.; Liu, Y.; Lim, E.P.; Hoi, S.C.; Sun, Q. A large-scale benchmark for food image segmentation. In Proceedings of the 29th ACM International Conference on Multimedia, Virtual Event, 20–24 October 2021; pp. 506–515.
17. Okamoto, K.; Yanai, K. UEC-FoodPIX Complete: A Large-Scale Food Image Segmentation Dataset. In Proceedings of the Pattern Recognition ICPR International Workshops and Challenges, Virtual Event, 10–15 January 2021; Proceedings, Part V. Springer: Berlin/Heidelberg, Germany, 2021; pp. 647–659.
18. Ciocca, G.; Napoletano, P.; Schettini, R. Food recognition: A new dataset, experiments, and results. IEEE J. Biomed. Health Inform. 2016, 21, 588–598.
19. Jalal, M.; Wang, K.; Jefferson, S.; Zheng, Y.; Nsoesie, E.O.; Betke, M. Scraping social media photos posted in Kenya and elsewhere to detect and analyze food types. In Proceedings of the 5th International Workshop on Multimedia Assisted Dietary Management, Nice, France, 21 October 2019; pp. 50–59.
20. Liang, Y.; Li, J.; Zhao, Q.; Rao, W.; Zhang, C.; Wang, C. Image Segmentation and Recognition for Multi-Class Chinese Food. In Proceedings of the 2022 IEEE International Conference on Image Processing (ICIP), Bordeaux, France, 16–19 October 2022; pp. 3938–3942.
21. Bossard, L.; Guillaumin, M.; Van Gool, L. Food-101—Mining Discriminative Components with Random Forests. In Proceedings of the Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Proceedings, Part VI 13. Springer: Berlin/Heidelberg, Germany, 2014; pp. 446–461.
22. Meyers, A.; Johnston, N.; Rathod, V.; Korattikara, A.; Gorban, A.; Silberman, N.; Guadarrama, S.; Papandreou, G.; Huang, J.; Murphy, K.P. Im2Calories: Towards an automated mobile vision food diary. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1233–1241.
23. Ege, T.; Yanai, K. Estimating food calories for multiple-dish food photos. In Proceedings of the 2017 4th IAPR Asian Conference on Pattern Recognition (ACPR), Nanjing, China, 26–29 November 2017; pp. 646–651.
24. Gao, J.; Tan, W.; Ma, L.; Wang, Y.; Tang, W. MUSEFood: Multi-Sensor-based food volume estimation on smartphones. In Proceedings of the 2019 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), Leicester, UK, 19–23 August 2019; pp. 899–906.
25. Aslan, S.; Ciocca, G.; Mazzini, D.; Schettini, R. Benchmarking algorithms for food localization and semantic segmentation. Int. J. Mach. Learn. Cybern. 2020, 11, 2827–2847.
26. Mohanty, S.P.; Singhal, G.; Scuccimarra, E.A.; Kebaili, D.; Héritier, H.; Boulanger, V.; Salathé, M. The food recognition benchmark: Using deep learning to recognize food in images. Front. Nutr. 2022, 9, 875143.
27. Chopra, M.; Purwar, A. Recent studies on segmentation techniques for food recognition: A survey. Arch. Comput. Methods Eng. 2022, 29, 865–878.
28. Minaee, S.; Boykov, Y.; Porikli, F.; Plaza, A.; Kehtarnavaz, N.; Terzopoulos, D. Image segmentation using deep learning: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3523–3542.
29. Dehais, J.; Anthimopoulos, M.; Mougiakakou, S. Food image segmentation for dietary assessment. In Proceedings of the 2nd International Workshop on Multimedia Assisted Dietary Management, Amsterdam, The Netherlands, 16 October 2016; pp. 23–28.
30. Wang, Y.; Liu, C.; Zhu, F.; Boushey, C.J.; Delp, E.J. Efficient superpixel based segmentation for food image analysis. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 2544–2548.
31. E Silva, B.V.R.; Rad, M.G.; Cui, J.; McCabe, M.; Pan, K. A mobile-based diet monitoring system for obesity management. J. Health Med. Inform. 2018, 9, 307.
32. Kawano, Y.; Yanai, K. FoodCam: A real-time food recognition system on a smartphone. Multimed. Tools Appl. 2015, 74, 5263–5287.
33. Pouladzadeh, P.; Kuhad, P.; Peddi, S.V.B.; Yassine, A.; Shirmohammadi, S. Food calorie measurement using deep learning neural network. In Proceedings of the 2016 IEEE International Instrumentation and Measurement Technology Conference Proceedings, Taipei, Taiwan, 23–26 May 2016; pp. 1–6.
34. Bolanos, M.; Radeva, P. Simultaneous food localization and recognition. In Proceedings of the 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico, 4–8 December 2016; pp. 3140–3145.
35. Sun, J.; Radecka, K.; Zilic, Z. FoodTracker: A real-time food detection mobile application by deep convolutional neural networks. arXiv 2019, arXiv:1909.05994.
36. Chiang, M.L.; Wu, C.A.; Feng, J.K.; Fang, C.Y.; Chen, S.W. Food calorie and nutrition analysis system based on Mask R-CNN. In Proceedings of the IEEE 5th International Conference on Computer and Communications (ICCC), Chengdu, China, 6–9 December 2019; pp. 1721–1728.
37. Sharma, U.; Artacho, B.; Savakis, A. GourmetNet: Food segmentation using multi-scale waterfall features with spatial and channel attention. Sensors 2021, 21, 7504.
38. Artacho, B.; Savakis, A. OmniPose: A multi-scale framework for multi-person pose estimation. arXiv 2021, arXiv:2103.10180.
39. Aguilar, E.; Nagarajan, B.; Remeseiro, B.; Radeva, P. Bayesian deep learning for semantic segmentation of food images. Comput. Electr. Eng. 2022, 103, 108380.
40. Sobngwi, E. Atlas des Aliments de Consommation Courante au Cameroun, 1st ed.; RSD Institute (Recherche Sante et Développement): Yaounde, Cameroon, 2021; p. 228.
41. Dutta, A.; Zisserman, A. The VIA annotation software for images, audio and video. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 2276–2279.
42. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
43. Fakhfakh, M.; Bouaziz, B.; Gargouri, F.; Chaari, L. ProgNet: COVID-19 Prognosis Using Recurrent and Convolutional Neural Networks. Open Med. Imaging J. 2019, 12, 1–7.
44. Das, S.; Fime, A.A.; Siddique, N.; Hashem, M. Estimation of road boundary for intelligent vehicles based on DeepLabv3+ architecture. IEEE Access 2021, 9, 121060–121075.
45. Chen, L.C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image segmentation. arXiv 2017, arXiv:1706.05587.
46. Guo, M.H.; Xu, T.X.; Liu, J.J.; Liu, Z.N.; Jiang, P.T.; Mu, T.J.; Zhang, S.H.; Martin, R.R.; Cheng, M.M.; Hu, S.M. Attention mechanisms in computer vision: A survey. Comput. Vis. Media 2022, 8, 331–368.
47. Jia, J.; Song, J.; Kong, Q.; Yang, H.; Teng, Y.; Song, X. Multi-Attention-Based Semantic Segmentation Network for Land Cover Remote Sensing Images. Electronics 2023, 12, 1347.
48. Azad, R.; Asadi-Aghbolaghi, M.; Fathy, M.; Escalera, S. Attention DeepLabv3+: Multi-Level Context Attention Mechanism for Skin Lesion Segmentation. In Proceedings of the Computer Vision–ECCV 2020 Workshops, Glasgow, UK, 23–28 August 2020; Proceedings, Part I 16. Springer: Berlin/Heidelberg, Germany, 2020; pp. 251–266.
49. Zeng, H.; Peng, S.; Li, D. DeepLabv3+ semantic segmentation model based on feature cross attention mechanism. J. Phys. Conf. Ser. 2020, 1678, 012106.
50. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141.
51. Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
52. Park, J.; Woo, S.; Lee, J.Y.; Kweon, I.S. BAM: Bottleneck attention module. arXiv 2018, arXiv:1807.06514.
53. Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient channel attention for deep convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11534–11542.
54. Misra, D.; Nalamada, T.; Arasanipalai, A.U.; Hou, Q. Rotate to attend: Convolutional triplet attention module. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2021; pp. 3139–3148.
55. Hou, Q.; Zhou, D.; Feng, J. Coordinate attention for efficient mobile network design. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 13713–13722.
56. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
57. Zhang, Z.; Liu, Q.; Wang, Y. Road extraction by deep residual U-Net. IEEE Geosci. Remote Sens. Lett. 2018, 15, 749–753.
58. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18. Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
59. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
60. Balouek, D.; Carpen Amarie, A.; Charrier, G.; Desprez, F.; Jeannot, E.; Jeanvoine, E.; Lèbre, A.; Margery, D.; Niclausse, N.; Nussbaum, L.; et al. Adding Virtualization Capabilities to the Grid’5000 Testbed. In Cloud Computing and Services Science; Ivanov, I.I., van Sinderen, M., Leymann, F., Shan, T., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2013; Volume 367, pp. 3–20.
| Split | One Class | Two Classes | Three Classes | Four Classes |
|---|---|---|---|---|
| Training | 76.65% | 22.09% | 1.16% | 0.10% |
| Validation | 81.82% | 17.70% | 0.48% | 0.00% |
| Architecture | mIoU (%) | mPA (%) |
|---|---|---|
| DeepLabv3+ (ResNet50) [9] | 58.88 | 88.73 |
| Baseline | 62.46 | 89.87 |
| Baseline and SimAM | 64.13 | 90.37 |
| Baseline and Mid-Layer | 65.17 | 90.46 |
| Baseline and Mid-Layer and SimAM (ours) | 65.20 | 90.47 |
| Attention Mechanism | mIoU (%) | mPA (%) | #Params |
|---|---|---|---|
| Our model + SimAM [10] | 65.20 | 90.47 | 10,406,762 |
| ECA [53] | 62.60 | 89.79 | +2,100 |
| CoordAtt [55] | 62.41 | 89.83 | +900 |
| TripletAtt [54] | 62.20 | 89.88 | +11,800 |
| SE [50] | 64.33 | 90.27 | +11,200 |
| BAM [52] | 63.78 | 90.03 | +33,200 |
| CBAM [51] | 60.79 | 89.11 | +11,500 |
| Model | Backbone | mIoU (%) | mPA (%) | #Params (M) | #FLOPS (M) |
|---|---|---|---|---|---|
| DeepLabv3+ [9] | ResNet50 | 58.88 | 88.73 | 27.91 | 1397.47 |
| DeepLabv3+ [9] | ResNet101 | 51.67 | 86.48 | 46.98 | 1785.36 |
| DeepLabv3+ [9] | Xception | 58.15 | 88.27 | 42.19 | 755.45 |
| GourmetNet [37] | ResNet101 | 27.52 | 76.24 | 47.39 | 2549.08 |
| Unet [58] | VGG16 | 49.74 | 87.86 | 25.86 | 4372.14 |
| FCN-8 [56] | ResNet50 | 61.88 | 89.67 | 451.51 | 2756.86 |
| FCN-8 [56] | VGG16 | 49.18 | 85.01 | 134.35 | 790.44 |
| ResUnet [57] | – | 43.79 | 85.23 | 8.23 | 3621.53 |
| mid-DeepLabv3+ | ResNet50 | 65.20 | 90.47 | 10.41 | 1465.92 |
| Model | Backbone | mIoU (%) | mPA (%) |
|---|---|---|---|
| DeepLabv3+ [9] | ResNet50 | 68.10 | 88.81 |
| DeepLabv3+ [9] | ResNet101 | 65.61 | 87.74 |
| DeepLabv3+ [9] | Xception | 62.24 | 85.81 |
| GourmetNet [37] | ResNet101 | 30.12 | 70.70 |
| Unet [58] | VGG16 | 56.94 | 83.07 |
| FCN-8 [56] | ResNet50 | 63.67 | 87.15 |
| FCN-8 [56] | VGG16 | 52.53 | 80.87 |
| ResUnet [57] | – | 43.13 | 76.57 |
| mid-DeepLabv3+ | ResNet50 | 69.23 | 89.26 |
| Class Name | mIoU (%) |
|---|---|
| White rice | 77.84 |
| Fried plantain | 76.12 |
| Puff-puff | 72.41 |
| Koki | 63.80 |
| Tomato soup | 61.88 |
| Waterfufu | 60.57 |
| Taro | 58.98 |
| Bobolo | 53.69 |
| Yellow soup | 53.44 |
| Beans | 48.95 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Baban A Erep, T.R.; Chaari, L. mid-DeepLabv3+: A Novel Approach for Image Semantic Segmentation Applied to African Food Dietary Assessments. Sensors 2024, 24, 209. https://doi.org/10.3390/s24010209