Search Results (14)

Search Parameters:
Keywords = automatic dietary monitoring

14 pages, 13449 KB  
Article
Multi-View Edge Attention Network for Fine-Grained Food Image Segmentation
by Chengxu Liu, Guorui Sheng, Weiqing Min, Xiaojun Wu and Shuqiang Jiang
Foods 2025, 14(17), 3016; https://doi.org/10.3390/foods14173016 - 28 Aug 2025
Viewed by 1574
Abstract
Precisely identifying and delineating food regions automatically from images, a task known as food image segmentation, is crucial for enabling applications in food science such as automated dietary logging, accurate nutritional analysis, and food safety monitoring. However, accurately segmenting food images, particularly delineating food edges with precision, remains challenging due to the wide variety and diverse forms of food items, frequent inter-food occlusion, and ambiguous boundaries between food and backgrounds or containers. To overcome these challenges, we proposed a novel method called the Multi-view Edge Attention Network (MVEANet), which focuses on enhancing the fine-grained segmentation of food edges. The core idea behind this method is to integrate information obtained from observing food from different perspectives to achieve a more comprehensive understanding of its shape and specifically to strengthen the processing capability for food contour details. Rigorous testing on two large public food image datasets, FoodSeg103 and UEC-FoodPIX Complete, demonstrates that MVEANet surpasses existing state-of-the-art methods in segmentation accuracy, performing exceptionally well in depicting clear and precise food boundaries. This work provides the field of food science with a more accurate and reliable tool for automated food image segmentation, offering strong technical support for the development of more intelligent dietary assessment, nutritional research, and health management systems. Full article
(This article belongs to the Special Issue Food Computing-Enabled Precision Nutrition)

11 pages, 3294 KB  
Article
Toward a User-Accessible Spectroscopic Sensing Platform for Beverage Recognition Through K-Nearest Neighbors Algorithm
by Luca Montaina, Elena Palmieri, Ivano Lucarini, Luca Maiolo and Francesco Maita
Sensors 2025, 25(14), 4264; https://doi.org/10.3390/s25144264 - 9 Jul 2025
Cited by 1 | Viewed by 1902
Abstract
Proper nutrition is a fundamental aspect to maintaining overall health and well-being, influencing both physical and social aspects of human life; an unbalanced or inadequate diet can lead to various nutritional deficiencies and chronic health conditions. In today’s fast-paced world, monitoring nutritional intake has become increasingly important, particularly for those with specific dietary needs. While smartphone-based applications using image recognition have simplified food tracking, they still rely heavily on user interaction and raise concerns about practicality and privacy. To address these limitations, this paper proposes a novel, compact spectroscopic sensing platform for automatic beverage recognition. The system utilizes the AS7265x commercial sensor to capture the spectral signature of beverages, combined with a K-Nearest Neighbors (KNN) machine learning algorithm for classification. The approach is designed for integration into everyday objects, such as smart glasses or cups, offering a noninvasive and user-friendly alternative to manual tracking. Through optimization of both the sensor configuration and KNN parameters, we identified a reduced set of four wavelengths that achieves over 96% classification accuracy across a diverse range of common beverages. This demonstrates the potential for embedding accurate, low-power, and cost-efficient sensors into Internet of Things (IoT) devices for real-time nutritional monitoring, reducing the need for user input while enhancing accessibility and usability. Full article
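The classification step described above is a standard K-Nearest Neighbors majority vote over the reduced set of four spectral wavelengths. As a rough illustration only (not the authors' implementation; the 4-channel "signatures" and beverage labels below are invented), a minimal KNN classifier might look like:

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Classify a spectral reading by majority vote among its k nearest
    labelled neighbours (Euclidean distance over the wavelength channels)."""
    neighbours = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Toy 4-channel "spectral signatures" (hypothetical values, not the paper's data).
train = [
    ((0.9, 0.2, 0.1, 0.4), "coffee"),
    ((0.8, 0.3, 0.2, 0.5), "coffee"),
    ((0.1, 0.9, 0.8, 0.2), "orange juice"),
    ((0.2, 0.8, 0.9, 0.3), "orange juice"),
]
print(knn_classify(train, (0.85, 0.25, 0.15, 0.45)))  # → coffee
```

In practice the training set would hold many readings per beverage from the AS7265x sensor, and k and the channel subset would be tuned as the paper describes.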

17 pages, 2798 KB  
Article
A Comprehensive LC–MS Metabolomics Assay for Quantitative Analysis of Serum and Plasma
by Lun Zhang, Jiamin Zheng, Mathew Johnson, Rupasri Mandal, Meryl Cruz, Miriam Martínez-Huélamo, Cristina Andres-Lacueva and David S. Wishart
Metabolites 2024, 14(11), 622; https://doi.org/10.3390/metabo14110622 - 14 Nov 2024
Cited by 18 | Viewed by 8364
Abstract
Background/Objectives: Targeted metabolomics is often criticized for the limited metabolite coverage that it offers. Indeed, most targeted assays developed or used by researchers measure fewer than 200 metabolites. In an effort to both expand the coverage and improve the accuracy of metabolite quantification in targeted metabolomics, we decided to develop a comprehensive liquid chromatography–tandem mass spectrometry (LC–MS/MS) assay that could quantitatively measure more than 700 metabolites in serum or plasma. Methods: The developed assay makes use of chemical derivatization followed by reverse phase LC–MS/MS and/or direct flow injection MS (DFI–MS) in both positive and negative ionization modes to separate metabolites. Multiple reaction monitoring (MRM), in combination with isotopic standards and multi-point calibration curves, is used to detect and absolutely quantify the targeted metabolites. The assay has been adapted to a 96-well plate format to enable automated, high-throughput sample analysis. Results: The assay (called MEGA) is able to detect and quantify 721 metabolites in serum/plasma, covering 20 metabolite classes and many commonly used clinical biomarkers. The limits of detection were determined to range from 1.4 nM to 10 mM, recovery rates were from 80% to 120%, and quantitative precision was within 20%. LC–MS/MS metabolite concentrations of the NIST® SRM®1950 plasma standard were found to be within 15% of NMR quantified levels. The MEGA assay was further validated in a large dietary intervention study. Conclusions: The MEGA assay should make comprehensive quantitative metabolomics much more affordable, accessible, automatable, and applicable to large-scale clinical studies. Full article
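Absolute quantification against multi-point calibration curves, as used in the MEGA assay, amounts to fitting the instrument response (analyte/internal-standard ratio) at known standard concentrations and inverting the fit for unknowns. A minimal sketch with invented calibration points (not the assay's actual data or software):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical calibration points: analyte/internal-standard response
# ratios measured at known concentrations (uM).
conc  = [0.5, 1.0, 2.0, 4.0]
ratio = [0.26, 0.51, 1.02, 2.04]
a, b = fit_line(conc, ratio)

def quantify(observed_ratio):
    """Invert the calibration line to recover an unknown concentration."""
    return (observed_ratio - b) / a

print(round(quantify(1.53), 2))  # an unknown falling near 3.0 uM
```

Real MRM workflows add weighting, per-metabolite calibration ranges, and QC checks, but the core ratio-to-concentration inversion is the same.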
(This article belongs to the Special Issue Method Development in Metabolomics and Exposomics)

8 pages, 1101 KB  
Article
Albinism and Blood Cell Profile: The Peculiar Case of Asinara Donkeys
by Maria Grazia Cappai, Alice Senes and Giovannantonio Pilo
Animals 2024, 14(18), 2641; https://doi.org/10.3390/ani14182641 - 11 Sep 2024
Cited by 1 | Viewed by 2525
Abstract
The complete blood cell count (CBC) was screened in a group of 15 donkeys, of which 8 were of the Asinara breed (oculocutaneous albinism type 1, OCA1) and 7 of the Sardo breed (gray coat). All donkeys were kept under the same management and dietary conditions and underwent periodic health monitoring in the month of June 2024, at the peak of the positive photoperiod, at Mediterranean latitudes. One aliquot of whole blood, drawn from each individual into K2-EDTA tubes, was analyzed for the complete blood cell count with an automatic analyzer within two hours of sampling. Data were analyzed and compared by one-way ANOVA, with breed as the independent variable. All animals appeared clinically healthy, though mild eosinophilia was observed in Sardo donkeys. The red blood cell line showed peculiar traits in Asinara donkeys, which displayed significantly higher circulating red blood cell numbers than gray-coat Sardo donkeys (RBC, 5.19 vs. 3.80 × 10^12/mL ± 0.98 pooled St. Dev., respectively; p = 0.017). RBCs also exhibited a smaller diameter and a higher degree of anisocytosis in Asinara donkeys, along with a lower hematocrit value, albeit within physiological ranges. Taken together, this hematological profile depicts a peculiar trait of the red blood cell line in albino donkeys during the positive photoperiod. Full article
(This article belongs to the Special Issue Current Research on Donkeys and Mules)

16 pages, 1463 KB  
Article
Eating Event Recognition Using Accelerometer, Gyroscope, Piezoelectric, and Lung Volume Sensors
by Sigert J. Mevissen, Randy Klaassen, Bert-Jan F. van Beijnum and Juliet A. M. Haarman
Sensors 2024, 24(2), 571; https://doi.org/10.3390/s24020571 - 16 Jan 2024
Cited by 2 | Viewed by 2484
Abstract
In overcoming the worldwide problem of overweight and obesity, automatic dietary monitoring (ADM) has been introduced to support dieting practices. ADM aims to automatically, continuously, and objectively measure dimensions of food intake in a free-living environment. This could simplify the food registration process, thereby overcoming frequent memory, underestimation, and overestimation problems. In this study, an eating event detection sensor system was developed comprising a smartwatch worn on the wrist containing an accelerometer and gyroscope for eating gesture detection, a piezoelectric sensor worn on the jaw for chewing detection, and a respiratory inductance plethysmographic sensor consisting of two belts worn around the chest and abdomen for food swallowing detection. These sensors were combined to determine to what extent a combination of sensors focusing on different steps of the dietary cycle can improve eating event classification results. Six subjects participated in an experiment in a controlled setting consisting of both eating and non-eating events. Features were computed for each sensing measure to train a support vector machine model. This resulted in F1-scores of 0.82 for eating gestures, 0.94 for chewing food, and 0.58 for swallowing food. Full article
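The per-sensor results above are reported as F1-scores, the harmonic mean of precision and recall over detected eating events. A small self-contained sketch of how such a score is computed (the label sequences below are illustrative, not the study's data):

```python
def f1_score(y_true, y_pred, positive="eating"):
    """F1 = 2 * precision * recall / (precision + recall) for one positive class."""
    tp = sum(t == p == positive for t, p in zip(y_true, y_pred))
    fp = sum(p == positive and t != positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy ground-truth vs. classifier output for five windows.
y_true = ["eating", "eating", "idle", "eating", "idle"]
y_pred = ["eating", "idle", "idle", "eating", "eating"]
print(round(f1_score(y_true, y_pred), 2))  # → 0.67
```

The study would compute this per sensing modality (gestures, chewing, swallowing) over the SVM's windowed predictions.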
(This article belongs to the Section Wearables)

18 pages, 8533 KB  
Article
Bio-Inspired Spotted Hyena Optimizer with Deep Convolutional Neural Network-Based Automated Food Image Classification
by Hany Mahgoub, Ghadah Aldehim, Nabil Sharaf Almalki, Imène Issaoui, Ahmed Mahmud and Amani A. Alneil
Biomimetics 2023, 8(6), 493; https://doi.org/10.3390/biomimetics8060493 - 18 Oct 2023
Cited by 8 | Viewed by 2958
Abstract
Food image classification, an interesting subdomain of Computer Vision (CV) technology, focuses on the automatic classification of food items represented through images. This technology has gained immense attention in recent years thanks to its widespread applications spanning dietary monitoring and nutrition studies to restaurant recommendation systems. By leveraging the developments in Deep-Learning (DL) techniques, especially the Convolutional Neural Network (CNN), food image classification has been developed as an effective process for interacting with and understanding the nuances of the culinary world. The deep CNN-based automated food image classification method is a technology that utilizes DL approaches, particularly CNNs, for the automatic categorization and classification of the images of distinct kinds of foods. The current research article develops a Bio-Inspired Spotted Hyena Optimizer with a Deep Convolutional Neural Network-based Automated Food Image Classification (SHODCNN-FIC) approach. The main objective of the SHODCNN-FIC method is to recognize and classify food images into distinct types. The presented SHODCNN-FIC technique exploits the DL model with a hyperparameter tuning approach for the classification of food images. To accomplish this objective, the SHODCNN-FIC method exploits the DCNN-based Xception model to derive the feature vectors. Furthermore, the SHODCNN-FIC technique uses the SHO algorithm for optimal hyperparameter selection of the Xception model. The SHODCNN-FIC technique uses the Extreme Learning Machine (ELM) model for the detection and classification of food images. A detailed set of experiments was conducted to demonstrate the better food image classification performance of the proposed SHODCNN-FIC technique. The wide range of simulation outcomes confirmed the superior performance of the SHODCNN-FIC method over other DL models. Full article
(This article belongs to the Special Issue Bionic Artificial Neural Networks and Artificial Intelligence)

10 pages, 626 KB  
Article
The Impact of Different Types of Rice and Cooking on Postprandial Glycemic Trends in Children with Type 1 Diabetes with or without Celiac Disease
by Antonio Colasanto, Silvia Savastio, Erica Pozzi, Carlotta Gorla, Jean Daniel Coïsson, Marco Arlorio and Ivana Rabbone
Nutrients 2023, 15(7), 1654; https://doi.org/10.3390/nu15071654 - 29 Mar 2023
Cited by 14 | Viewed by 5626
Abstract
The aims of this study were to evaluate: (i) the chemical and nutritional composition of rice before and after cooking and (ii) postprandial glycemic impacts in children and adolescents with type 1 diabetes (T1D) after eating two different types of rice (“Gigante Vercelli” white rice and “Artemide” black rice) or white rice cooked “risotto” style or boiled using an advanced hybrid closed loop (AHCL) system (Tandem Control-IQTM). General composition and spectrophotometric analyses of raw and cooked rice were performed. Eight T1D subjects (four males and four females, aged 11 ± 1.4 years), two with celiac disease (CD), using an AHCL system were enrolled. “Gigante Vercelli” white rice cooked as risotto or boiled and boiled “Artemide” rice were prepared by the same cook on two evenings. Continuous glucose monitoring metrics were evaluated for 12 h after meal consumption. Total dietary fiber was higher for both rice types after cooking compared with raw rice. Cooking as risotto increased polyphenols and antioxidants (p < 0.05) in both rice varieties, and total starch decreased after boiling (p < 0.05) in white rice. There was a significant peak in glycemia after consuming risotto and boiled white rice (p < 0.05), while the mean glycemic peak remained <180 mg/dL in individuals eating boiled Artemide rice. There were no significant differences in automatic basal or auto-bolus insulin deliveries by the AHCL according to different types of rice or cooking method. Our findings suggest that glycemic trends are impacted by the different chemical and nutritional profiles of rice but are nevertheless well controlled by AHCL systems. Full article
(This article belongs to the Special Issue Nutrition and Immunobiology of Celiac Disease)

23 pages, 796 KB  
Review
Precision Livestock Farming Applications (PLF) for Grazing Animals
by Christos Tzanidakis, Ouranios Tzamaloukas, Panagiotis Simitzis and Panagiotis Panagakis
Agriculture 2023, 13(2), 288; https://doi.org/10.3390/agriculture13020288 - 25 Jan 2023
Cited by 88 | Viewed by 14130
Abstract
Over the past four decades the dietary needs of the global population have been elevated, with increased consumption of animal products predominately due to the advancing economies of South America and Asia. As a result, livestock production systems have expanded in size, with considerable changes to the animals’ management. As grazing animals are commonly grown in herds, economic and labour constraints limit the ability of the producer to individually assess every animal. Precision Livestock Farming refers to the real-time continuous monitoring and control systems using sensors and computer algorithms for early problem detection, while simultaneously increasing producer awareness concerning individual animal needs. These technologies include automatic weighing systems, Radio Frequency Identification (RFID) sensors for individual animal detection and behaviour monitoring, body temperature monitoring, geographic information systems (GIS) for pasture evaluation and optimization, unmanned aerial vehicles (UAVs) for herd management, and virtual fencing for herd and grazing management. Although some commercial products are available, mainly for cattle, the adoption of these systems is limited due to economic and cultural constraints and poor technological infrastructure. This review presents and discusses PLF applications and systems for grazing animals and proposes future research and strategies to improve PLF adoption and utilization in today’s extensive livestock systems. Full article
(This article belongs to the Section Agricultural Technology)

16 pages, 2118 KB  
Article
Dietary Nutritional Information Autonomous Perception Method Based on Machine Vision in Smart Homes
by Hongyang Li and Guanci Yang
Entropy 2022, 24(7), 868; https://doi.org/10.3390/e24070868 - 24 Jun 2022
Cited by 11 | Viewed by 3203
Abstract
In order to automatically perceive the user’s dietary nutritional information in the smart home environment, this paper proposes a dietary nutritional information autonomous perception method based on machine vision in smart homes. Firstly, we proposed a food-recognition algorithm based on YOLOv5 to monitor the user’s dietary intake using the social robot. Secondly, in order to obtain the nutritional composition of the user’s dietary intake, we calibrated the weight of food ingredients and designed the method for the calculation of food nutritional composition; then, we proposed a dietary nutritional information autonomous perception method based on machine vision (DNPM) that supports the quantitative analysis of nutritional composition. Finally, the proposed algorithm was tested on the self-expanded dataset CFNet-34 based on the Chinese food dataset ChineseFoodNet. The test results show that the average recognition accuracy of the food-recognition algorithm based on YOLOv5 is 89.7%, showing good accuracy and robustness. According to the performance test results of the dietary nutritional information autonomous perception system in smart homes, the average nutritional composition perception accuracy of the system was 90.1%, the response time was less than 6 ms, and the speed was higher than 18 fps, showing excellent robustness and nutritional composition perception performance. Full article
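Once foods are recognized and their ingredient weights calibrated, the nutritional-composition step reduces to scaling per-100 g nutrient entries by the estimated weight and summing over detected items. A hedged sketch with an invented nutrient table (the paper's actual tables and values are not reproduced here):

```python
# Hypothetical per-100 g nutrient table; food names and values are
# illustrative only, not taken from the paper's dataset.
NUTRIENTS = {
    "rice":    {"kcal": 130, "protein_g": 2.7,  "carbs_g": 28.0},
    "chicken": {"kcal": 165, "protein_g": 31.0, "carbs_g": 0.0},
}

def meal_nutrition(detections):
    """Sum nutrient totals over (food_label, estimated_weight_g) detections,
    scaling each per-100 g entry by the calibrated weight."""
    totals = {"kcal": 0.0, "protein_g": 0.0, "carbs_g": 0.0}
    for label, grams in detections:
        for key, per100 in NUTRIENTS[label].items():
            totals[key] += per100 * grams / 100.0
    return totals

# e.g. detector output: 150 g of rice and 120 g of chicken on the plate.
print(meal_nutrition([("rice", 150), ("chicken", 120)]))
```

In the paper's pipeline, the `(label, weight)` pairs would come from the YOLOv5 detector and the weight-calibration step rather than being supplied by hand.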
(This article belongs to the Special Issue Information Theory-Based Deep Learning Tools for Computer Vision)

26 pages, 1260 KB  
Article
Estimating Dietary Intake from Grocery Shopping Data—A Comparative Validation of Relevant Indicators in Switzerland
by Jing Wu, Klaus Fuchs, Jie Lian, Mirella Lindsay Haldimann, Tanja Schneider, Simon Mayer, Jaewook Byun, Roland Gassmann, Christine Brombach and Elgar Fleisch
Nutrients 2022, 14(1), 159; https://doi.org/10.3390/nu14010159 - 29 Dec 2021
Cited by 20 | Viewed by 6847
Abstract
In light of the globally increasing prevalence of diet-related chronic diseases, new scalable and non-invasive dietary monitoring techniques are urgently needed. Automatically collected digital receipts from loyalty cards hereby promise to serve as an objective and automatically traceable digital marker for individual food choice behavior and do not require users to manually log individual meal items. With the introduction of the General Data Privacy Regulation in the European Union, millions of consumers gained the right to access their shopping data in a machine-readable form, representing a historic chance to leverage shopping data for scalable monitoring of food choices. Multiple quantitative indicators for evaluating the nutritional quality of food shopping have been suggested, but so far, no comparison has validated the potential of these alternative indicators within a comparative setting. This manuscript thus represents the first study to compare the calibration capacity and to validate the discrimination potential of previously suggested food shopping quality indicators for the nutritional quality of shopped groceries, including the Food Standards Agency Nutrient Profiling System Dietary Index (FSA-NPS DI), Grocery Purchase Quality Index-2016 (GPQI), Healthy Eating Index-2015 (HEI-2015), Healthy Trolley Index (HETI) and Healthy Purchase Index (HPI), checking if any of them performs differently from the others. The hypothesis is that some food shopping quality indicators outperform the others in calibrating and discriminating individual actual dietary intake. To assess the indicators’ potentials, 89 eligible participants completed a validated food frequency questionnaire (FFQ) and donated their digital receipts from the loyalty card programs of the two leading Swiss grocery retailers, which represent 70% of the national grocery market. 
Compared to absolute food and nutrient intake, correlations between density-based relative food and nutrient intake and food shopping data are stronger. The FSA-NPS DI has the best calibration and discrimination performance in classifying participants’ consumption of nutrients and food groups, and seems to be a superior indicator to estimate nutritional quality of a user’s diet based on digital receipts from grocery shopping in Switzerland. Full article
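Calibration capacity of a shopping-quality indicator is typically assessed by correlating indicator scores against FFQ-derived intake. A minimal illustration with invented participant data (not the study's dataset) using the Pearson correlation coefficient:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between an indicator score and reported intake."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical participants: (shopping-quality score, FFQ fruit-and-vegetable
# servings per day) -- values invented for illustration.
scores   = [42, 55, 61, 48, 70]
servings = [2.1, 3.0, 3.4, 2.6, 4.1]
print(round(pearson_r(scores, servings), 3))
```

The study compares several such indicators (FSA-NPS DI, GPQI, HEI-2015, HETI, HPI) on exactly this kind of score-versus-intake association, with density-based relative intake giving the stronger correlations.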

37 pages, 7316 KB  
Review
A Comprehensive Survey of Image-Based Food Recognition and Volume Estimation Methods for Dietary Assessment
by Ghalib Ahmed Tahir and Chu Kiong Loo
Healthcare 2021, 9(12), 1676; https://doi.org/10.3390/healthcare9121676 - 3 Dec 2021
Cited by 90 | Viewed by 12890
Abstract
Dietary studies showed that dietary problems such as obesity are associated with other chronic diseases, including hypertension, irregular blood sugar levels, and increased risk of heart attacks. The primary cause of these problems is poor lifestyle choices and unhealthy dietary habits, which are manageable using interactive mHealth apps. However, traditional dietary monitoring systems using manual food logging suffer from imprecision, underreporting, time consumption, and low adherence. Recent dietary monitoring systems tackle these challenges by automatic assessment of dietary intake through machine learning methods. This survey discusses the best-performing methodologies that have been developed so far for automatic food recognition and volume estimation. Firstly, the paper presented the rationale of visual-based methods for food recognition. Then, the core of the study is the presentation, discussion, and evaluation of these methods based on popular food image databases. In this context, this study discusses the mobile applications that are implementing these methods for automatic food logging. Our findings indicate that around 66.7% of surveyed studies use visual features from deep neural networks for food recognition. Similarly, all surveyed studies employed a variant of convolutional neural networks (CNN) for ingredient recognition due to recent research interest. Finally, this survey ends with a discussion of potential applications of food image analysis, existing research gaps, and open issues of this research area. Learning from unlabeled image datasets in an unsupervised manner, catastrophic forgetting during continual learning, and improving model transparency using explainable AI are potential areas of interest for future studies. Full article

9 pages, 594 KB  
Article
Dietary Changes during the COVID-19 Pandemic: A Longitudinal Study Using Objective Sequential Diet Records from an Electronic Purchase System in a Workplace Cafeteria in Japan
by Mieko Nakamura, Yoshiro Shirai and Masae Sakuma
Nutrients 2021, 13(5), 1606; https://doi.org/10.3390/nu13051606 - 11 May 2021
Cited by 9 | Viewed by 4450
Abstract
As a result of the coronavirus disease 2019 (COVID-19) pandemic-related restrictions, food systems have undergone unprecedented changes, with the potential to affect dietary behavior. We aimed to investigate workers’ dietary changes resulting from the introduction of regulations to combat COVID-19 in a Japanese factory cafeteria. Objective data on daytime dietary intake were automatically collected from electronic purchase system records. The dataset included the weekly data of 890 men from 1 July 2019 to 30 September 2020. The cafeteria regulations came into effect on 10 April 2020; in this context, the purchase of dishes and estimated dietary intake were monitored. The number of cafeteria visits decreased slightly after the introduction of the regulations. The purchase of main and side dishes also decreased, but the purchase of grain dishes was less affected. When compared with summer 2019 (pre-pandemic, no regulations: 1 July to 29 September 2019), in summer 2020 (during the pandemic and with regulations: 29 June to 30 September 2020), the estimated mean grain, meat, fish, and total energy intake was stable; however, vegetable intake decreased by 11%. As the COVID-19 pandemic continues, workplace cafeteria regulations need to be monitored to avoid unfavorable dietary changes in employees. Full article
(This article belongs to the Special Issue Nutrition within and beyond Corona Virus)

26 pages, 9722 KB  
Article
DynDSE: Automated Multi-Objective Design Space Exploration for Context-Adaptive Wearable IoT Edge Devices
by Giovanni Schiboni, Juan Carlos Suarez, Rui Zhang and Oliver Amft
Sensors 2020, 20(21), 6104; https://doi.org/10.3390/s20216104 - 27 Oct 2020
Cited by 4 | Viewed by 3262 | Correction
Abstract
We describe a simulation-based Design Space Exploration procedure (DynDSE) for wearable IoT edge devices that retrieve events from streaming sensor data using context-adaptive pattern recognition algorithms. We provide a formal characterisation of the design space, given a set of system functionalities, components and their parameters. An iterative search evaluates configurations according to a set of requirements in simulations with actual sensor data. The inherent trade-offs embedded in conflicting metrics are explored to find an optimal configuration given the application-specific conditions. Our metrics include retrieval performance, execution time, energy consumption, memory demand, and communication latency. We report a case study for the design of electromyographic-monitoring eyeglasses with applications in automatic dietary monitoring. The design space included two spotting algorithms, and two sampling algorithms, intended for real-time execution on three microcontrollers. DynDSE yielded configurations that balance retrieval performance and resource consumption with an F1 score above 80% at an energy consumption that was 70% below the default, non-optimised configuration. We expect that the DynDSE approach can be applied to find suitable wearable IoT system designs in a variety of sensor-based applications. Full article
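The trade-off exploration DynDSE performs, balancing retrieval performance against resource consumption, is in essence a multi-objective search for non-dominated configurations. A toy sketch of such a Pareto filter (configuration names and metric values are invented, not taken from the paper):

```python
def pareto_front(configs):
    """Keep configurations that no other configuration dominates,
    i.e. none is at least as good on both objectives (higher F1,
    lower energy) and strictly better on one."""
    front = []
    for name, f1, energy in configs:
        dominated = any(
            f2 >= f1 and e2 <= energy and (f2 > f1 or e2 < energy)
            for _, f2, e2 in configs
        )
        if not dominated:
            front.append(name)
    return front

# Hypothetical (configuration, F1 score, energy in mJ) evaluation points.
configs = [
    ("mcu-A/spot1", 0.84, 12.0),
    ("mcu-A/spot2", 0.81, 8.0),
    ("mcu-B/spot1", 0.80, 15.0),  # dominated by both of the above
]
print(pareto_front(configs))  # → ['mcu-A/spot1', 'mcu-A/spot2']
```

DynDSE's actual search covers more metrics (execution time, memory, latency) and iterates over simulated sensor data, but the selection principle among conflicting metrics is the same.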

18 pages, 3455 KB  
Article
Food Intake Actions Detection: An Improved Algorithm Toward Real-Time Analysis
by Ennio Gambi, Manola Ricciuti and Adelmo De Santis
J. Imaging 2020, 6(3), 12; https://doi.org/10.3390/jimaging6030012 - 17 Mar 2020
Cited by 4 | Viewed by 4330
Abstract
With the increase in life expectancy, good nutrition has become one of the most important topics for scientific research, especially for the elderly. In particular, for subjects of advanced age with health issues caused by disorders such as Alzheimer's disease and dementia, monitoring dietary habits to avoid excessive or poor nutrition plays a critical role. Starting from an application aimed at monitoring the food intake actions of people during a meal, presented in a previously published paper, the present work describes improvements that enable the application to work in real time. The solution exploits the Kinect v1 device, which can be installed on the ceiling in a top-down view in an effort to preserve the privacy of the subjects. Food intake actions are estimated from the analysis of depth frames. The innovations introduced here concern the automatic identification of the initial and final frames for the detection of food intake actions, and a substantial revision of the procedure for identifying food intake actions with respect to the original work, in order to optimize the performance of the algorithm. Evaluation of the computational effort and system performance, compared with the previous version of the application, demonstrates that the solution presented here can be applied in real time. Full article
(This article belongs to the Special Issue Image/Video Processing and Coding)
