Review

Emerging Technologies Based on Artificial Intelligence to Assess the Quality and Consumer Preference of Beverages

1 School of Agriculture and Food, Faculty of Veterinary and Agricultural Sciences, University of Melbourne, Parkville, VIC 3010, Australia
2 Department of Wine, Food and Molecular Biosciences, Faculty of Agriculture and Life Sciences, Lincoln University, 7647 Lincoln, New Zealand
* Author to whom correspondence should be addressed.
Beverages 2019, 5(4), 62; https://doi.org/10.3390/beverages5040062
Received: 23 August 2019 / Revised: 9 October 2019 / Accepted: 10 October 2019 / Published: 1 November 2019

Abstract

Beverages form a broad and important category within the food industry, comprising a wide range of sub-categories and drink types that differ in the complexity of their manufacturing and quality assessment. Traditional methods to evaluate the quality traits of beverages consist of tedious, time-consuming, and costly techniques, which do not allow researchers to obtain results in real-time. Therefore, there is a need to test and implement emerging technologies in order to automate and facilitate those analyses within this industry. This paper aimed to present the most recent publications and trends regarding the use of low-cost, reliable, and accurate remote or non-contact techniques using robotics, machine learning, computer vision, biometrics, and artificial intelligence, as well as to identify the research gaps within the beverage industry. It was found that there is a wide opportunity for the development and use of robotics and biometrics for all types of beverages, but especially for hot and non-alcoholic drinks. Furthermore, there is a lack of knowledge and clarity within the industry and research communities about the concepts of artificial intelligence and machine learning, as well as about the correct design and interpretation of models, reflected in the omission of relevant data and in the presentation of over- or under-fitted models.
Keywords: robotics; machine learning; computer vision; biometrics; artificial intelligence

1. Introduction

Beverages are often classified into three main categories: (i) Alcoholic drinks, (ii) hot drinks, and (iii) non-alcoholic drinks. Alcoholic drinks (i) refer to those beverages that contain a minimum alcohol content, which varies according to the regulations of each country; examples from this category include beer, wine, and spirits [1]. The hot drinks group (ii) is mainly composed of coffee, tea, and hot chocolate [2]. On the other hand, non-alcoholic drinks (iii) include juices, still and carbonated water, milk, and soft drinks; the latter are those that do not belong to any of the other categories and usually contain sweeteners, acids, flavoring agents and/or carbonation, such as carbonated beverages, energy drinks, and sports drinks, among others [3,4]. Quality assessment is critical to meet consumers’ demands and to offer products with high standards that comply with regulations. For all types of beverages, the quality traits to control may vary due to the specific characteristics of each product. However, in general, parameters such as color, clarity, tastes, aromas, flavors, pH, viscosity, and density are standard quality indicators [5,6].
In carbonated drinks, other descriptors such as bubble and foam-related parameters are among the most important [7,8], while proteins or amino-acids and carbohydrates must be considered in beverages such as wine, beer, coffee, tea, hot chocolate and juices [9,10,11].
Traditional methods to assess quality in beverages involve tedious and time-consuming laboratory analysis, which usually requires several expensive apparatuses, consumables, and skilled personnel to operate them. These techniques generally involve equipment such as gas chromatography (GC), high-performance liquid chromatography (HPLC), UV-visible and near-infrared spectrometers, colorimeters, and viscometers, among others [12]. Furthermore, sensory evaluation is used to assess the intensities of organoleptic descriptors, such as flavor, aroma, mouthfeel, and tastes, among others. This type of sensory test requires a fixed panel of eight to sixteen participants with several training sessions for the specific product, which makes the method costly and time-consuming, considering the sessions themselves, data handling, and statistical analysis [13]. Some industries (i.e., brewing) rely on one or two people, such as the master brewer, to assess the sensory quality of the product; however, this is not an objective, accurate, or reliable approach, as it does not follow any structured, quantitative method and does not involve statistical analysis [14]. Another type of sensory test is the acceptability test, which consists of gathering a minimum of 30–100 consumers, depending on the number of samples to evaluate and the target mean values expected for liking results. This, along with the extended questionnaires typically presented to participants, makes the technique time-consuming and subjective, as it relies on consumers’ responses, which may be biased by many factors such as cultural effects; for example, Asian consumers have shown a tendency to be polite and to avoid using the extremes of the scales [15,16]. Therefore, the use of biometrics to assess consumers’ subconscious responses by tapping into the autonomic nervous system has been implemented, obtaining interesting results, which will also be reviewed in this paper [17,18].
Some novel methods involving automated analysis techniques coupled with artificial intelligence (AI), such as computer vision (CV), have been developed to assess some of these quality parameters in food and beverages. These methods are more cost-effective and less time-consuming, and provide more objective, accurate, consistent, and reliable results [19]. They rely on different data sources, such as images or videos, which are processed using computer-based algorithms; the process may be fully automated by coupling it with robotics and integrating machine learning (ML) modeling to predict the quality of the final product from the measured parameters. These methods may also consist of multisensor systems or arrays of sensors, which are usually integrated with other AI techniques to assess specific quality parameters, such as aromas, tastes, defects, and processing conditions, among others. This paper aimed to present the most recent developments in low-cost, reliable, and accurate remote or non-contact techniques involving computer vision, robotics, machine learning, and biometric methods to assess quality in the beverage industry to date. Contact sensor technology was reviewed only where it is coupled with any of the aforementioned technologies, and for comparison purposes where appropriate.

2. Robotics

Robots are machines that can perform tasks or operations by themselves after being programmed using a computer [20]. Those tasks may be either simple and repetitive, or adaptive and more complex; the latter require the integration of other AI methods, such as CV and ML, to continually retrain and learn to carry out more advanced operations [21]. The use of robotics in the food and beverage industries has increased, as robots have the advantage of being reliable, not getting tired or bored, and completing tasks in less time with high accuracy and precision [22]. The main applications of robotics in the beverage industry are packaging, palletizing, pick-and-place, and production [23]. However, some companies and research entities have coupled robots with AI methods for more automated applications such as inspection, quality control, and, most commonly, pouring or dispensing of beverages [24].

2.1. Robotics in Alcoholic Beverages

The category of alcoholic beverages has been the most explored in terms of robotics development for product development, production, and quality assessment. A robotic cocktail mixer named Makr Shakr has been developed using AI; it works through a mobile application in which users design their cocktail from a selection of 60 different spirits, and the drink is then prepared by two robotic arms [25]. For the wine production industry, robots have been developed to carry out bottling and packaging tasks, such as the Model 94 decaser (A-B-C Packaging Machine Corporation, Tarpon Springs, FL, USA), which picks up wine bottles and places them onto a conveyor that transports them to the cleaning area, followed by filling and corking or capping, all in an automated process [26]. Conde et al. [27] developed a robot (FIZZeyeRobot, The University of Melbourne, Melbourne, Victoria, Australia) to standardize sparkling wine pouring; it is able to serve around 50 mL up to three times from the same bottle, works with Arduino® boards (Arduino Computing platform, Ivrea, Italy) and servo motors, and may be calibrated by modifying the angle of the bottle position and the delay times. This pourer was designed to control the serving volume and to further use CV to measure foam and bubble-related parameters; this will be explained later in this review.
Regarding beer, a robotic beer dispenser has been developed using two robotic arms: one is programmed to hold the glass, while the other controls the tap; both are operated using RobotStudio® (ABB Robotics, Zürich, Switzerland) [28]. Yasui et al. [29] developed an automatic pourer consisting of a pivot point at the neck of the beer bottle and a motion mechanism; however, the authors did not specify the type of motors and controllers used. This machine may hold and serve several bottles simultaneously; its purpose is to control pouring in order to measure foam collapse time, and therefore the height of the bottle is also controllable. More recently, a robotic pourer named RoboBEER (The University of Melbourne, Melbourne, VIC, Australia) was developed using LEGO® blocks and servo motors (The LEGO Group, Billund, Denmark), coupled with ubiquitous infrared temperature, carbon dioxide, and ethanol gas sensors controlled with Arduino® boards. This robot consists of a glass chamber, a bottle holder, and a pivot at the bottle neck, and may be adapted to any bottle shape and height to pour 80 ± 10 mL; coupled with CV and ML, it is able to predict beer quality based on color and foam-related parameters, which will be explained later in this paper [7,9,14,17,30,31,32].
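The pour-control logic common to these robots can be sketched as a simple feedback loop that tilts the bottle until a target volume is reached. The servo interface and flow model below are hypothetical stand-ins for illustration; the actual controllers described above run on Arduino® boards with servo motors.

```python
# Hedged sketch of an automated pour-control loop in the spirit of
# FIZZeyeRobot/RoboBEER. The flow model and angle steps are invented.

def pour(target_ml, flow_at, step_s=0.5, angle_step=5, max_angle=60):
    """Tilt the bottle in small steps until the target volume is poured."""
    poured, angle = 0.0, 0
    while poured < target_ml:
        if angle < max_angle:
            angle += angle_step            # tilt a little further (degrees)
        poured += flow_at(angle) * step_s  # volume dispensed during this step
    return round(poured, 1), angle

# Hypothetical flow model: no flow below a 20-degree tilt, then proportional.
flow = lambda a: max(0.0, (a - 20) * 0.8)

volume, final_angle = pour(80, flow)       # aim for an 80 mL serving
```

In a real robot, the calibration parameters mentioned above (bottle angle and delay times) would map onto `angle_step` and `step_s`.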

2.2. Robotics in Hot Beverages

In tea and coffee, robots have been developed for brewing and dispensing purposes. Kayaalp et al. [33] designed a tea brewing machine, which is able to make cups of tea at specific times with adequate water temperature and to record consumption patterns; this system works with Arduino® boards and Wi-Fi connectivity. Another recent development was a robot named Teforia; it is an in-home tea maker in which the user adds any combination of tea leaves and water, and the brewing process is then started from a smartphone application; it claims to be able to brew the beverage to achieve the optimal flavor profile for each consumer [34]. TeaBOT is another tea brewing robot capable of making a cup of tea in 30 s; it is pre-loaded with a selection of leaves, and the user chooses on a tablet the combination and proportion to create a personalized cup of tea [35].
In the case of coffee, a robotic coffee maker, Mugsy, was developed using Raspberry Pi (Raspberry Pi Foundation, Cambridge, UK), and it is possible to integrate it with different applications such as text messages, Twitter or an Alexa device (Amazon, Bellevue, WA, USA), which are used to indicate to the robot to start brewing the coffee. Mugsy is capable of grinding coffee beans, controlling the water temperature and pouring the brew into a cup, and is able to learn and personalize the drink for specific users [36]. Furthermore, a kiosk, CafeX, was created; it consists of two coffee makers and a six-axis robotic arm. The user is able to choose the type of brew from a tablet, and the robot is able to perform the tasks of a barista in a shorter time and with more precision [37].

2.3. Robotics in Non-Alcoholic Beverages

Some recent developments have been made related to robotics in non-alcoholic beverages; Do and Burgard [38] used a commercial robotic pourer PR2 (Willow Garage, Palo Alto, CA, USA), which consists of two arms and a screen; the authors integrated it with a camera, which is able to detect the level of the liquid to predict when to stop serving. Morita et al. [39] developed a teahouse, which consists of a motion-sensing device to count the number of people entering the establishment; they are then directed to the counter, where a microphone with speech recognition was implemented to take the order. A robotic arm was designed to pick a cup and serve the beverage ordered (orange juice, apple juice or iced tea), a second robot was programmed to pick the bottled soft drinks and take them to the clients’ table, while a third machine was intended to have a conversation with clients while waiting for the order. Cai et al. [40] developed a robotic beverage maker, which consists of a screen for the user to select the desired drink. The device has an arm to hold the container and is able to rotate and find the raw material according to the selection made, which may be tea, ice, and/or syrup, followed by the shaking step to pour it into a glass. More recently, a robotic machine was developed to make smoothies; it was designed as a vending machine, and users are able to download an application to choose the recipe according to their preferences, then a Quick Response (QR) Code is generated and read by the apparatus to make the desired beverage [41].

3. Computer Vision Techniques

The CV technique refers to a subdivision of AI, which consists of automatic information extraction from either images or videos by imitating the functions of the human eye [42]. It can be coupled with robotics, specific equations or algorithms, basic statistics, and ML algorithms to fully automate the technique as an AI system; this may allow the procedure to be stand-alone and to classify or predict the quality parameters of the product. Its advantages include being non-destructive, non-contact, replicable, and automatic; it is therefore considered a rapid method that is more accurate and reliable than traditional procedures such as visual inspection and sensory analysis, which are subject to human error [19,43].
The procedure of CV consists of three main steps to follow for the image or video: (i) Acquisition, (ii) pre-processing, and (iii) analysis and interpretation. For (i) acquisition, the equipment needed consists of a camera or scanner and a constant and uniform lighting source [44]. For (ii) pre-processing and (iii) analysis and interpretation, the use of a computer and software with image analysis capabilities such as Matlab® (MathWorks Inc., Natick, MA, USA), ImageJ® (U. S. National Institutes of Health, Bethesda, MD, USA) or Image-Pro Plus (Media Cybernetics, Inc., Rockville, MD, USA) is required (Figure 1) [45,46]. Specific algorithms need to be developed according to the type of visual assessment and the parameters to evaluate. These may consist of image enhancement, segmentation, recognition, and interpretation [47]. Enhancement refers to the optimization of the images to improve their quality; it may include sharpening, contrast adjustment, and denoising, among others [48]. Segmentation is the division of the images or frames into specific regions of interest. Recognition usually consists of specific mathematical equations or the integration of other algorithms to define or detect the object or area of interest, while interpretation involves the use of statistical analysis, which may include machine learning to classify the product into different categories or to predict more specific information [49]. Methods involving computer vision have been developed for use in different industries such as medical, marketing, psychology, agriculture, food, and beverages, among others. This technology has been used for different applications, which include the assessment of hand hygiene [42], face recognition and tracking [18], object [50] and text recognition [51], and color analysis [7,43,50], among others. The food industry is among the top industries with the fastest growth in the automation of quality assessment using machine or CV [52].
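The enhancement, segmentation, and interpretation steps above can be illustrated with a toy example. The 4 × 4 grayscale "image", the threshold, and the area-fraction interpretation below are purely illustrative stand-ins for a real camera frame and Matlab®/ImageJ® pipeline.

```python
# Toy illustration of a CV pipeline: (ii) pre-processing via a linear
# contrast stretch, then (iii) analysis via threshold segmentation and
# interpretation of the segmented area. Values are invented.

image = [
    [ 30,  35,  40,  32],
    [ 33, 180, 190,  36],
    [ 31, 175, 185,  34],
    [ 29,  38,  37,  30],
]

def stretch(img):
    """Enhancement: linear contrast stretch to the full 0-255 range."""
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    return [[round((p - lo) * 255 / (hi - lo)) for p in row] for row in img]

def segment(img, threshold=128):
    """Segmentation: binary mask of the bright region of interest."""
    return [[1 if p >= threshold else 0 for p in row] for row in img]

enhanced = stretch(image)
mask = segment(enhanced)
# Interpretation: fraction of the frame occupied by the detected region.
area_fraction = sum(map(sum, mask)) / 16
```

A production system would apply the same logic per frame of a video, typically with a library such as OpenCV rather than nested lists.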

3.1. Computer Vision in Alcoholic Beverages

There have not been many CV methods developed for spirits. However, in beverages such as cachaça, which is a Brazilian distilled drink made from sugarcane, a method to assess color from image analysis was developed using a lighting source below the glass as a background and a digital camera placed above the glass, obtaining results in both RGB and CIELab color scales [53]. Pessoa et al. [54] developed a method based on color assessment to determine copper in sugarcane spirits using images captured with a digital camera with a Bayer RGB mosaic filter, which consists of a grid of color filters and photosensors, and a charge-coupled device, apart from an illuminated black box, and a porcelain plaque. Wang et al. [55] presented a technique to detect foreign matter in Chinese health wine to assess the safety and quality of the final product at the end of the production line.
Several methods using CV have been developed to evaluate different quality parameters in wine. Martin et al. [56] analyzed the color of different wines using a digital camera and the DigiEye system (VeriVide Ltd., Leicester, UK), evaluating the samples poured into a Petri dish and a cocktail glass at different depths. The authors found the best results with the samples in a cocktail glass and obtained a high correlation (r > 0.90) between the instrumental color measurements (lightness, colorfulness, and hue composition) and the non-contact assessment using the DigiEye system. In another study, a low-cost method using a smartphone was developed to assess the browning process of sparkling wines; this was achieved by using a black box with a diffuse lightbox below a 96-well plate to evaluate several samples simultaneously. The software used to assess color in the RGB scale through image analysis was ImageJ® [57]. Arakawa et al. [58] developed a sniffer camera to assess ethanol in wine by placing an enzyme mesh substrate over the glass and capturing images using a charge-coupled device camera.
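The color conversion underlying several of these studies, from a camera's RGB reading to the CIELab scale, can be sketched in plain Python. This is the standard sRGB/D65 formula, written from the published conversion equations; calibrated systems such as DigiEye apply device-specific corrections on top of it.

```python
# Hedged sketch: converting an 8-bit sRGB reading to CIELab (D65 white
# point), as used throughout the color-assessment studies cited here.

def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB to CIELab (D65, 2-degree observer)."""
    # 1. inverse gamma: sRGB to linear light
    def lin(c):
        c /= 255
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)
    # 2. linear RGB to XYZ (sRGB conversion matrix)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    # 3. XYZ to Lab relative to the D65 white point
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.9505), f(y / 1.0), f(z / 1.089)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

L, a, b = srgb_to_lab(255, 255, 255)   # pure white: L near 100, a and b near 0
```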
In sparkling wine, several methods to assess bubbles and foam-related parameters have been developed. Cilindre et al. [59] applied an automated method called Computerized Assisted Viewing Equipment (CAVE), which used three video cameras to record the wine during pouring to assess parameters such as serving time, height of foam, foam velocity and foam thickness, among others.
Conde et al. [27] used the FIZZeyeRobot to pour sparkling wine samples and analyzed 12 parameters, such as foam stability, volume of foam, bubble size, collar and foam drainage, among others, using a smartphone to capture videos of the pouring, and analyzing them using customized codes developed in Matlab®. More recently, Crumpton et al. [60] developed the free pour technique by manually pouring sparkling wine samples and recording them using one camera on the top and one on the side of the glass; videos were post-processed using ImageJ® software to assess collar width and foam height and stability.
Beer is the alcoholic beverage for which the most CV methods have been developed, as the main quality traits of this type of drink are color, bubbles, and foam-related parameters. Silva et al. [61] recorded images of pale lager samples poured into Petri dishes using a desk scanner and analyzed them with the Scilab software (Scilab Enterprises, Rungis, France) to obtain the color of each beer sample in the RGB scale. Fengxia et al. [62] used a Microtek ScanMaker E6 scanner (Microtek Corp., Hsinchu, Taiwan) and a Vcam charge-coupled device camera (Ame Corp., Hsinchu, Taiwan) to capture images of beer samples; they measured the color in the RGB scale and calculated the saturation value to further express the results in the European Brewery Convention (EBC) scale. Bubble haze refers to the large number of micro-bubbles formed when the beer is poured, which circulate in the liquid before reaching the surface. Based on the latter, Hepworth et al. [63] developed a method to measure the surge time, rise velocity of the haze, and bubble size using a charge-coupled device camera and the Matrix Vision image processing software (Matrix Vision GmbH, Oppenweiler, Germany). In another study, Hepworth et al. [64] developed a CV method to measure the bubble size distribution using a charge-coupled device camera whose chip captures pixels that are converted into images for further analysis using Image-Pro Plus 4.1 software.
Different methods to analyze foam-related parameters in beer have been developed, such as foam collapse time or stability, which consists of video recording the beer pouring using a charge-coupled device camera and analyzing it with computer software whose name was not specified by the authors [29]. However, most of those techniques require a charge-coupled device camera and specialized equipment such as a nozzle, water jackets, and a chip attached to the camera, among others, which makes them less affordable and less available to other users. Therefore, other methods, such as the low-cost image analysis Rudin method developed by Cimini et al. [65], require only an affordable Raspberry Pi computer and camera module, but are only able to measure the foam half-time. A newly developed method using Matlab® algorithms to assess beer color and foam-related parameters was presented by Gonzalez Viejo et al. [7]. In this study, 5-min videos of samples from three different types of fermentation (top, bottom, and spontaneous) were recorded during automated pouring using the RoboBEER and a smartphone camera. Videos were further analyzed using computer vision techniques to obtain 13 parameters, such as color in the CIELab and RGB scales, bubble size distribution (small, medium, large), foam drainage, lifetime of foam, total lifetime of foam, and maximum volume of foam, plus two parameters (ethanol gas and carbon dioxide) obtained from ubiquitous sensors attached to Arduino® boards.
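Once per-frame foam heights have been segmented from such videos, parameters like foam half-time and total lifetime reduce to simple time-series queries. The sampled heights below are invented for illustration and are not taken from any of the cited studies.

```python
# Hedged sketch of foam-lifetime extraction from a pouring video,
# assuming foam height (pixels) has already been measured per frame.

# (time in s, foam height in pixels) from a hypothetical pour
frames = [(0, 0), (5, 120), (10, 150), (20, 130), (40, 90),
          (60, 70), (90, 40), (150, 10), (300, 0)]

max_height = max(h for _, h in frames)

def first_time_below(level):
    """First sampled time after the peak at which foam drops below level."""
    peak_t = next(t for t, h in frames if h == max_height)
    return next(t for t, h in frames if t > peak_t and h < level)

half_life = first_time_below(max_height / 2)   # foam "half-time" (Rudin-style)
total_lifetime = first_time_below(1)           # foam fully collapsed
```

In practice, the heights would come from frame-by-frame segmentation such as the Matlab® routines used with RoboBEER, and the collapse curve would be fitted rather than read off sampled points.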

3.2. Computer Vision in Hot Beverages

Some CV techniques in tea and coffee have been developed; however, there is still a broad area within hot beverages that may be explored to automate and ease their quality assessment. Image analysis has been implemented in tea, especially to evaluate color and texture in leaves during or after fermentation in order to assess the quality of the primary ingredient of the beverage. Dong et al. [66] evaluated the color of green tea leaves using a single-lens reflex camera and uniform lighting, which were then analyzed in a computer to obtain RGB, HSV, CIELab, and gray color scales. Singh and Kamal [67] analyzed the color and size of fermented tea grains to calculate the tea quality index (TQI; Equation (1)) using a charge-coupled device camera with uniform lighting to acquire the images, and these were processed using gray image histogram extraction, image enhancement, zoom, thresholding and segmentation steps.
TQI = Area + Perimeter + Diameter + R + G + B    (1)
where TQI = tea quality index; Area, Perimeter, and Diameter refer to the tea grains; and R = red, G = green, and B = blue color values.
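Equation (1) is a direct sum and can be implemented in a few lines; the input values below are illustrative only, and the units/scaling of each term are as defined by Singh and Kamal [67].

```python
# Direct implementation of Equation (1): the tea quality index sums the
# grains' geometric measurements and RGB color values (illustrative inputs).

def tea_quality_index(area, perimeter, diameter, r, g, b):
    return area + perimeter + diameter + r + g + b

tqi = tea_quality_index(area=120.0, perimeter=42.5, diameter=13.5,
                        r=96, g=74, b=40)
```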
On the other hand, Kumar et al. [68] used a charge-coupled device camera to evaluate the color of tea liquor in the RGB scale and transformed it to CIELab, as the latter is considered closer to human color perception. In a more recent study, Akuli et al. [69] developed a method to assess color in tea liquor or infused tea using a black box with uniform illumination and a low-cost camera positioned 30 cm above the sample, obtaining color in both RGB and CIELab scales.
Similar to tea, most of the CV methods developed for coffee analyze the quality parameters of coffee beans, focusing mainly on color assessment. Oblitas and Castro [70] designed a system to evaluate the color of roasted coffee beans by placing the sample in a Petri dish inside a chamber with a uniform lighting source and recording it with a video camera; the footage was then processed using an algorithm written in Matlab®, obtaining values in the CIELab color scale. Várvölgyi et al. [71] presented another technique to measure color using a charge-coupled device camera, 12 halogen lights, and a metal holder for the sample to obtain the hue of roasted coffee beans; however, the authors did not mention the software used to analyze the images. In a more recent study, Morais de Oliveira et al. [72] reported a CV system to evaluate green Arabica coffee beans using a black chamber with white lighting and a digital camera fixed 40 cm above the sample. Images were converted to the tagged image file format (TIFF) using the Digital Photo Professional® software (Canon Inc., Ōta, Tokyo, Japan) and analyzed using ImageJ® by cropping the images and measuring color in the RGB and CIELab scales. Chu et al. [73] used hyperspectral imaging to obtain the near-infrared spectra of coffee beans within the 874–1734 nm range by building a device consisting of a conveyor belt with a motor, uniform illumination, and a charge-coupled device camera with a spectrograph and a lens placed 32 cm above the sample. Images were acquired with the Spectral-Cube software (Isuzu Optics Corp., Hsinchu, Taiwan) and processed using the ImSpector N17E (Spectral Imaging Ltd., Oulu, Finland). On the other hand, Piazza et al. [74] assessed the foamability of brewed coffee made with 70% Arabica and 30% Robusta; the beverage was poured into a Plexiglas vessel and stirred to form foam, then images were scanned at different times (40–1860 s) and processed using Image-Pro Plus 5.0 software.
Another method to analyze foamability and foam stability in espresso brews was proposed by Buratti et al. [75], who percolated the samples in a Plexiglas vessel with two cool light bulbs at the top and acquired images using a digital camera. Images were recorded every 30 s for 5 min and evaluated using Image-Pro Plus 6.2 software by detecting the area of interest, applying corrections, and converting measurements from pixels to mm through spatial calibration.

3.3. Computer Vision in Non-Alcoholic Beverages

Some CV methods have been implemented to assess the quality traits of non-alcoholic drinks. However, more research needs to be done to develop more of these non-destructive and rapid techniques for this beverage category. Damasceno et al. [76] published an image-based method to assess total hardness and alkalinity of still water using an enzyme-linked immunosorbent assay (ELISA) plate and a scanner to obtain the images. They measured different concentrations of standards for both hardness (Ca2+ and Mg2+) and alkalinity (buffer solution pH 10, alkali), and samples of drinking water from the tap, fountain and bottle, and these were analyzed for color changes using Matlab®. Barker et al. [77] evaluated bubble growth in carbonated water using a charge-coupled device video camera, which recorded at 20 frames per second, and a controlled light source; images were analyzed using Image-Pro Plus software. More recently, a CV method was developed to measure bubble diameter in pixels and bubble size distribution (small, medium, and large) in carbonated water using an algorithm developed in Matlab® [78].
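The bubble size distribution step mentioned above, classifying measured diameters into small, medium, and large, amounts to simple binning once diameters have been detected. The cut-off values and diameters below are assumptions for illustration, not those of the cited Matlab® algorithm.

```python
# Hedged sketch of the bubble size distribution step: binning measured
# bubble diameters (in pixels) into small/medium/large classes.

def size_distribution(diameters_px, small_max=5, medium_max=15):
    counts = {"small": 0, "medium": 0, "large": 0}
    for d in diameters_px:
        if d <= small_max:
            counts["small"] += 1
        elif d <= medium_max:
            counts["medium"] += 1
        else:
            counts["large"] += 1
    total = len(diameters_px)
    return {k: v / total for k, v in counts.items()}

# Hypothetical diameters detected in one frame of carbonated water
dist = size_distribution([2, 3, 4, 8, 12, 14, 20, 25])
```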
In soft drinks, image analysis methods have been developed to assess the concentration of dyes, specifically sunset yellow, also known as yellow 6 or E110, which is derived from petroleum and has been reported to trigger side effects such as allergies, headache, and hyperactivity, among others [79]. Botelho et al. [80] used a scanner to obtain images of degassed orange sodas and isotonic drinks containing sunset yellow dye placed in a Petri dish, and selected and analyzed the center part of the container using Matlab® software to measure color in the RGB scale. Similarly, Sorouraddin et al. [81] used a scanner to acquire images of orange soft drinks, which were analyzed with Photoshop CS5 (Adobe, San Jose, CA, USA) and Matlab® to calculate the RGB color values. Hosseininia et al. [82] designed a system that consisted of a matte black box, a charge-coupled device camera placed 30 cm away from the soft drink samples, and two fluorescent lamps for uniform lighting. Images were processed using the ImageJ 1.45 software, in which the center of the sample area was cropped and analyzed for color in the CIELab, hue, and chroma scales. On the other hand, image-based color methods have also been developed to assess the color of orange juice using the DigiEye system; samples were recorded in transparent plastic bottles against a white background. The authors used the DigiFood® software [83] to convert color from RGB to CIELab and found a high and significant correlation (r = 0.93; p < 0.05) with the color intensity evaluated by a trained sensory panel [84,85].
The application of CV to milk has differed from that in other non-alcoholic beverages, as the methods target specific measurements. Velez-Ruiz and Barbosa-Canovas [86] analyzed images of milk obtained from a scanning electron microscope using the NIH Image software (U. S. National Institutes of Health, Bethesda, MD, USA) to measure the dimensions of fat globules. Furthermore, dos Santos and Pereira-Filho [87] used a scanner with a black cover to acquire images of 5 mL of milk poured into a beaker with either bromophenol blue or bromothymol blue as acid-base indicators. Images were analyzed using Matlab® to calculate different color parameters: RGB, luminosity, and the relative RGB colors calculated by dividing each value by the luminosity; hue, saturation, and value (HSV) were also obtained. Multivariate data analysis was performed to develop models to assess whether the milk samples were adulterated.
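The color features used in the milk adulteration models can be reproduced with the Python standard library instead of Matlab®. Note that "luminosity" is assumed here to be the RGB mean; the authors' exact definition may differ.

```python
# Hedged sketch of the color features described above: relative RGB
# (each channel divided by an assumed luminosity = RGB mean) plus HSV,
# computed with the standard library's colorsys module.

import colorsys

def color_features(r, g, b):
    luminosity = (r + g + b) / 3 or 1          # avoid division by zero
    relative = (r / luminosity, g / luminosity, b / luminosity)
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return relative, (h, s, v)

relative, hsv = color_features(150, 120, 90)   # hypothetical milk-sample pixel
```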

4. Machine Learning

Machine learning (ML) is a branch of artificial intelligence that refers to a computer-based system that may be trained to find patterns within a dataset to classify or predict specific parameters, and that is able to improve its performance as new data are fed to it [21,30]. Machine learning may be divided into supervised and unsupervised algorithms, which, in turn, may be classified into different subtypes. However, this paper will only focus on the supervised group, as it is the type most widely applied to food and beverages; it may be divided into (i) classification or pattern recognition and (ii) regression algorithms [30,88]. The classification learners are used to categorize samples into different groups and have been applied for different purposes in distinct fields such as agriculture [50], medical diagnosis, and food and beverages [17,18]. Some of the main classifier types are (i) decision trees, (ii) discriminant analysis, (iii) logistic regression, (iv) naïve Bayes, (v) support vector machines, (vi) nearest neighbor, (vii) ensembles, and (viii) artificial neural networks (ANN) [89]. Regression or fitting learners are usually employed to predict specific attributes or parameters such as chemometrics, microbial counts, and intensities of sensory descriptors, among others, and have been used in areas such as agriculture [90] and food and beverages [9,14]. This type of ML may be classified as (i) linear regression, (ii) regression trees, (iii) support vector machines, (iv) Gaussian process regression, (v) ensembles of trees, and (vi) ANN [89]. Both main types of supervised ML are subcategorized into different algorithms, as shown in Figure 2.
Some of the common software tools used to develop ML models include Matlab®; Scikit-learn and TensorFlow, which are modules designed for Python [91,92]; and Weka (The University of Waikato, Hamilton, New Zealand), which may be used stand-alone or integrated as a package in R software (RStudio, Inc., Boston, MA, USA) [93,94], among others.
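To illustrate the two supervised families described above, the following is a minimal sketch in scikit-learn using synthetic data; all feature values and targets are hypothetical and not taken from any of the cited studies.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# (i) Classification: categorize samples into two groups from two "sensor" features.
X_cls = rng.normal(size=(100, 2))
y_cls = (X_cls[:, 0] + X_cls[:, 1] > 0).astype(int)   # synthetic class label
clf = DecisionTreeClassifier(max_depth=3).fit(X_cls, y_cls)

# (ii) Regression: predict a continuous attribute (e.g., a chemical concentration).
X_reg = rng.uniform(0, 1, size=(100, 1))
y_reg = 2.5 * X_reg.ravel() + rng.normal(scale=0.1, size=100)
reg = LinearRegression().fit(X_reg, y_reg)

print(clf.predict([[1.0, 1.0]]))   # predicted class for a new sample
print(reg.coef_[0])                # fitted slope (true value is 2.5)
```

The same fit/predict pattern applies to the more elaborate classifiers and regressors listed above; only the estimator class changes.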
The use of ML has been increasing in recent years in the food and beverage industry due to its ability to improve production and to assess quality in a faster, more accurate, objective, and cost-effective way. The industry's need for ML stems from the fact that around 95% of new food and beverage products fail within three years of being launched. Therefore, ML models have been developed to predict consumers' needs, acceptability, sensory descriptors of the products, and physicochemical composition, among other quality traits, which aids in the development of higher-quality products with greater acceptability among consumers [95]. However, a common issue in ML modeling is overfitting, which occurs when the model fails to generalize from the data. This usually happens when the training set is limited and sampling noise exists; the result is an apparently accurate training stage, but the model will not perform correctly when tested on new samples [96,97].
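The overfitting scenario described above can be demonstrated in a few lines: an unconstrained model memorizes a small, noisy training set (perfect training accuracy) but generalizes poorly to held-out samples. The data here are synthetic, with deliberately flipped labels standing in for sampling noise.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(60, 5))          # a limited dataset, as in the scenario above
y = (X[:, 0] > 0).astype(int)
flip = rng.random(60) < 0.25          # sampling noise: 25% of labels flipped
y[flip] = 1 - y[flip]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

deep = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)          # no depth limit
shallow = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_tr, y_tr)

print(deep.score(X_tr, y_tr), deep.score(X_te, y_te))       # perfect train, lower test
print(shallow.score(X_tr, y_tr), shallow.score(X_te, y_te)) # constrained model
```

Comparing training and testing accuracy in this way is the basic check for the over-fitted models criticized later in this review.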

4.1. Machine Learning in Alcoholic Beverages

A broad range of ML applications has been reported for alcoholic beverages, especially in the last decade. In spirits, ML has been used to develop models to classify whisky (whiskey) samples according to (i) age, (ii) cask material, (iii) distillery and (iv) variety, using Raman spectra (600–1800 cm−1) as inputs; the authors compared different ML algorithms and obtained the best results using relevance radial basis function networks with an accuracy > 95% [98]. Ceballos-Magaña et al. [99] analyzed the mineral content of tequilas from different regions and developed models using the linear support vector machine (SVM) algorithm, testing different data divisions from 40 to 70% for training, and obtaining accuracies within the 96–100% range to classify samples by region for authenticity purposes. Another application of ML in tequila was published by Andrade et al. [100], who analyzed samples with an ultraviolet-visible (UV-VIS) spectrometer and used the absorbance values within the 250–550 nm range as inputs to classify samples into three different types: (i) white, (ii) rested and (iii) aged. The authors compared algorithms including discriminant analysis, SVM, and counter-propagation ANN, obtaining the best results from quadratic discriminant analysis combined with principal components analysis (PCA), with an accuracy of 89%. On the other hand, Rodrigues et al. [101] assessed the chemical components and the color in RGB and CIELab scales of Cachaça samples and used those values as inputs to develop ANN models to classify the beverages according to (i) age and (ii) type of wood used for aging.
Several studies have used different ML algorithms to classify or predict distinct parameters related to wine quality. Er and Atasoy [102] developed two ML models, comparing three different classifiers (random forest, support vector machine and k-nearest neighbors), (i) to predict the type of wine (red or white) and (ii) to group wine samples according to their quality based on sensory ratings; both models used physicochemical data as inputs, and the best results were obtained with the random forest algorithm. However, the models to predict quality had a moderate to low accuracy of ~60–70%. Furthermore, the authors did not explain clearly how they obtained, grouped, or defined wine quality based on sensory ratings, and they stated that some of those values were based on only three participants, which makes the results less objective and reliable. Da Costa et al. [103] developed a model using SVM to classify Cabernet Sauvignon wines according to their country of origin (Chile or Brazil) using physicochemical data as inputs, with an accuracy of 89%. Perrot et al. [104] developed a machine learning-based decision tool named FGRAPEDBN, which combines fuzzy logic and a dynamic Bayesian network to predict the maturity of grapes for wine production, obtaining determination coefficients of R2 = 0.82 for sugar content and R2 = 0.77 for total acidity. Lvova et al. [105] used an electronic tongue (e-tongue), a device with an array of sensors capable of mimicking the human sense of taste through chemometric processing [106]. In that study, the e-tongue had eight potentiometric chemical sensors to measure different wine samples from the Primitivo and Negroamaro varieties, and partial least squares regression–discriminant analysis was used to classify samples by type of wine with an accuracy of 71%.
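The kind of classifier comparison reported by Er and Atasoy can be sketched with scikit-learn's bundled wine dataset (178 samples, 13 physicochemical features, three cultivars) rather than the authors' own data; the cross-validated accuracies below are therefore only illustrative of the workflow, not a reproduction of their results.

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X, y = load_wine(return_X_y=True)   # physicochemical features, cultivar labels

models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": make_pipeline(StandardScaler(), SVC()),                  # SVMs need scaled inputs
    "k-nearest neighbors": make_pipeline(StandardScaler(), KNeighborsClassifier()),
}
results = {}
for name, model in models.items():
    results[name] = cross_val_score(model, X, y, cv=5).mean()       # 5-fold cross-validation
    print(f"{name}: {results[name]:.2f}")
```

Cross-validation, rather than a single random split, gives a more honest estimate when sample sizes are as small as in many of the studies reviewed here.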
Furthermore, these authors developed a model to classify Negroamaro samples into control and faulty wines with 97% accuracy. In a more recent study, Fuentes et al. [107] constructed an ANN model using the sequential order weight bias training algorithm, employing the canopy temperature, infrared index and crop water stress index from infrared-thermal images of grapevine leaves as inputs to detect leaves contaminated by smoke from bushfires, which could potentially be used to predict smoke taint in grapes; this model had an accuracy of 96%. The same authors developed an ANN model, also using a sequential order weight bias algorithm, to predict guaiacol glycoconjugates in berries and wine, and 4-methyl guaiacol in wine, using near-infrared absorbance values within the 700–1100 nm spectra as inputs, obtaining a correlation coefficient of R = 0.97. On the other hand, Navajas et al. [108] presented an SVM regression model to predict astringency in wine using its chemical composition as inputs, with a root mean squared error (RMSE) of 0.19. Fuentes et al. [109] used weather data from vertical vintages (2008–2013) to develop two regression ANN models with the Levenberg-Marquardt training algorithm to predict (i) twenty-one volatile aroma compounds and (ii) eight chemical parameters in the wines obtained from those vintages.
This was presented as a potential method to provide winemakers with early information about the product, which would allow decisions to be made based upon the expected wine quality.
In beer, there have been many applications of ML for different classifications and predictions regarding quality, either during the brewing process or for the final product. Cetó et al. [110] analyzed commercial beer samples using an e-tongue and applied the results to develop a linear discriminant analysis model to classify the samples according to beer style with 82% accuracy. In another study, an SVM algorithm was used to classify beers according to their country of origin using mineral and polyphenol contents as inputs, obtaining a highly accurate model (99%) [111]. Rousu et al. [112] developed a decision tree model to classify beer fermentation as slow or fast with an accuracy > 95% at laboratory scale and 70% at industrial scale. In that study, the authors also tested a regression model using ANN to predict the fermentation time; however, no accuracy or correlation coefficient was reported, and although they stated that backpropagation was used, they did not specify the training algorithm. Santos and Lozano [113] used an electronic nose (e-nose), a device with an array of gas sensors, which may be metal oxide or polymer semiconductors, capable of mimicking the olfactory system [114], to analyze two main beer off-odors, acetaldehyde and ethyl acetate. The authors used the output values from the e-nose as inputs to develop a probabilistic neural network model with 94% accuracy in the validation stage to predict whether those compounds fell above the thresholds at which they are considered defects in beer. Voss et al. [115] developed an e-nose with 13 different gas sensors to analyze beers and used the responses as inputs in an extreme learning machine model to predict alcohol content, with RMSE = 0.63 in validation and RMSE = 0.33 in the testing stage; however, the authors did not report the correlation or determination coefficients for this method. Zhang et al.
[116] used SVM to construct a model using fermentation parameters to predict the acetic acid (vinegar) content in beer; however, the reported validation correlation was low (R < 0.60). On the other hand, Gonzalez Viejo et al. [7] developed an ANN model using a scaled conjugate gradient training algorithm to classify beers according to the type of fermentation (top, bottom or spontaneous), using color and foam-related parameters as inputs and obtaining an accuracy of 92%. Those same authors used the aforementioned inputs to predict the intensity of ten sensory descriptors with ANN and the Levenberg-Marquardt algorithm, achieving an overall correlation of R = 0.91 [14], and to predict consumers' acceptability using ANN and the Bayesian regularization algorithm (R = 0.98) [30]. Furthermore, Gonzalez Viejo et al. [18] used the physiological and emotional responses of consumers when tasting different beers to develop an ANN model based on a scaled conjugate gradient algorithm to classify the samples into low and high liking of mouthfeel, flavor and overall liking with > 80% accuracy.

4.2. Machine Learning in Hot Beverages

Compared to other types of beverages, the application of ML has been less explored in hot drinks. Nevertheless, some authors have developed models, especially ANN, using different inputs to predict the quality of green or black tea. Yu et al. [117] were able to accurately (>85%) classify green teas according to quality grade using both backpropagation neural networks and probabilistic neural networks with the outputs of an e-nose as model inputs. Similarly, Chen et al. [118] used outputs from an e-nose to construct an SVM model to classify green tea into quality grades according to results from a sensory panel, obtaining an accuracy of 95% in the testing stage. Cimpoiu et al. [119] developed an ANN model using the flavonoid, catechin, and total methyl-xanthine contents to predict antioxidant activity with R = 0.99; however, they did not specify the training algorithm used. In the same study, the authors developed a probabilistic neural network model to classify the samples into the type of tea (green, black or express black) using the chemical compounds as inputs, claiming an accuracy of 100%; however, they only used five samples for testing, which is not enough to test a model and carries a high risk of over-fitting. Guo et al. [120] used near-infrared spectroscopy results from 1000 to 2500 nm as inputs to predict free amino acids in tea through ANN backpropagation algorithms with R = 0.96.
Other authors were able to model the prediction of sensory quality perception using physical data from the green tea leaves with the radial basis function (R = 0.95) [121].
A few recent studies related to quality modeling of coffee have been published. Messias et al. [122] used an ANN based on the Levenberg-Marquardt algorithm with reducing sugars as inputs to classify samples into Arabica coffee quality grades according to results from sensory analysis, achieving an accuracy of 80%. Morais de Oliveira et al. [72] used the CIELab color parameters of coffee beans to classify them according to their color through ANN with 100% accuracy; however, that perfect classification is due to the direct relationship between the categories and the inputs, which renders the model trivial and uninformative. Other authors have developed a model to predict the roasting degree of coffee using results from hyperspectral images (874–1734 nm) through support vector machines with 90% accuracy [73]. Dominguez et al. [123] measured Mexican coffee samples using an e-tongue and developed ML models with SVM and linear discriminant analysis (LDA) to classify the samples into different coffee growing conditions, obtaining an accuracy of 88% for LDA and 96% for SVM. Romani et al. [124] used an e-nose composed of an array of ten sensors to measure coffee samples with different roasting levels and developed a general regression neural network model using the e-nose responses as inputs to predict the roasting time, obtaining a high accuracy (R2 = 0.98); however, the authors developed the model with only eight observations, which is not enough for modeling purposes. More recently, Thazin et al. [125] used e-nose outputs as inputs to predict the level of acidity according to a sensory panel, based on a radial basis function, with 95% accuracy.

4.3. Machine Learning in Non-Alcoholic Beverages

There are several studies using ML in non-alcoholic beverages; however, it has not been applied extensively for water quality assessment, and no studies were found for bottled water. Bucak and Karlin [126] developed an ANN model to assess the quality of drinking water entering the distribution system, using microbiological and chemical data as inputs, with 100% accuracy. Furthermore, Camejo et al. [127] used the k-nearest neighbor classifier to group drinking water from Portugal and Canada into medium and high quality with chemometrics as inputs (accuracy: 98%). A similar approach was taken by Chatterjee et al. [128] using an ANN coupled with a multi-objective genetic algorithm with chemical compounds as inputs, achieving 97% accuracy.
No recent studies have been published regarding the application of ML in soft drinks; however, there are some papers on fruit juices. The use of e-nose outputs as inputs to model fruit juice quality has been popular. Qiu et al. [129] used an extreme learning machine to classify strawberry juice samples according to the processing treatment with 100% accuracy, and to quantify vitamin C with R = 0.82. Hong et al. [130] used a combination of e-nose outputs and chemometrics as inputs to linear discriminant analysis to classify tomato juice quality grade (accuracy: 98%). Likewise, Qiu and Wang [131] also used e-nose and chemometrics data as inputs, but with the objective of predicting food additives added to fruit juices with linear discriminant analysis, obtaining accuracies of >85% in predicting the amounts of chitosan and benzoic acid. Nandeshwar et al. [132] used linear discriminant analysis to identify whether orange juice samples were adulterated with either tap water or sugar, achieving an accuracy of 87%. On the other hand, Rácz et al. [133] used near-infrared spectroscopy to measure the transmittance of 90 energy drinks and developed machine learning models using four different methods: (i) linear discriminant analysis, (ii) partial least squares discriminant analysis, (iii) random forest and (iv) boosted trees to classify the samples into three groups according to sugar concentration. The best performance based on the receiver operating characteristic curve was obtained with boosted trees, with > 90% true positives. In a different publication, the same authors presented two machine learning regression models to predict (i) the sugar and (ii) the caffeine content of energy drinks. Fourier-transform near-infrared spectroscopy data were used to develop the models with partial least squares regression algorithms, obtaining high determination coefficients for both models: R2 = 0.94 for the sugar and R2 = 0.97 for the caffeine model [134].
A few papers have been published using milk beverages as samples, such as the development of a backpropagation ANN model with sensory descriptors as inputs to predict the overall acceptability of coffee-flavored milk (R = 0.99). Mamat and Samad [135] classified flavored milk according to brand, using e-nose outputs and color parameters as inputs to an SVM, obtaining an accuracy of 97%. Due to existing problems with milk adulteration, some authors have used ML as an approach to detect it. Balabin and Smirnov [136] measured liquid milk, infant formula and milk powder with near-infrared spectroscopy (> 1110 nm) and used those data as inputs to predict melamine content through ANN compared with SVM, achieving RMSE values between 0.25 and 6.10 ppm for low and high melamine content, respectively. Dos Santos and Pereira-Filho [87] used bromophenol blue or bromothymol blue as acid–base indicators in milk and analyzed color using image analysis; these data were used as inputs to develop a partial least squares regression method to detect adulterated samples with R = 0.94.

5. Biometrics

The term biometrics refers to methods that may be used in humans or animals to identify or recognize their distinctive physiological and behavioral characteristics. This technology is often used in humans for authentication purposes, and the most popular techniques are face recognition, fingerprinting, voice recognition, retinal scanning, and body temperature, among others [137]. However, more recently, these techniques have been applied to gather more information about consumers when evaluating products such as food, beverages, and packaging. Biometrics has been used as a tool to tap into the unconscious responses of the autonomic nervous system, which, along with other measurements such as emotional and cognitive responses, has been shown to provide more precise data on consumers' attitudes towards products to assess acceptability, quality perception and decision making [17,18].
In the assessment of food and beverages, face recognition has been used to analyze facial expressions that may be related to emotions. Commercial software such as FaceReader™ (Noldus Information Technology, Wageningen, The Netherlands) and Affectiva (Affectiva, Boston, MA, USA) can detect and track the human face using the Viola-Jones cascade detector algorithm [138], as well as the macro- and micro-movements of different features, using the active appearance model in the case of FaceReader™ (Figure 3) and the histogram of oriented gradients in the case of Affectiva. These programs then use ML (ANN for FaceReader™ and SVM for Affectiva) developed through a database of movements, which have been associated with facial expressions and translated into emotions such as happiness or joy, sadness, disgust, contempt, anger, neutral and scared or fear, among others (Figure 3) [139,140]. To record videos during sensory evaluation, an integrated camera system has been developed, which consists of an infrared-thermal camera, FLIR AX8™ (FLIR Systems, Wilsonville, OR, USA), and a tablet coupled with a novel Bio-Sensory App (The University of Melbourne, Melbourne, VIC, Australia). This system is able to display a sensory questionnaire and capture videos and infrared thermal images of participants while they taste the food or beverage samples [141].
Body or skin temperature is often used as a biometric, and there are different ways to measure it, such as sensors attached to the body, usually the hand, or remotely with infrared-thermal cameras by assessing the temperature of the eye section, which is the closest to core body temperature (Figure 4) [18,142,143]. On the other hand, typical ways to measure heart rate consist of placing electrodes on the chest, ear lobe or finger; however, these methods are considered invasive or intrusive, as they make participants aware of the sensors and alter their physiological responses [142,144]. Therefore, non-invasive methods that measure heart rate responses using video analysis have been developed; these methods are based on photoplethysmography, as they measure the luminosity changes due to blood flow in the face [145,146]. A recent study using this type of method was published [147], in which a non-contact technique was developed to assess heart rate and blood pressure using videos of participants, based on the luminosity changes in the green channel and machine learning modeling, with high accuracy (R = 0.85) when using results from an oscillometric blood pressure monitor as target values. On the other hand, eye tracking is used to detect and follow gaze movements and position when looking at a particular sample or area of interest. It works using camera-based sensors that use infrared light to track gaze fixations, pupil dilation and gaze direction. In the food and beverage industries, it is usually used to assess labels and packaging to evaluate consumers' acceptability and behavior [17,148].
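The photoplethysmography principle described above can be reduced to a simple sketch: the mean green-channel intensity of a face video oscillates with blood flow, and the dominant frequency of that signal gives the heart rate. Here the per-frame green signal is simulated (a 72 bpm oscillation plus noise) rather than extracted from real video, so the example shows only the signal-processing step, not face detection or region-of-interest tracking.

```python
import numpy as np

fps = 30.0                      # camera frame rate
t = np.arange(0, 20, 1 / fps)   # 20 s of frames
heart_hz = 72 / 60              # simulated heart rate: 72 beats per minute
rng = np.random.default_rng(0)

# Simulated per-frame mean green-channel intensity of the face region.
green = 120 + 0.5 * np.sin(2 * np.pi * heart_hz * t) + rng.normal(scale=0.2, size=t.size)

signal = green - green.mean()             # remove the DC (baseline) component
spectrum = np.abs(np.fft.rfft(signal))    # magnitude spectrum of the pulse signal
freqs = np.fft.rfftfreq(signal.size, d=1 / fps)

band = (freqs > 0.7) & (freqs < 4.0)      # plausible heart-rate band, 42-240 bpm
peak_hz = freqs[band][np.argmax(spectrum[band])]
print(f"estimated heart rate: {peak_hz * 60:.0f} bpm")
```

Restricting the search to a physiologically plausible band is what keeps slow lighting drifts and high-frequency camera noise from being mistaken for the pulse.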

5.1. Biometrics in Alcoholic Beverages

Emotion assessment through face recognition has been the most widely used biometric for alcoholic beverages. Kamboj et al. [149] assessed the effect of alcoholic drinks on the facial expressions of consumers; the authors used the Abrosoft Fantamorph software (Abrosoft Co., Beijing, China) and were able to assess emotions such as happiness, anger, fear, sadness, disgust and neutral. It was found that participants who consumed moderate alcohol doses expressed higher levels of neutral emotion than those who consumed high alcohol doses or a placebo, and those who tasted high-alcohol drinks presented higher values of disgust. Beyts et al. [150] assessed heart rate using an electrocardiogram with electrodes attached below the collar bone, skin temperature with a sensor placed on the forearm, and facial movements using two electrodes placed on the forehead and left cheek, to evaluate consumers' responses to beer aromas. Results showed no significant (p ≥ 0.05) differences between samples for heart rate and skin temperature, but significant differences for facial expression responses. Other authors [18] evaluated nine different beer samples using the Bio-Sensory application [141] to record videos and infrared thermal images while consumers tasted the samples. The authors measured emotional responses (FaceReader™), physiological responses such as heart rate (video analysis) and body temperature (FLIR AX8™ camera), and brainwave data (electroencephalogram (EEG) headset). Results showed relationships between the biometric and self-reported responses, such as a negative association between temperature and liking of foam height, and between disgust and liking of foam stability.
The same authors conducted another study using similar methods to obtain the emotional and physiological responses, but including eye-tracking techniques (TheEyeTribe©, Copenhagen, Denmark) to assess the acceptability of beer samples from their visual characteristics, focused especially on foam and bubbles, coupled with the use of RoboBEER parameters. The authors found that body temperature was negatively related to the liking of clarity, and heart rate was positively correlated with perceived quality [17].

5.2. Biometrics in Hot Beverages

Very few studies have been published using biometrics to assess consumers' acceptability of hot drinks; however, one study was found using coffee. Garcia-Burgos and Zamora [151] measured disgust and happiness using FaceReader™ to assess consumers' responses towards bitter drinks, including coffee within the sample set, when subjected to stressors. They found that stress-related images decreased participants' disgust when tasting coffee.

5.3. Biometrics in Non-Alcoholic Beverages

In non-alcoholic drinks, a few studies have been conducted using biometrics and self-reported sensory responses. De Wijk et al. [142] assessed breakfast beverages such as drink yogurts and fruit drinks by evaluating the physiological and emotional responses of consumers. The authors analyzed skin conductance and skin temperature using electrodes placed on the palm, heart rate with sensors attached to the chest, and facial expressions using the FaceReader™ software. They found a relationship between the liking of samples and heart rate, skin temperature and neutral expressions. Danner et al. [152] conducted a study with different fruit and vegetable juices to assess consumer biometrics. The measurements consisted of facial expressions using FaceReader™, and skin conductance, skin temperature, and heart rate with Biofeedback 2000x-pert electrodes (Assessment Systems, Praha, Czech Republic) attached to the forefinger. The reported results showed a correlation between the liking of the samples and skin conductance. Furthermore, another study using FaceReader™ to evaluate the facial expressions of consumers towards orange juice samples found a correlation between liking and happy and disgusted emotions in liked and disliked juices, respectively [153].

6. Artificial Intelligence

The concept of artificial intelligence (AI) dates back to the 1960s, when John McCarthy came up with the idea of automating machines and created an AI laboratory at Stanford University [154]. In general, the term AI refers to machines that are designed and automated to think, behave, solve problems and make decisions as humans would, in addition to having the ability to improve through self-learning [21]. It may be classified into (i) strong or generalized AI, which is capable of understanding, improving, and solving problems, usually using ML, and (ii) weak or applied AI, which is limited to performing specific tasks such as recognizing, searching, or analyzing certain components. Currently, generalized AI exists only in theory; only weak or applied AI has been developed [155]. The overall concept of AI may consist of any combination of its different branches, such as ML and CV; it may also include the use of robotics, sensors, and biometrics (Figure 5). However, the main purpose of AI application is not to fully replace humans, but to develop intelligent systems able to perform, in an accurate, reliable, faster and more objective manner, tasks or jobs that may be tiring and tedious for humans and could lead to errors [21,154]. Furthermore, AI allows manufacturing processes to be performed with higher safety levels and less waste, with the ability to produce high-quality products [156].
Regarding the applications of AI in the beverage industry, it has been used for monitoring, quality assurance and control, product development and decision-making, among others [156]. In recent years, some beverage companies have implemented AI, such as Carlsberg, which uses this technology to develop new beers in a cost-effective and rapid way through a combined method using Microsoft® (Microsoft Corporation, Redmond, WA, USA) platform and sensors to determine complex flavors in the products [95]. IntelligentX developed a generalized AI technique to improve beer styles by using feedback from consumers through social media; the system is able to make decisions and characterize the products to create the best beer according to consumers’ needs [31].

7. Key Findings and Future Trends

Despite the increasing trend in the application of emerging technologies involving the use of robotics, ML, CV, and biometrics in the beverage industry, several gaps remain, especially in the biometrics field. Robotics needs to be explored further in beverages as a tool to aid other AI components, which would maximize the use of some emerging methods. Regarding CV, most approaches developed for the assessment of hot drinks and non-alcoholic beverages are based mainly on the analysis of color; however, more research needs to be conducted to apply this technology to measure other parameters related to the quality traits specific to each product. The main issue with ML is that there is still a lack of knowledge among researchers concerning the proper development techniques, usage, and interpretation of the algorithms and modeling, as well as the way to select the best models to avoid over- or under-fitting, which are common problems in the existing publications. On the other hand, although biometrics has been used in the sensory science field over the last decade, it needs to be explored more in-depth using beverages as samples to assess their quality by understanding consumers' subconscious responses, which would allow the industry to develop products with higher acceptability and quality based on market trends and needs. Furthermore, the combination of two or more of the aforementioned methods should be considered for implementation as an approach to AI in the different beverage categories, especially for hot and other non-alcoholic drinks, in which these technologies have not been very popular among companies.

Author Contributions

C.G.V., D.D.T., F.R.D., and S.F. contributed equally in the preparation and writing of this review.

Funding

This research received no external funding.

Acknowledgments

C.G.V. was supported by the Melbourne Research Scholarship from the University of Melbourne.

Conflicts of Interest

The authors declare no conflicts of interest.

Glossary of Quality Indicators

Aroma: Volatile aromatic compounds detected via the retronasal olfactory system
Bubble growth: Rate at which a bubble increases its size
Bubble haze: Large number of micro-bubbles formed when the beer is poured, which circulate in the liquid before reaching the surface
Bubble size distribution: Number of small, medium and large bubbles
CIELab: Color parameters in which L = lightness, a = red to green, and b = yellow to blue values
Clarity: Level of transparency of the liquid due to lack of suspended particles
Collar: Array of bubbles that remains at the edge of the glass
Color: Visual element produced when the light that hits an object is reflected to the eye
Density: Mass per unit volume of a liquid
Flavor: Perception of basic tastes, aromas and trigeminal sensations during mastication
Foamability: Capacity to form foam
Foam drainage: Excess of liquid drained from the wet foam to produce dry foam
Foam stability: Lifetime of the foam
Foam thickness: Viscosity of the foam
Foam volume: Amount of foam in mL
Foam velocity: Rate at which the foam collapses
Off-odors: Odors that are not characteristic of the product and are considered as faults
pH: Measurement of acidity or alkalinity based on the concentration of hydrogen ions
RGB: Color in the Red, Green and Blue scale
Taste: Perception through the receptor cells found in the papillae
Texture: Sensory characteristic of the solid or rheological state of a product
Viscosity: Consistency of a liquid, which may vary from thin to thick
Water hardness: High mineral concentration in water

References

  1. Pang, X.-N.; Li, Z.-J.; Chen, J.-Y.; Gao, L.-J.; Han, B.-Z. A Comprehensive Review of Spirit Drink Safety Standards and Regulations from an International Perspective. J. Food Prot. 2017, 80, 431–442. [Google Scholar] [CrossRef] [PubMed]
  2. Pushpangadan, P.; Dan, V.M.; Ijinu, T.; George, V. Food, Nutrition and Beverage. Indian J. Tradit. Knowl. 2012, 11, 26–34. [Google Scholar]
  3. Mise, J.K.; Nair, C.; Odera, O.; Ogutu, M. Factors influencing brand loyalty of soft drink consumers in Kenya and India. Int. J. Bus. Manag. Econ. Res. 2013, 4, 706–713. [Google Scholar]
  4. Schwarz, B.; Bischof, H.-P.; Kunze, M. Coffee, tea, and lifestyle. Prev. Med. 1994, 23, 377–384. [Google Scholar] [CrossRef] [PubMed]
  5. Plutowska, B.; Wardencki, W. Application of gas chromatography–olfactometry (GC–O) in analysis and quality assessment of alcoholic beverages–A review. Food Chem. 2008, 107, 449–463. [Google Scholar] [CrossRef]
  6. Viejo, C.G.; Fuentes, S.; Torrico, D.D.; Godbole, A.; Dunshea, F.R. Chemical characterization of aromas in beer and their effect on consumers liking. Food Chem. 2019, 293, 479–485. [Google Scholar] [CrossRef]
  7. Viejo, C.G.; Fuentes, S.; Li, G.; Collmann, R.; Condé, B.; Torrico, D. Development of a robotic pourer constructed with ubiquitous materials, open hardware and sensors to assess beer foam quality using computer vision and pattern recognition algorithms: RoboBEER. Food Res. Int. 2016, 89, 504–513. [Google Scholar] [CrossRef]
  8. Bamforth, C.; Russell, I.; Stewart, G. Beer: A Quality Perspective; Academic press: Cambridge, MA, USA, 2011. [Google Scholar]
  9. Gonzalez Viejo, C.; Fuentes, S.; Torrico, D.; Howell, K.; Dunshea, F.R. Assessment of beer quality based on foamability and chemical composition using computer vision algorithms, near infrared spectroscopy and artificial neural networks modelling techniques. J. Sci. Food Agric. 2018, 98, 618–627. [Google Scholar] [CrossRef]
  10. Belitz, H.-D.; Grosch, W.; Schieberle, P. Coffee, tea, cocoa. In Food Chemistry; Springer: Berlin/Heidelberg, Germany, 2009; pp. 938–970. [Google Scholar]
  11. Cullen, P.; Cullen, P.J.; Tiwari, B.K.; Valdramidis, V. Novel Thermal and Non-Thermal Technologies for Fluid Foods; Academic Press: San Diego, CA, USA, 2011. [Google Scholar]
  12. Wang, L.; Sun, D.-W.; Pu, H.; Cheng, J.-H. Quality analysis, classification, and authentication of liquid foods by near-infrared spectroscopy: A review of recent research developments. Crit. Rev. Food Sci. Nutr. 2017, 57, 1524–1538. [Google Scholar] [CrossRef]
  13. Piper, D.; Scharf, A. Descriptive Analysis: State of the Art and Recent Developments; ForschungsForum eV: Göttingen, Germany, 2004. [Google Scholar]
  14. Gonzalez Viejo, C.; Fuentes, S.; Torrico, D.D.; Howell, K.; Dunshea, F.R. Assessment of Beer Quality Based on a Robotic Pourer, Computer Vision, and Machine Learning Algorithms Using Commercial Beers. J. Food Sci. 2018, 83, 1381–1388. [Google Scholar] [CrossRef]
  15. Kemp, S.; Hollowood, T.; Hort, J. Sensory Evaluation: A Practical Handbook; Wiley: Oxford, UK, 2011. [Google Scholar]
  16. Stone, H.; Bleibaum, R.; Thomas, H.A. Sensory Evaluation Practices; Elsevier: Amsterdam, The Netherlands; Academic Press: Cambridge, MA, USA, 2012. [Google Scholar]
  17. Gonzalez Viejo, C.; Fuentes, S.; Howell, K.; Torrico, D.; Dunshea, F.R. Robotics and computer vision techniques combined with non-invasive consumer biometrics to assess quality traits from beer foamability using machine learning: A potential for artificial intelligence applications. Food Control 2018, 92, 72–79. [Google Scholar] [CrossRef]
  18. Gonzalez Viejo, C.; Fuentes, S.; Howell, K.; Torrico, D.D.; Dunshea, F.R. Integration of non-invasive biometrics with sensory analysis techniques to assess acceptability of beer by consumers. Physiol. Behav. 2019, 200, 139–147. [Google Scholar] [CrossRef] [PubMed]
  19. Gill, G.S.; Kumar, A.; Agarwal, R. Monitoring and grading of tea by computer vision–A review. J. Food Eng. 2011, 106, 13–19. [Google Scholar] [CrossRef]
  20. Ceccarelli, M. Fundamentals of Mechanics of Robotic Manipulation; Springer: Dordrecht, The Netherlands, 2004. [Google Scholar]
  21. Dell Technologies. The Difference Between AI, Machine Learning, and Robotics. Available online: https://www.delltechnologies.com/en-us/perspectives/the-difference-between-ai-machine-learning-and-robotics/ (accessed on 15 August 2019).
  22. Nayik, G.A.; Muzaffar, K.; Gull, A. Robotics and food technology: A mini review. J. Nutr. Food Sci. 2015, 5, 1–11. [Google Scholar]
  23. Iqbal, J.; Khan, Z.H.; Khalid, A. Prospects of robotics in food industry. Food Sci. Technol. 2017, 37, 159–165. [Google Scholar] [CrossRef]
  24. Caldwell, D.G. Robotics and Automation in the Food Industry: Current and Future Technologies; Elsevier: Amsterdam, The Netherlands, 2012. [Google Scholar]
  25. MAKR SHAKR Srl. MAKR SHAKR. Available online: https://www.makrshakr.com/ (accessed on 15 August 2019).
  26. Wilson, A. Vision and Robots Team up for Wine Production. 2016. Available online: https://www.vision-systems.com/non-factory/article/16736826/vision-and-robots-team-up-for-wine-production/ (accessed on 17 August 2019).
  27. Condé, B.C.; Fuentes, S.; Caron, M.; Xiao, D.; Collmann, R.; Howell, K.S. Development of a robotic and computer vision method to assess foam quality in sparkling wines. Food Control 2017, 71, 383–392. [Google Scholar] [CrossRef]
  28. Peano, D.; Chiaberge, M. Innovative Beer Dispenser Based on Collaborative Robotics. Ph.D. Thesis, Politecnico di Torino, Torino, Italy, 2018. [Google Scholar]
  29. Yasui, K.; Yokoi, S.; Shigyo, T.; Tamaki, T.; Shinotsuka, K. A customer-oriented approach to the development of a visual and statistical foam analysis. J. Am. Soc. Brew. Chem. 1998, 56, 152–158. [Google Scholar] [CrossRef]
  30. Gonzalez Viejo, C.; Torrico, D.D.; Dunshea, F.R.; Fuentes, S. Development of Artificial Neural Network Models to Assess Beer Acceptability Based on Sensory Properties Using a Robotic Pourer: A Comparative Model Approach to Achieve an Artificial Intelligence System. Beverages 2019, 5, 33. [Google Scholar] [CrossRef]
  31. Marr, B. How Artificial Intelligence Is Used to Make Beer. Forbes. 2019. Available online: https://www.forbes.com/sites/bernardmarr/2019/02/01/how-artificial-intelligence-is-used-to-make-beer/#35b077d070cf (accessed on 1 February 2019).
  32. Hutson, M. Beer-slinging robot predicts whether you’ll give that brew a thumbs up—or down. Science 2018. [Google Scholar] [CrossRef]
  33. Kayaalp, K.; Ceylan, O.; Süzen, A.A.; Yildiz, Z. Internet Controlled Smart Tea Machine Design with Arduino and Tea Consumption Analysis. Uluborlu Mesl. Bilimler Derg. 2018, 1, 29–37. [Google Scholar]
  34. Buhr, S. Meet Teforia, A Tea Brewing Robot For The Home. Techcrunch. 30 October 2015. Available online: https://techcrunch.com/2015/10/29/meet-teforia-a-tea-brewing-robot-for-the-home/ (accessed on 1 February 2019).
  35. Buhr, S. Taste Testing With teaBOT, The Robot That Brews Up Loose Leaf Tea In Under 30 Seconds. Techcrunch. 24 July 2015. Available online: https://techcrunch.com/2015/07/23/taste-testing-with-teabot-the-robot-that-brews-up-loose-leaf-tea-in-under-30-seconds/ (accessed on 18 June 2019).
  36. Coward, C. Mugsy, the Raspberry Pi-Powered Coffee Maker, Is Nearing Production. Medium Corporation. 2019. Available online: https://www.hackster.io/news/mugsy-the-raspberry-pi-based-robotic-coffee-maker-is-now-on-kickstarter-8a24f38ffbe6 (accessed on 15 July 2019).
  37. Budds, D. Can a $25,000 robot make better coffee than a barista? Curbed; Vox Media, Inc., 23 February 2018. Available online: https://www.curbed.com/2018/2/23/17041842/cafe-x-automated-coffee-robot-ammunition-design (accessed on 15 July 2019).
  38. Do, C.; Burgard, W. Accurate pouring with an autonomous robot using an RGB-D camera. In Proceedings of the International Conference on Intelligent Autonomous Systems, Singapore, 1–3 March 2018; pp. 210–221. [Google Scholar]
  39. Morita, T.; Kashiwagi, N.; Yorozu, A.; Walch, M.; Suzuki, H.; Karagiannis, D.; Yamaguchi, T. Practice of multi-robot teahouse based on printeps and evaluation of service quality. In Proceedings of the 2018 IEEE 42nd Annual Computer Software and Applications Conference (COMPSAC), Tokyo, Japan, 23–27 July 2018; pp. 147–152. [Google Scholar]
  40. Cai, D.C.; Chen, J.F.; Chang, Y.W. Design and Development of Beverage Maker. Appl. Mech. Mater. 2014, 590, 581–585. [Google Scholar] [CrossRef]
  41. Albrecht, C. Alberts Brings Robot Smoothie Stations to Europe. The Spoon. 18 April 2018. Available online: https://thespoon.tech/alberts-brings-robot-smoothie-stations-to-europe/ (accessed on 15 July 2019).
  42. Awwad, S.; Tarvade, S.; Piccardi, M.; Gattas, D.J. The use of privacy-protected computer vision to measure the quality of healthcare worker hand hygiene. Int. J. Qual. Health Care 2018, 31, 36–42. [Google Scholar] [CrossRef] [PubMed]
  43. Wu, D.; Sun, D.-W. Colour measurements by computer vision for food quality control–A review. Trends Food Sci. Technol. 2013, 29, 5–20. [Google Scholar] [CrossRef]
  44. Lukinac, J.; Mastanjević, K.; Mastanjević, K.; Nakov, G.; Jukić, M. Computer Vision Method in Beer Quality Evaluation—A Review. Beverages 2019, 5, 38. [Google Scholar] [CrossRef]
  45. Solem, J.E. Programming Computer Vision with Python: Tools and Algorithms for Analyzing Images; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2012. [Google Scholar]
  46. Marques, O. Practical Image and Video Processing Using MATLAB; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
  47. Sun, D.-W. Inspecting pizza topping percentage and distribution by a computer vision method. J. Food Eng. 2000, 44, 245–249. [Google Scholar] [CrossRef]
  48. Sarangi, P.; Mishra, B.; Majhi, B.; Dehuri, S. Gray-level image enhancement using differential evolution optimization algorithm. In Proceedings of the 2014 International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 20–21 February 2014; pp. 95–100. [Google Scholar]
  49. Vala, H.J.; Baxi, A. A review on Otsu image segmentation algorithm. Intern. J. Adv. Res. Comput. Eng. Technol. (IJARCET) 2013, 2, 387–389. [Google Scholar]
  50. Fuentes, S.; Hernández-Montes, E.; Escalona, J.; Bota, J.; Viejo, C.G.; Poblete-Echeverría, C.; Tongson, E.; Medrano, H. Automated grapevine cultivar classification based on machine learning using leaf morpho-colorimetry, fractal dimension and near-infrared spectroscopy parameters. Comput. Electron. Agric. 2018, 151, 311–318. [Google Scholar] [CrossRef]
  51. Szeliski, R. Computer Vision: Algorithms and Applications; Springer Science & Business Media: London, UK, 2010. [Google Scholar]
  52. Sun, D.-W. Computer Vision Technology for Food Quality Evaluation; Academic Press: Cambridge, MA, USA, 2016. [Google Scholar]
  53. Rodrigues, B.U.; da Costa, R.M.; Salvini, R.L.; da Silva Soares, A.; da Silva, F.A.; Caliari, M.; Cardoso, K.C.R.; Ribeiro, T.I.M. Cachaça classification using chemical features and computer vision. Procedia Comput. Sci. 2014, 29, 2024–2033. [Google Scholar] [CrossRef]
  54. Pessoa, K.D.; Suarez, W.T.; dos Reis, M.F.; Franco, M.d.O.K.; Moreira, R.P.L.; dos Santos, V.B. A digital image method of spot tests for determination of copper in sugar cane spirits. Spectrochim. Acta Part A: Mol. Biomol. Spectrosc. 2017, 185, 310–316. [Google Scholar] [CrossRef]
  55. Wang, Y.; Zhou, B.; Zhang, H.; Ge, J. A vision-based intelligent inspector for wine production. Intern. J. Mach. Learn. Cybern. 2012, 3, 193–203. [Google Scholar] [CrossRef]
  56. Martin, M.L.G.-M.; Ji, W.; Luo, R.; Hutchings, J.; Heredia, F.J. Measuring colour appearance of red wines. Food Qual. Preference 2007, 18, 862–871. [Google Scholar] [CrossRef]
  57. Pérez-Bernal, J.L.; Villar-Navarro, M.; Morales, M.L.; Ubeda, C.; Callejón, R.M. The smartphone as an economical and reliable tool for monitoring the browning process in sparkling wine. Comput. Electron. Agric. 2017, 141, 248–254. [Google Scholar] [CrossRef]
  58. Arakawa, T.; Iitani, K.; Wang, X.; Kajiro, T.; Toma, K.; Yano, K.; Mitsubayashi, K. A sniffer-camera for imaging of ethanol vaporization from wine: The effect of wine glass shape. Analyst 2015, 140, 2881–2886. [Google Scholar] [CrossRef] [PubMed]
  59. Cilindre, C.; Liger-Belair, G.; Villaume, S.; Jeandet, P.; Marchal, R. Foaming properties of various Champagne wines depending on several parameters: Grape variety, aging, protein and CO2 content. Anal. Chim. Acta 2010, 660, 164–170. [Google Scholar] [CrossRef] [PubMed]
  60. Crumpton, M.; Rice, C.J.; Atkinson, A.; Taylor, G.; Marangon, M. The effect of sucrose addition at dosage stage on the foam attributes of a bottle-fermented English sparkling wine. J. Sci. Food Agric. 2018, 98, 1171–1178. [Google Scholar] [CrossRef]
  61. Silva, T.; Godinho, M.S.; de Oliveira, A.E. Identification of pale lager beers via image analysis. Lat. Am. Appl. Res. 2011, 41, 141–145. [Google Scholar]
  62. Fengxia, S.; Yuwen, C.; Zhanming, Z.; Yifeng, Y. Determination of beer color using image analysis. J. Am. Soc. Brew. Chem. 2004, 62, 163–167. [Google Scholar] [CrossRef]
  63. Hepworth, N.; Varley, J.; Hind, A. Characterizing gas bubble dispersions in beer. Food Bioprod. Process. 2001, 79, 13–20. [Google Scholar] [CrossRef]
  64. Hepworth, N.; Hammond, J.; Varley, J. Novel application of computer vision to determine bubble size distributions in beer. J. Food Eng. 2004, 61, 119–124. [Google Scholar] [CrossRef]
  65. Cimini, A.; Pallottino, F.; Menesatti, P.; Moresi, M. A low-cost image analysis system to upgrade the rudin beer foam head retention meter. Food Bioprocess Technol. 2016, 9, 1587–1597. [Google Scholar] [CrossRef]
  66. Dong, C.-w.; Zhu, H.-k.; Zhao, J.-w.; Jiang, Y.-w.; Yuan, H.-b.; Chen, Q.-s. Sensory quality evaluation for appearance of needle-shaped green tea based on computer vision and nonlinear tools. J. Zhejiang Univ. Sci. B 2017, 18, 544–548. [Google Scholar] [CrossRef] [PubMed]
  67. Singh, G.; Kamal, N. Machine vision system for tea quality determination-Tea Quality Index (TQI). IOSR J. Eng. 2013, 3, 46–50. [Google Scholar] [CrossRef]
  68. Kumar, A.; Singh, H.; Sharma, S.; Kumar, A. Color Analysis of Black Tea Liquor using Image Processing Techniques. Int. J. Electron. Commun. Technol. 2011, 2, 292–296. [Google Scholar]
  69. Akuli, A.; Pal, A.; Bej, G.; Dey, T.; Ghosh, A.; Tudu, B.; Bhattacharyya, N.; Bandyopadhyay, R. A Machine Vision System for Estimation of Theaflavins and Thearubigins in Orthodox Black Tea. Int. J. Smart Sens. Intell. Syst. 2016, 9, 709–731. [Google Scholar] [CrossRef]
  70. Oblitas Cruz, J.; Castro Silupu, W. Computer vision system for the optimization of the color generated by the coffee roasting process according to time, temperature and mesh size. Ingeniería y Universidad 2014, 18, 355–368. [Google Scholar] [CrossRef]
  71. Várvölgyi, E.; Werum, T.; Dénes, L.; Soós, J.; Szabó, G.; Felföldi, J.; Esper, G.; Kovács, Z. Vision system and electronic tongue application to detect coffee adulteration with barley. Acta Aliment. 2014, 43, 197–205. [Google Scholar] [CrossRef]
  72. De Oliveira, E.M.; Pereira, R.G.F.A.; Leme, D.S.; Barbosa, B.H.G.; Rodarte, M.P. A computer vision system for coffee beans classification based on computational intelligence techniques. J. Food Eng. 2016, 171, 22–27. [Google Scholar] [CrossRef]
  73. Chu, B.; Yu, K.; Zhao, Y.; He, Y. Development of Noninvasive Classification Methods for Different Roasting Degrees of Coffee Beans Using Hyperspectral Imaging. Sensors 2018, 18, 1259. [Google Scholar] [CrossRef]
  74. Piazza, L.; Bulbarello, A.; Gigli, J. Rheological interfacial properties of espresso coffee foaming fractions. In Proceedings of the 13th World Congress of Food Science & Technology 2006, Nantes, France, 17–21 September 2006; p. 873. [Google Scholar]
  75. Buratti, S.; Benedetti, S.; Giovanelli, G. Application of electronic senses to characterize espresso coffees brewed with different thermal profiles. Eur. Food Res. Technol. 2017, 243, 511–520. [Google Scholar] [CrossRef]
  76. Damasceno, D.; Toledo, T.G.; Soares, A.D.S.; De Oliveira, S.B.; De Oliveira, A.E. CompVis: A novel method for drinking water alkalinity and total hardness analyses. Anal. Methods 2016, 8, 7832–7836. [Google Scholar] [CrossRef]
  77. Barker, G.; Jefferson, B.; Judd, S.; Judd, S. The control of bubble size in carbonated beverages. Chem. Eng. Sci. 2002, 57, 565–573. [Google Scholar] [CrossRef]
  78. Viejo, C.G.; Torrico, D.D.; Dunshea, F.R.; Fuentes, S. The Effect of Sonication on Bubble Size and Sensory Perception of Carbonated Water to Improve Quality and Consumer Acceptability. Beverages 2019, 5, 58. [Google Scholar] [CrossRef]
  79. Aliabadi, R.S.; Mahmoodi, N.O. Synthesis and characterization of polypyrrole, polyaniline nanoparticles and their nanocomposite for removal of azo dyes; sunset yellow and Congo red. J. Clean. Prod. 2018, 179, 235–245. [Google Scholar] [CrossRef]
  80. Botelho, B.G.; De Assis, L.P.; Sena, M.M. Development and analytical validation of a simple multivariate calibration method using digital scanner images for sunset yellow determination in soft beverages. Food Chem. 2014, 159, 175–180. [Google Scholar] [CrossRef] [PubMed]
  81. Sorouraddin, M.-H.; Saadati, M.; Mirabi, F. Simultaneous determination of some common food dyes in commercial products by digital image analysis. J. Food Drug Anal. 2015, 23, 447–452. [Google Scholar] [CrossRef]
  82. Hosseininia, S.A.R.; Kamani, M.H.; Rani, S. Quantitative determination of sunset yellow concentration in soft drinks via digital image processing. J. Food Meas. Charact. 2017, 11, 1065–1070. [Google Scholar] [CrossRef]
  83. Heredia, F.; González-Miret, M.; Álvarez, C.; Ramírez, A. DigiFood® (Análisis de imagen). Registro No. SE 1298. 2006. Available online: http://www.https.com//digifood.com/ (accessed on 1 November 2019).
  84. Fernández-Vázquez, R.; Stinco, C.M.; Hernanz, D.; Heredia, F.J.; Vicario, I.M. Colour training and colour differences thresholds in orange juice. Food Qual. Preference 2013, 30, 320–327. [Google Scholar] [CrossRef]
  85. Fernandez-Vazquez, R.; Stinco, C.M.; Melendez-Martinez, A.J.; Heredia, F.J.; Vicario, I.M. Visual and instrumental evaluation of orange juice color: A consumers' preference study. J. Sens. Stud. 2011, 26, 436–444. [Google Scholar] [CrossRef]
  86. Vélez-Ruiz, J.; Barbosa-Cánovas, G. Flow and structural characteristics of concentrated milk. J. Texture Stud. 2000, 31, 315–333. [Google Scholar] [CrossRef]
  87. dos Santos, P.M.; Pereira-Filho, E.R. Digital image analysis–an alternative tool for monitoring milk authenticity. Anal. Methods 2013, 5, 3669–3674. [Google Scholar] [CrossRef]
  88. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT press: Cambridge, MA, USA, 2016. [Google Scholar]
  89. Mathworks Inc. Mastering Machine Learning: A Step-by-Step Guide with MATLAB; Mathworks Inc.: Sherborn, MA, USA, 2018. [Google Scholar]
  90. Romero, M.; Luo, Y.; Su, B.; Fuentes, S. Vineyard water status estimation using multispectral imagery from an UAV platform and machine learning algorithms for irrigation scheduling management. Comput. Electron. Agric. 2018, 147, 109–117. [Google Scholar] [CrossRef]
  91. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  92. Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M. Tensorflow: A system for large-scale machine learning. In Proceedings of the 12th {USENIX} Symposium on Operating Systems Design and Implementation ({OSDI} 16), Savannah, GA, USA, 2–4 November 2016; pp. 265–283. [Google Scholar]
  93. Witten, I.H.; Frank, E.; Hall, M.A.; Pal, C.J. Data Mining: Practical Machine Learning Tools and Techniques; Morgan Kaufmann: Burlington, MA, USA, 2016. [Google Scholar]
  94. Hornik, K.; Buchta, C.; Zeileis, A. Open-source machine learning: R meets Weka. Comput. Stat. 2009, 24, 225–232. [Google Scholar] [CrossRef]
  95. Buss, D. Food Companies Get Smart About Artificial Intelligence. Food Technol. 2018, 72, 26–41. [Google Scholar]
  96. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
  97. Martino, J.C.R. Hands-On Machine Learning with Microsoft Excel 2019: Build complete data analysis flows, from data collection to visualization; Packt Publishing: Birmingham, UK, 2019. [Google Scholar]
  98. Backhaus, A.; Ashok, P.C.; Praveen, B.B.; Dholakia, K.; Seiffert, U. Classifying Scotch Whisky from near-infrared Raman spectra with a Radial Basis Function Network with Relevance Learning. In Proceedings of the 20th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN, Bruges, Belgium, 25–27 April 2012; pp. 411–416. [Google Scholar]
  99. Ceballos-Magaña, S.G.; Jurado, J.M.; Muñiz-Valencia, R.; Alcázar, A.; de Pablos, F.; Martín, M.J. Geographical authentication of tequila according to its mineral content by means of support vector machines. Food Anal. Methods 2012, 5, 260–265. [Google Scholar] [CrossRef]
  100. Andrade, J.M.; Ballabio, D.; Gómez-Carracedo, M.P.; Pérez-Caballero, G. Nonlinear classification of commercial Mexican tequilas. J. Chemom. 2017, 31, e2939. [Google Scholar] [CrossRef]
  101. Rodrigues, B.U.; Soares, A.d.S.; Costa, R.M.d.; Van Baalen, J.; Salvini, R.; Silva, F.A.d.; Caliari, M.; Cardoso, K.C.R.; Ribeiro, T.I.M.; Delbem, A.C. A feasibility cachaca type recognition using computer vision and pattern recognition. Comput. Electron. Agric. 2016, 123, 410–414. [Google Scholar] [CrossRef]
  102. Er, Y.; Atasoy, A. The classification of white wine and red wine according to their physicochemical qualities. Int. J. Intell. Syst. Appl. Eng. 2016, 4, 23–26. [Google Scholar] [CrossRef]
  103. da Costa, N.L.; Castro, I.A.; Barbosa, R. Classification of cabernet sauvignon from two different countries in South America by chemical compounds and support vector machines. Appl. Artif. Intell. 2016, 30, 679–689. [Google Scholar] [CrossRef]
  104. Perrot, N.; Baudrit, C.; Brousset, J.M.; Abbal, P.; Guillemin, H.; Perret, B.; Goulet, E.; Guerin, L.; Barbeau, G.; Picque, D. A decision support system coupling fuzzy logic and probabilistic graphical approaches for the agri-food industry: Prediction of grape berry maturity. PLoS ONE 2015, 10, e0134373. [Google Scholar] [CrossRef] [PubMed]
  105. Lvova, L.; Yaroshenko, I.; Kirsanov, D.; Di Natale, C.; Paolesse, R.; Legin, A. Electronic tongue for brand uniformity control: A case study of apulian red wines recognition and defects evaluation. Sensors 2018, 18, 2584. [Google Scholar] [CrossRef] [PubMed]
  106. Cetó, X.; Gutiérrez, J.M.; Moreno-Barón, L.; Alegret, S.; Del Valle, M. Voltammetric electronic tongue in the analysis of cava wines. Electroanalysis 2011, 23, 72–78. [Google Scholar] [CrossRef]
  107. Fuentes, S.; Tongson, E.J.; De Bei, R.; Gonzalez Viejo, C.; Ristic, R.; Tyerman, S.; Wilkinson, K. Non-Invasive Tools to Detect Smoke Contamination in Grapevine Canopies, Berries and Wine: A Remote Sensing and Machine Learning Modeling Approach. Sensors 2019, 19, 3335. [Google Scholar] [CrossRef]
  108. Navajas, M.P.S.; del Teso, S.F.; Romero, M.; Dario, P.; Díaz, D.; González, V.F.; Zurbano, P.F. Modelling wine astringency from its chemical composition using machine learning algorithms. OENO ONE 2019, 53, 498–510. [Google Scholar]
  109. Fuentes, S.; Gonzalez Viejo, C.; Wang, X.; Torrico, D.D. Aroma and quality assessment for vertical vintages using machine learning modelling based on weather and management information. In Proceedings of the 21st GiESCO International Meeting, Thessaloniki, Greece, 23–28 June 2019. [Google Scholar]
  110. Cetó, X.; Gutiérrez-Capitán, M.; Calvo, D.; del Valle, M. Beer classification by means of a potentiometric electronic tongue. Food Chem. 2013, 141, 2533–2540. [Google Scholar] [CrossRef]
  111. Alcázar, Á.; Jurado, J.M.; Palacios-Morillo, A.; de Pablos, F.; Martín, M.J. Recognition of the geographical origin of beer based on support vector machines applied to chemical descriptors. Food Control 2012, 23, 258–262. [Google Scholar] [CrossRef]
  112. Rousu, J.; Elomaa, T.; Aarts, R. Predicting the speed of beer fermentation in laboratory and industrial scale. In Proceedings of the International Work-Conference on Artificial Neural Networks, Alicante, Spain, 2–4 June 1999; pp. 893–901. [Google Scholar]
  113. Santos, J.P.; Lozano, J. Real time detection of beer defects with a hand held electronic nose. In Proceedings of the 2015 10th Spanish Conference on Electron Devices (CDE), Aranjuez-Madrid, Spain, 11–13 February 2015; pp. 1–4. [Google Scholar]
  114. Gardner, J.W.; Bartlett, P.N. A brief history of electronic noses. Sens. Actuators B Chem. 1994, 18, 210–211. [Google Scholar] [CrossRef]
  115. Voss, H.G.J.; Mendes Júnior, J.J.A.; Farinelli, M.E.; Stevan, S.L. A Prototype to Detect the Alcohol Content of Beers Based on an Electronic Nose. Sensors 2019, 19, 2646. [Google Scholar] [CrossRef]
  116. Zhang, Y.; Jia, S.; Zhang, W. Predicting acetic acid content in the final beer using neural networks and support vector machine. J. Inst. Brew. 2012, 118, 361–367. [Google Scholar] [CrossRef]
  117. Yu, H.; Wang, J.; Yao, C.; Zhang, H.; Yu, Y. Quality grade identification of green tea using E-nose by CA and ANN. LWT-Food Sci. Technol. 2008, 41, 1268–1273. [Google Scholar] [CrossRef]
  118. Chen, Q.; Zhao, J.; Chen, Z.; Lin, H.; Zhao, D.-A. Discrimination of green tea quality using the electronic nose technique and the human panel test, comparison of linear and nonlinear classification tools. Sens. Actuators B Chem. 2011, 159, 294–300. [Google Scholar] [CrossRef]
  119. Cimpoiu, C.; Cristea, V.-M.; Hosu, A.; Sandru, M.; Seserman, L. Antioxidant activity prediction and classification of some teas using artificial neural networks. Food Chem. 2011, 127, 1323–1328. [Google Scholar] [CrossRef]
  120. Guo, Z.; Chen, L.; Zhao, C.; Huang, W.; Chen, Q. Nondestructive estimation of total free amino acid in green tea by near infrared spectroscopy and artificial neural networks. In Proceedings of the International Conference on Computer and Computing Technologies in Agriculture, Beijing, China, 29–31 October 2011; pp. 43–53. [Google Scholar]
  121. Zhu, H.; Ye, Y.; He, H.; Dong, C. Evaluation of green tea sensory quality via process characteristics and image information. Food Bioprod. Process. 2017, 102, 116–122. [Google Scholar] [CrossRef]
  122. Messias, J.A.; Melo, E.d.C.; Lacerda Filho, A.F.d.; Braga, J.L.; Cecon, P.R. Determination of the influence of the variation of reducing and non-reducing sugars on coffee quality with use of artificial neural network. Engenharia Agrícola 2012, 32, 354–360. [Google Scholar] [CrossRef]
  123. Domínguez, R.; Moreno-Barón, L.; Muñoz, R.; Gutiérrez, J. Voltammetric electronic tongue and support vector machines for identification of selected features in Mexican coffee. Sensors 2014, 14, 17770–17785. [Google Scholar] [CrossRef]
  124. Romani, S.; Cevoli, C.; Fabbri, A.; Alessandrini, L.; Dalla Rosa, M. Evaluation of coffee roasting degree by using electronic nose and artificial neural network for off-line quality control. J. Food Sci. 2012, 77, C960–C965. [Google Scholar] [CrossRef]
  125. Thazin, Y.; Pobkrut, T.; Kerdcharoen, T. Prediction of acidity levels of fresh roasted coffees using e-nose and artificial neural network. In Proceedings of the 2018 10th International Conference on Knowledge and Smart Technology (KST), Chiangmai, Thailand, 31 January–3 February 2018; pp. 210–215. [Google Scholar]
  126. Bucak, I.O.; Karlik, B. Detection of drinking water quality using CMAC based artificial neural Networks. Ekoloji 2011, 20, 75–81. [Google Scholar] [CrossRef]
  127. Camejo, J.; Pacheco, O.; Guevara, M. Classifier for drinking water quality in real time. In Proceedings of the 2013 International Conference on Computer Applications Technology (ICCAT), Sousse, Tunisia, 20–22 January 2013; pp. 1–5. [Google Scholar]
  128. Chatterjee, S.; Sarkar, S.; Dey, N.; Sen, S.; Goto, T.; Debnath, N.C. Water quality prediction: Multi objective genetic algorithm coupled artificial neural network based approach. In Proceedings of the 2017 IEEE 15th International Conference on Industrial Informatics (INDIN), Emden, Germany, 24–26 July 2017; pp. 963–968. [Google Scholar]
  129. Qiu, S.; Gao, L.; Wang, J. Classification and regression of ELM, LVQ and SVM for E-nose data of strawberry juice. J. Food Eng. 2015, 144, 77–85. [Google Scholar] [CrossRef]
  130. Hong, X.; Wang, J.; Qi, G. E-nose combined with chemometrics to trace tomato-juice quality. J. Food Eng. 2015, 149, 38–43. [Google Scholar] [CrossRef]
  131. Qiu, S.; Wang, J. The prediction of food additives in the fruit juice based on electronic nose with chemometrics. Food Chem. 2017, 230, 208–214. [Google Scholar] [CrossRef] [PubMed]
  132. Nandeshwar, V.J.; Phadke, G.S.; Das, S. Classification of orange juice adulteration using LDA, PCA and ANN. In Proceedings of the 2016 IEEE 1st International Conference on Power Electronics, Intelligent Control and Energy Systems (ICPEICES), Delhi, India, 4–6 July 2016; pp. 1–5. [Google Scholar]
  133. Rácz, A.; Bajusz, D.; Fodor, M.; Héberger, K. Comparison of classification methods with “n-class” receiver operating characteristic curves: A case study of energy drinks. Chemom. Intell. Lab. Syst. 2016, 151, 34–43. [Google Scholar] [CrossRef]
  134. Rácz, A.; Héberger, K.; Fodor, M. Quantitative determination and classification of energy drinks using near-infrared spectroscopy. Anal. Bioanal. Chem. 2016, 408, 6403–6411. [Google Scholar] [CrossRef] [PubMed]
  135. Mamat, M.; Samad, S.A. Classification of beverages using electronic nose and machine vision systems. In Proceedings of the 2012 Asia Pacific Signal and Information Processing Association Annual Summit and Conference, Hollywood, CA, USA, 3–6 December 2012; pp. 1–6. [Google Scholar]
  136. Balabin, R.M.; Smirnov, S.V. Melamine detection by mid-and near-infrared (MIR/NIR) spectroscopy: A quick and sensitive method for dairy products analysis including liquid milk, infant formula, and milk powder. Talanta 2011, 85, 562–568. [Google Scholar] [CrossRef]
  137. Jain, A.; Flynn, P.; Ross, A.A. Handbook of Biometrics; Springer: New York, NY, USA, 2007. [Google Scholar]
  138. Viola, P.; Jones, M. Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, Kauai, HI, USA, 8–14 December 2001; Volume 511, pp. I-511–I-518. [Google Scholar]
  139. McDuff, D.; Mahmoud, A.; Mavadati, M.; Amr, M.; Turcot, J.; el Kaliouby, R. AFFDEX SDK: A cross-platform real-time multi-face expression recognition toolkit. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, San Jose, CA, USA, 7–12 May 2016; pp. 3723–3726. [Google Scholar]
  140. Loijens, L.; Krips, O.; van Kuilenburg, H.; den Uyl, M.; Ivan, P. FaceReader™ Version 6.1 Reference Manual; Noldus Information Technology B.V.: Wageningen, The Netherlands, 2015. [Google Scholar]
  141. Fuentes, S.; Gonzalez Viejo, C.; Torrico, D.; Dunshea, F. Development of a biosensory computer application to assess physiological and emotional responses from sensory panelists. Sensors 2018, 18, 2958. [Google Scholar] [CrossRef]
  142. de Wijk, R.A.; He, W.; Mensink, M.G.; Verhoeven, R.H.; de Graaf, C. ANS responses and facial expressions differentiate between the taste of commercial breakfast drinks. PLoS ONE 2014, 9, e93823. [Google Scholar] [CrossRef]
  143. He, W.; Boesveldt, S.; de Graaf, C.; de Wijk, R.A. Dynamics of autonomic nervous system responses and facial expressions to odors. Front. Psychol. 2014, 5, 110. [Google Scholar] [CrossRef]
  144. Frelih, N.G.; Podlesek, A.; Babič, J.; Geršak, G. Evaluation of psychological effects on human postural stability. Measurement 2017, 98, 186–191. [Google Scholar] [CrossRef]
  145. Jain, M.; Deb, S.; Subramanyam, A. Face video based touchless blood pressure and heart rate estimation. In Proceedings of the 2016 IEEE 18th International Workshop on Multimedia Signal Processing (MMSP), Montreal, QC, Canada, 21–23 September 2016; pp. 1–5. [Google Scholar]
  146. Carvalho, L.; Virani, M.H.; Kutty, M.S. Analysis of Heart Rate Monitoring Using a Webcam. Analysis 2014, 3, 6593–6595. [Google Scholar]
  147. Viejo, C.G.; Fuentes, S.; Torrico, D.D.; Dunshea, F.R. Non-Contact Heart Rate and Blood Pressure Estimations from Video Analysis and Machine Learning Modelling Applied to Food Sensory Responses: A Case Study for Chocolate. Sensors 2018, 18, 1802. [Google Scholar] [CrossRef] [PubMed]
  148. Torrico, D.D.; Fuentes, S.; Viejo, C.G.; Ashman, H.; Gurr, P.A.; Dunshea, F.R. Analysis of thermochromic label elements and colour transitions using sensory acceptability and eye tracking techniques. LWT 2018, 89, 475–481. [Google Scholar] [CrossRef]
  149. Kamboj, S.K.; Joye, A.; Bisby, J.A.; Das, R.K.; Platt, B.; Curran, H.V. Processing of facial affect in social drinkers: A dose–response study of alcohol using dynamic emotion expressions. Psychopharmacology 2013, 227, 31–39. [Google Scholar] [CrossRef] [PubMed]
  150. Beyts, C.; Chaya, C.; Dehrmann, F.; James, S.; Smart, K.; Hort, J. A comparison of self-reported emotional and implicit responses to aromas in beer. Food Qual. Preference 2017, 59, 68–80. [Google Scholar] [CrossRef]
  151. Garcia-Burgos, D.; Zamora, M.C. Exploring the hedonic and incentive properties in preferences for bitter foods via self-reports, facial expressions and instrumental behaviours. Food Qual. Preference 2015, 39, 73–81. [Google Scholar] [CrossRef]
  152. Danner, L.; Haindl, S.; Joechl, M.; Duerrschmid, K. Facial expressions and autonomous nervous system responses elicited by tasting different juices. Food Res. Int. 2014, 64, 81–90. [Google Scholar] [CrossRef]
  153. Danner, L.; Sidorkina, L.; Joechl, M.; Duerrschmid, K. Make a face! Implicit and explicit measurement of facial expressions elicited by orange juices using face reading technology. Food Qual. Preference 2014, 32, 167–172. [Google Scholar] [CrossRef]
  154. McPherson, S.S. Artificial intelligence: Building Smarter Machines; Twenty-First Century Books: Minneapolis, MN, USA, 2018. [Google Scholar]
  155. Joshi, N. How Far are we from Achieving Artificial General Intelligence? Forbes. 10 June 2019. Available online: https://www.forbes.com/sites/cognitiveworld/2019/06/10/how-far-are-we-from-achieving-artificial-general-intelligence/#2edd17436dc4 (accessed on 17 July 2019).
  156. Mohammadi, V.; Minaei, S. Artificial Intelligence in the Production Process. In Engineering Tools in the Beverage Industry; Elsevier: Amsterdam, The Netherlands, 2019; pp. 27–63. [Google Scholar]
Figure 1. Diagram showing the equipment typically needed for computer vision (CV) analysis, which consists of a camera, the sample, a light-source, and computer software for analysis.
Figure 2. Types and algorithms for machine learning modeling. Information obtained from Matlab Machine Learning and Deep Learning Toolboxes [89].
Figure 3. Diagram showing the techniques used in face recognition to assess emotions from consumers; the example depicted was obtained using FaceReader™ software.
Figure 4. Diagram showing the technique used to measure body temperature from consumers during sensory sessions using an infrared thermal camera and computer vision algorithms capable of recognizing the eye section.
Figure 5. Venn diagram showing the relationship between artificial intelligence and other integrated technologies. Those outside the main category represent techniques that may function as stand-alone tools and do not necessarily fall within the artificial intelligence group in all cases.