The hardware block diagram is shown in Figure 5. The central processing unit is a single-board computer, a Raspberry Pi (RPi) v3 [35]. It has a 1.2 GHz 64-bit ARMv8 quad-core microprocessor, 1 GB RAM, a micro SD card slot supporting up to 32 GB, an LCD interface (DSI), an onboard BLE and Wi-Fi module, and other built-in peripheral hardware. Since image processing requires high memory and processing speed [36], the RPi is preferred over microcontrollers such as the Advanced Virtual RISC (AVR) or Peripheral Interface Controller (PIC), which have lower memory and speed. A 7” capacitive touch LCD [37] is interfaced with the RPi through the DSI and I2C ports. The LCD has 24-bit color depth and an 800 × 480 pixel screen resolution.
An RPi HD camera module [38] is attached through a 4 mm hole near the center of the prototype microwave oven's roof. The focal length is set to the height of the oven cavity, 20 cm, to obtain a sharper image. A FLIR thermal camera [28] is mounted near the center of the roof through another 4 mm hole and is interfaced with the RPi using the Serial Peripheral Interface (SPI) protocol. To produce sounds, a speaker [39] is connected to the RPi board via a 3.5 mm stereo jack. One of the oven's door switches is connected to an RPi interrupt pin, and the other door switch is placed in the AC circuit path so that current can only flow when the door is closed. A daylight white LED bulb [40] is connected via relay contacts [41]. The bulb is covered by a grounded stainless-steel wire mesh that shields it from the microwaves. For microwave generation, a Solid-State Relay (SSR) [42] toggles power in the magnetron circuit. The RPi board and display are powered by a 110 V AC to 5.1 V DC adapter [43]. The RPi board provides 3.3 V and 5 V DC outputs, which power the RPi HD camera, the FLIR thermal camera, the relay, and the speaker.
In this research, the two cameras are placed on the microwave oven's roof, where they can sense light and heat through the two small 4 mm holes. Thus, their bodies are not directly exposed to the microwaves, except for the lenses. The operating temperature of the thermal camera is −10 °C to +80 °C, and it can measure temperatures from −10 °C to +140 °C [28]. The RPi camera, used to capture the color image of the food for classification, can operate from −20 °C to 60 °C [38]. A Debian-based Linux operating system is installed on a 16 GB SD card inserted in the RPi board. The firmware is written in Python, and the required packages, such as TensorFlow, Keras 2.3.1, PyQt5, PiCamera, PyLepton, OpenCV (cv2), RPi.GPIO, threading, and subprocess, are installed. A window-based graphical user interface (GUI) is created using Qt Designer [44], and the program is designed around the event-based signal–slot concept of PyQt5 [45]. The firmware consists of two layers: the driver layer and the application layer. The driver layer consists of low-level firmware that handles the peripheral hardware such as the camera, relay, SSR, FLIR camera, speaker, and door switches. The application layer accesses the hardware by calling the driver-layer functions.
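The two-layer split could be sketched as follows. This is an illustrative outline, not the actual firmware: the class and method names are assumptions, and the hardware calls are stubbed (the real driver layer would use RPi.GPIO, PiCamera, etc.).

```python
# Sketch of the driver/application layering; hardware access is stubbed.

class DriverLayer:
    """Low-level wrappers around peripheral hardware (stubs)."""

    def set_light_relay(self, on):
        # Would drive the LED bulb relay pin via RPi.GPIO.
        return "light on" if on else "light off"

    def set_ssr(self, on):
        # Would toggle the Solid-State Relay in the magnetron circuit.
        return "ssr on" if on else "ssr off"

    def door_closed(self):
        # Would read the door-switch interrupt pin.
        return True


class ApplicationLayer:
    """High-level logic; touches hardware only through the driver layer."""

    def __init__(self, drivers):
        self.drivers = drivers

    def start_heating(self):
        # Heating is only allowed while the door is closed.
        if not self.drivers.door_closed():
            return []
        return [self.drivers.set_light_relay(True), self.drivers.set_ssr(True)]
```

The point of the split is that the application layer can be tested or modified without touching pin-level code.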
Whenever the door of the microwave is closed, an interrupt is triggered and its callback function is executed. This causes the actions shown in the pseudocode of Figure 6 to be executed. The program captures a 320 × 240 color image using the RPi camera and then determines whether the microwave is empty or non-empty (i.e., has food inside). To do that, a preexisting image of the microwave oven without any food, Ie, is compared with the captured image, Ic. The environment inside the microwave cavity does not change when there is no food, but the captured image may have a different brightness level due to the auto adjustments of the camera. To remove the effect of brightness, the images are first converted to grayscale and then normalized by subtracting the mean from each pixel, so that the average pixel value becomes zero. The two normalized images are then subtracted, and the mean of the absolute pixel values of the difference image is calculated. The result is compared with a threshold to decide whether the cavity is empty or non-empty.
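The empty/non-empty check can be written as a pure function over the two grayscale images. This is a minimal sketch: the threshold value below is an assumption for illustration, not the one used in the prototype.

```python
import numpy as np

def is_cavity_empty(captured_gray, empty_gray, threshold=5.0):
    """Compare the captured grayscale image Ic with the stored
    empty-cavity image Ie after brightness normalization, using the
    mean absolute pixel difference as the decision score."""
    c = captured_gray.astype(np.float64)
    e = empty_gray.astype(np.float64)
    # Subtract each image's mean so the average pixel value becomes zero,
    # removing the effect of the camera's automatic brightness adjustment.
    c -= c.mean()
    e -= e.mean()
    # Mean of the absolute pixel values of the difference image.
    score = np.abs(c - e).mean()
    return score < threshold
```

A uniform brightness shift of the whole frame cancels out under the zero-mean normalization, so only genuine content changes (food in the cavity) raise the score above the threshold.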
If the microwave was empty when the door was closed, the program starts retraining the CNN model, if required, as a background process. Retraining is required whenever a new item has been added or the user has corrected a misclassified item.
If the microwave has food when the door is closed, the image is classified using the deep learning model described in Section 3.1.2. The classify function returns the class number of the classified image, and then the classify window is displayed. Some screenshots of the GUI are shown in Figure 9. The classify window shows the captured image, and the item name is selected in a combo box that contains the list of all existing items. The target temperature is recommended from the food_temperature file, which contains a dictionary mapping each item name to its target temperature.
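The recommendation lookup can be sketched as follows, assuming the food_temperature file stores a plain dictionary of item names to target temperatures. The item names, temperature values, and default below are made up for illustration.

```python
# Hypothetical contents of the food_temperature file: item name -> target in deg C.
food_temperature = {"soup": 70, "pizza": 65, "rice": 60}

def recommend_target_temperature(item_name, table, default=60):
    """Return the stored target temperature for an item, falling back
    to a default for items not yet in the table."""
    return table.get(item_name, default)
```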
If the item is a new item, the user presses the ‘New’ button, which opens the new item window. In that window, the user can enter the new item name and its preferred target temperature using an onscreen keyboard. When the ‘OK’ button is pressed, this information is appended to the new_food_temperature file. Training and validation images are then generated for the new class using data augmentation, as discussed in Section 3.1.1. The flag isTrainingRequired is set, indicating that retraining of the model is required. After pressing the ‘OK’ button, the user returns to the classify window.
If the item is misclassified, the user can select the correct item name from the combo box in the classify window. The program then deletes some random images from the training and validation dataset of the correct class and adds new augmented images generated from the captured image. In this way, the model is retrained with the new images so that the item will not be misclassified again. The isTrainingRequired flag is also set in this case.
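The correction step could be sketched as a list operation over the class's image filenames. The function name and the replacement count are assumptions for illustration; the dataset size is kept constant, matching the delete-some/add-some scheme described above.

```python
import random

def refresh_class_dataset(existing_images, augmented_new, n_replace=10):
    """Drop n_replace randomly chosen images from the class's dataset and
    add the same number of freshly augmented images of the corrected item,
    keeping the dataset size constant."""
    n = min(n_replace, len(existing_images), len(augmented_new))
    kept = random.sample(existing_images, len(existing_images) - n)
    return kept + augmented_new[:n]
```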
When the ‘Start’ button is pressed in the classify window, the heating window is shown. The relay for the bulb and the cooling fan, and the SSR that heats the food, are turned on. A repetitive timer with an interval of 1 s is started, and its callback function is executed every second. The actions performed in the callback function are shown in the pseudocode in Figure 7. The average food temperature is calculated and the thermal image is captured as discussed in Section 3.2. Heating continues until the current food temperature becomes greater than or equal to the target temperature. Heating also stops if the user presses the ‘Stop’ button or the door is opened during heating. A beep sound is played when the food has been heated to the target temperature. The user can then open the door and take the food out of the microwave oven. When the door is closed again, the callback function for the door-close event, shown in Figure 6, is executed.
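The stop conditions checked in the 1 s timer callback can be modeled as a small simulation. The function below is illustrative only: the temperature sequences and the interrupt_at parameter (modeling the ‘Stop’ button or a door opening at a given tick) are assumptions, not firmware code.

```python
def run_heating(temperature_readings, target_temp, interrupt_at=None):
    """Step through the 1 s timer callbacks over a sequence of measured
    average food temperatures and report when and why heating stops."""
    for tick, temp in enumerate(temperature_readings, start=1):
        if interrupt_at is not None and tick >= interrupt_at:
            return tick, "interrupted"      # Stop pressed or door opened
        if temp >= target_temp:
            return tick, "target reached"   # play beep, stop heating
    return len(temperature_readings), "still heating"
```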
If the isTrainingRequired flag was set due to the addition of a new item or the correction of a misclassified item, and no training is currently in progress, then a separate thread runs the code to retrain the CNN model. Because training runs on a separate thread, normal operation of the microwave continues while training proceeds in the background. The total number of classes for the model is calculated by adding the items in the food_temperature and new_food_temperature files. The CNN model is trained until the validation loss falls below 0.25 or for 5000 epochs, whichever comes first. The training and validation batch size is set to 64, and the number of steps per epoch is calculated dynamically because the sample size changes as new items are added. The learning rate is set to 1e-6. Once training is complete, the model file is saved, the items from the new_food_temperature file are appended to the food_temperature file, and the new_food_temperature file is emptied.
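The dynamic step count and the stopping rule can be sketched as two small helpers. Only the arithmetic stated in the text is shown; the Keras training calls themselves are omitted, and the rounding-up choice for partial batches is an assumption.

```python
import math

BATCH_SIZE = 64      # training and validation batch size from the text
LOSS_TARGET = 0.25   # stop once validation loss drops below this
MAX_EPOCHS = 5000    # hard cap on the number of epochs

def steps_per_epoch(num_samples, batch_size=BATCH_SIZE):
    """Recomputed before each training run, since the sample count
    grows whenever a new item class is added."""
    return math.ceil(num_samples / batch_size)

def should_stop_training(val_loss, epoch):
    """Stop at val_loss < 0.25 or 5000 epochs, whichever comes first."""
    return val_loss < LOSS_TARGET or epoch >= MAX_EPOCHS
```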