Article

Thermal-Adaptation-Behavior-Based Thermal Sensation Evaluation Model with Surveillance Cameras

1 School of Information Science and Engineering, Shandong Normal University, Jinan 250358, China
2 School of Computer Science and Technology, Shandong Jianzhu University, Jinan 250101, China
3 School of Computer Science, Liaocheng University, Liaocheng 252000, China
* Authors to whom correspondence should be addressed.
Sensors 2024, 24(4), 1219; https://doi.org/10.3390/s24041219
Submission received: 5 January 2024 / Revised: 26 January 2024 / Accepted: 9 February 2024 / Published: 14 February 2024
(This article belongs to the Special Issue Artificial Intelligence and Sensors in Smart Buildings)

Abstract

The construction sector is responsible for almost 30% of the world's total energy consumption, and a significant portion of this energy is used by heating, ventilation and air-conditioning (HVAC) systems to ensure people's thermal comfort. In practice, HVAC management in buildings typically relies on facility operators manually setting temperature setpoints. However, adjusting these setpoints in real time according to the thermal comfort of the people inside a building could dramatically improve its energy efficiency. We therefore propose a model for the non-intrusive, dynamic inference of occupant thermal comfort from indoor surveillance camera data. It is based on a two-stream transformer-augmented adaptive graph convolutional network that identifies people's heat-related adaptive behaviors. The transformer strengthens the original adaptive graph convolutional module, further improving the accuracy of thermal adaptation behavior detection. Experiments were conducted on a dataset covering 16 distinct thermal adaptation behaviors. The findings indicate that the suggested strategy raises the behavior recognition accuracy of the proposed model to 96.56%. The model opens the way to energy savings, emission reductions and dynamic decision making in the energy management systems of intelligent buildings.

1. Introduction

Energy use in buildings is a substantial component of worldwide energy consumption [1,2,3]. Conventional building management systems operate on fixed schedules, and prior research has shown that a considerable amount of energy is wasted in spaces that are either underutilized or overutilized [4,5]. Furthermore, the presence of occupants and their use of equipment contribute to internal energy consumption and affect the indoor temperature. Thermal comfort inference has been extensively researched and has garnered significant interest because of its potential to greatly enhance building energy efficiency while keeping occupants satisfied with their indoor environment [6,7,8,9]. Thermal sensation detection algorithms derived from thermal adaptation behaviors observed by a camera offer an innovative, noninvasive means of detecting thermal sensations without additional sensing equipment [10]. This topic has been studied for many years and remains popular because of its significant potential for building energy savings. It is therefore crucial to accurately assess an individual's thermal perception in order to regulate air-conditioning systems in real time.
Thermal comfort inference is a challenging task. There have been various attempts to evaluate thermal comfort with thermoregulatory models, wearable sensors and visual imaging equipment. The most widely used thermoregulatory model [11] predicts the percentage of satisfied occupants based on the predicted mean vote (PMV) and the predicted percentage of dissatisfied (PPD) occupants; this approach is thus known as the PMV-PPD model. It evaluates the correlation between human thermal sensation and six parameters: the air velocity, air temperature, humidity, mean radiant temperature, clothing insulation and activity level. Although the PMV-PPD model has achieved promising results and has been used extensively in thermal comfort quantification [12,13,14,15], it still has significant constraints, such as a lack of precision in capturing individual characteristics, which leads to the underestimation or overestimation of personalized thermal comfort [16,17]. Furthermore, the model ignores a vital factor: the physiological traits of individuals [16,17].
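For concreteness, the PMV-PPD calculation referenced above can be reproduced in a few dozen lines. The sketch below follows the well-known ISO 7730 formulation of Fanger's model; the function name, defaults and example inputs are our own illustrative choices, not part of the cited studies.

```python
import math

def pmv_ppd(ta, tr, vel, rh, met=1.2, clo=0.5, wme=0.0):
    """Fanger's PMV/PPD model following the ISO 7730 reference algorithm.
    ta: air temperature (C), tr: mean radiant temperature (C),
    vel: air velocity (m/s), rh: relative humidity (%),
    met: activity level (met), clo: clothing insulation (clo)."""
    pa = rh * 10.0 * math.exp(16.6536 - 4030.183 / (ta + 235.0))  # vapour pressure, Pa
    icl = 0.155 * clo                    # clothing insulation, m2K/W
    m, w = met * 58.15, wme * 58.15      # metabolic rate / external work, W/m2
    mw = m - w
    fcl = 1.0 + 1.29 * icl if icl <= 0.078 else 1.05 + 0.645 * icl
    hcf = 12.1 * math.sqrt(vel)          # forced-convection coefficient
    taa, tra = ta + 273.0, tr + 273.0
    # Iteratively solve the heat balance for the clothing surface temperature.
    tcla = taa + (35.5 - ta) / (3.5 * icl + 0.1)
    p1 = icl * fcl
    p2, p3, p4 = p1 * 3.96, p1 * 100.0, p1 * taa
    p5 = 308.7 - 0.028 * mw + p2 * (tra / 100.0) ** 4
    xn, xf = tcla / 100.0, tcla / 50.0
    hc = hcf
    for _ in range(150):                 # iteration cap for safety
        if abs(xn - xf) <= 0.00015:
            break
        xf = (xf + xn) / 2.0
        hcn = 2.38 * abs(100.0 * xf - taa) ** 0.25   # natural convection
        hc = max(hcf, hcn)
        xn = (p5 + p4 * hc - p2 * xf ** 4) / (100.0 + p3 * hc)
    tcl = 100.0 * xn - 273.0
    # Heat losses: skin diffusion, sweating, latent/dry respiration,
    # radiation and convection.
    hl1 = 3.05e-3 * (5733.0 - 6.99 * mw - pa)
    hl2 = 0.42 * (mw - 58.15) if mw > 58.15 else 0.0
    hl3 = 1.7e-5 * m * (5867.0 - pa)
    hl4 = 0.0014 * m * (34.0 - ta)
    hl5 = 3.96 * fcl * (xn ** 4 - (tra / 100.0) ** 4)
    hl6 = fcl * hc * (tcl - ta)
    ts = 0.303 * math.exp(-0.036 * m) + 0.028        # sensation coefficient
    pmv = ts * (mw - hl1 - hl2 - hl3 - hl4 - hl5 - hl6)
    ppd = 100.0 - 95.0 * math.exp(-0.03353 * pmv ** 4 - 0.2179 * pmv ** 2)
    return pmv, ppd

# e.g., 25 C air/radiant temperature, 0.1 m/s air speed, 50% RH, light office work:
print(pmv_ppd(ta=25.0, tr=25.0, vel=0.1, rh=50.0, met=1.2, clo=0.5))
```

Note how all six inputs are environmental or whole-population averages; nothing in the equation reflects an individual occupant, which is precisely the limitation discussed above.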
Thermal comfort perception approaches based on wearable sensors identify physiological variables using on-body monitoring equipment. A mathematical model is then developed to link measures such as blood pressure, skin temperature [18], electroencephalograph readings [19] and heart rate to a person's predicted thermal sensation. Li et al. designed a smart bracelet that detects the wrist skin temperature, its variation over time and the heart rate, which together can accurately evaluate the body's thermal sensation in different activity states [20]. Reference [21] continuously measured individuals' electroencephalogram (EEG) signals with an EEG cap and used an ensemble machine learning method to build a discriminant model inferring the thermal comfort state of occupants. Because these physiological variables are measured directly, they are unaffected by the uncertainty inherent in PMV models, and such robust representations yield more discriminative physiological characteristics of occupants' thermal sensation. Although these methods have achieved promising results, they are limited by their requirements for user involvement and skin contact. In addition, user acceptance of the sensor configuration is an important factor affecting the adoption of such systems.
In the realm of computer vision, three specific methods are used to predict thermal sensations: skin-temperature-based approaches, skin-color-based approaches and posture-estimation-based techniques [22]. Skin temperature evaluation is built on the idea that the skin temperature is a trustworthy indicator of thermal perception [23,24]. A number of studies have used image processing to measure the skin temperature of various body parts and then derive the thermal sensation from the recorded values. Among the visible regions of the body, the face attracts the most attention, with the forehead and cheeks being the most frequently used areas; facial skin is exceptionally useful for detecting thermal sensations because it reflects them precisely [25,26,27]. Researchers have also examined the skin temperature of the hand [28,29,30] and of other regions of the upper body, such as the elbows and head [31]. These systems depend on precise skin imaging and detection to predict thermal comfort accurately, so deep learning models have been applied to evaluate different skin regions. Several studies have used OpenPose [32], a computer vision library, to recognize elbows and faces in RGB images [33], together with the Haar-based face detector, a conventional computer vision method, to determine the temperatures of faces in thermal images. However, manually extracting the skin temperature causes a significant decline in recognition accuracy under dynamic conditions such as varying garment insulation and room temperature. Moreover, specialized devices, such as thermographic cameras [25,31], are needed to obtain adequate image quality.
Skin-color-based thermal sensation prediction systems extract color attributes to estimate the skin's temperature. The strategies used in these investigations can be classified as follows: skin color analysis of the back of the hand in hand photographs was used to determine the skin temperature in [18,34,35], while skin color analysis of the face and cheeks was used to evaluate thermoregulation from webcam images in [24,36]. Multiple studies have used deep learning to compute the skin temperature from the hue of the hand's skin [18,35], fine-tuning pre-trained models such as Inception and DenseNet201 on a large number of manually captured images paired with skin temperatures recorded by an iButton sensor. The Eulerian video magnification (EVM) approach [37], which amplifies information not readily perceptible to the human eye, has also been used to detect tiny fluctuations in skin color in RGB images. As with the skin-temperature methods, these approaches depend on reliable skin imaging to predict thermal comfort. Furthermore, images captured by cameras are affected by lighting, shadow, color and texture, all of which significantly influence the thermal comfort inference results.
Another strategy that has gained attention in recent years is inferring thermal comfort from comfort postures. This approach assesses thermoregulatory responses related to thermal perception, such as stamping the feet or placing the hands around the neck. Previous studies have shown that certain activities [38,39] can be used to anticipate thermal perception. These methods typically use the key anatomical landmarks of an individual's skeleton to determine the body's pose and assess thermal sensitivity. The research in [38] developed a model that classified 12 distinct comfort positions. Such techniques rely heavily on precise coordinate data for crucial joints, which can be difficult to obtain with computer cameras. As the precision of personalized thermal comfort evaluation improves, the intrusiveness of the techniques also tends to rise, intensifying the need for human involvement. Specifically, the imaged region must lie within the camera's operational range, which poses a difficulty for large, freely moving populations. The localization of the vital joints depends on the imaging perspective, the distance to the subject and the individual's behavioral tendencies. In addition, a computer's camera is often set up with a restricted field of view and captures only a section of the body, making it difficult to obtain a comprehensive, unobstructed image of the whole human body; individuals may also not be consistently positioned in front of a computer screen and may sometimes move beyond the camera's range. To address these limitations, a study [40] proposed a thermal adaptation behavior detection technique based on spatial–temporal graph convolutional networks (ST-GCNs) [41]. The researchers combined OpenPose with spatial–temporal graph convolutional networks to build a model that identifies thermal adaptive behavior, choosing surveillance recordings rather than RGB images for skeletal tracking. However, empirical evidence suggests that graph generation in ST-GCNs is subject to several restrictions. The skeleton graph used in ST-GCN is pre-determined by heuristics and captures only the physical arrangement of the human body, so it does not consistently represent the optimal choice for action recognition. The interaction between the two hands, as in activities like 'clapping' and 'reading', is essential for recognizing certain categories; yet, because the hands are far apart in traditional human-body-oriented graphs, ST-GCN has difficulty precisely capturing their interactions. To improve the responsiveness of the technique, we use a self-attention model, the transformer [42], which enhances the sensitivity to subtle behavioral responses.
To address the previously stated limitations, we have developed a personalized thermal comfort model that integrates a transformer, named the two-stream transformer-enhanced adaptive graph convolutional network (2S-TAGCN). Specifically, the backbone adopts a deep graph convolutional network and integrates transformers to enhance performance. Two streams of information, the human body's bones and joints, are input into the network simultaneously. Figure 1 shows the envisioned framework for occupant-behavior-based HVAC control. The model employs surveillance cameras to continuously monitor physiological signals, such as thermal adaptation activities, over prolonged durations in real-world settings. Thermal adaptation behavior is recognized in two steps: human posture estimation and behavior recognition. The corresponding thermal sensory state is then obtained from the analysis of the occupant's thermal adaptation behavior. Finally, the thermal sensation is converted into the expected temperature regulation signal (such as heating or cooling) and fed back to the indoor HVAC control system, forming a closed control loop. Personalized comfort models use individual rather than aggregated data to predict thermal comfort, capturing each occupant's distinct comfort needs and preferences and supporting the delivery of a correspondingly customized thermal comfort level. Using personalized comfort models, a building system can create optimal conditions that improve both thermal comfort and energy efficiency, and such models can adapt to new data acquired from intelligent buildings, allowing them to evolve and improve. The main contributions of this study are summarized as follows.
  • A novel two-stream transformer-enhanced adaptive graph convolutional network for thermal adaptation behavior recognition employing a self-attention mechanism on the spatial graph convolution dimensions is developed.
  • A spatial self-attention module based on a transformer is introduced to evaluate the connections and correlations between bone joints in each video frame.
  • On thermal adaptation action recognition datasets, the performance of the proposed 2S-TAGCN method significantly exceeds that of state-of-the-art methods.
The subsequent sections of this work are structured in the following manner. A comparison of similar works is presented in Section 2. Within Section 3, a full analysis of the processes and procedures that are covered is provided. Section 4 presents the materials and analyzes the experimental outcomes. Lastly, we provide the discussion and concluding thoughts in Section 5 and Section 6, respectively.

2. Related Work

2.1. Skeleton-Based Thermal Adaptive Action Recognition

Sensing research in this area is still at an early stage and focuses on recognizing thermal adaptation behaviors from skeleton-based information. Work in this field can be divided into two categories: algorithms that rely on manually constructed features [38,39] and algorithms that use spatial–temporal graph convolutional networks [40]. Algorithms depending on manually designed features obtain visual input from computer cameras or Kinect sensors; however, these devices have restricted versatility across scenarios and poor robustness. Graph convolutional network approaches use surveillance cameras to collect behavioral data noninvasively and to generate thermal adaptation behavior databases, which promotes the use of deep convolutional networks in this field. The skeletal data are encoded as a graph structure by a spatial–temporal graph convolutional network, eliminating manual feature allocation and traversal and resulting in faster processing than previous techniques. However, graph generation in ST-GCNs has many limitations. The skeleton graph used in ST-GCN is pre-determined by heuristics and depicts only the physical arrangement of the human body, so there is no guarantee that it is the optimal choice for action recognition tasks. The link between the hand and the head, for example, is important for understanding activities such as 'head scratch' and 'put on a hat'; yet, because of the positions of the hands and head in human-body-based graphs, ST-GCN has difficulty accurately capturing the interconnections among these body parts. In addition, the model does not take into account second-order information, such as bone lengths and orientations. To address this problem, several studies [43,44] have proposed improvements to the accuracy of bone-based behavior recognition models.

2.2. Transformers in Computer Vision

Transformer self-attention modules have found numerous applications in computer vision, including video classification [45,46], image convolution [47], image captioning [48], object detection [49], segmentation [50], multimodal tasks [51], generative modeling [52] and action recognition [43]. The transformer's capacity to capture complex interactions in both the spatial and temporal dimensions is essential to its strong performance on these tasks. Based on the input features, action recognition strategies using transformer self-attention operators can be divided into two primary categories: networks that use RGB features and networks that rely on skeleton-based action recognition. The vision transformer (ViT) [46] applies transformers to standard image patches; its success on RGB inputs demonstrates how effective transformers are in this domain. The first effort to apply a transformer model to skeleton-based action recognition was the spatial–temporal transformer network (ST-TR) [43], which uses the transformer self-attention operator to model the dependencies between joints. The primary purpose of this study is to improve the performance of an adaptive graph convolutional network by incorporating a transformer self-attention operator, with the objective of learning dynamic skeletal information and thereby improving the accuracy of recognizing human thermal adaptation behavior.
The transformer generates new body joint embeddings by comparing pairs of joints and combining their embeddings based on how relevant each joint is to other joints. A self-attention mechanism allows better features to be extracted from each joint by accumulating clues from the surrounding joints, dynamically creating relations within and between human-body-based graphs.

3. Transformer-Augmented Adaptive Graph Convolutional Network

The purpose of this study is to increase the accuracy of recognizing thermal adaptation behavior by establishing a two-stream adaptive graph convolutional network with a transformer. In this section, building on earlier research in this area, we present the framework for identifying thermal adaptation behavior and discuss 2S-TAGCN, the two-stream transformer-augmented adaptive graph convolutional network, in depth. Its fundamental elements are an adaptive graph convolutional network (AGCN) and a transformer-augmented adaptive graph convolutional network (TA-GCN), detailed in Section 3.2 and Section 3.3, respectively, together with a comprehensive explanation of the transformer self-attention modules.

3.1. Graph Construction

In earlier skeleton-based action recognition work, the raw skeletal data were represented as a sequence of vectors, each specifying the 2D or 3D coordinates of the corresponding human joint; a complete motion consisted of many such frames, with the frame count varying across samples. This representation is not adequate for accurately portraying motion information. ST-GCN therefore uses a spatiotemporal graph to express the structural information shared by these joints at both the spatial and temporal levels. In addition, the two-stream adaptive graph convolutional network (2S-AGCN) [44] has shown that bone lengths and orientations are often more informative and discriminative for activity recognition, so bone information has been used as extra data for the skeleton dataset. The bone information in this research follows the framework established by the 2S-AGCN model. The spatiotemporal skeleton diagram is shown on the left side of Figure 2: joints are represented as vertices, and edges represent the spatial connections between joints within the human body. The graph is denoted as $G = (V, E)$, where $V = \{v_{ti} \mid t = 1, \dots, T,\; i = 1, \dots, N\}$ is the set of all nodes $v_{ti}$ and $E_S = \{(v_{ti}, v_{tj}) \mid i, j = 1, \dots, N,\; t = 1, \dots, T\}$ is the set of all connections between nodes within a frame (the orange lines in Figure 2, left). The temporal edges $E_T = \{(v_{ti}, v_{(t+1)i}) \mid i = 1, \dots, N,\; t = 1, \dots, T-1\}$ connect the same joint in successive frames.
Video frames are arranged consecutively along the temporal axis, and the coordinate vector of each joint is assigned to the corresponding vertex as its attribute. Figure 2 (right) illustrates the bone graph, which represents the length and orientation of each bone as a vector pointing from its source joint to its target joint. For a bone with source joint $v_1 = (x_1, y_1, z_1)$ and target joint $v_2 = (x_2, y_2, z_2)$, the bone vector is $e_{v_1, v_2} = (x_2 - x_1, y_2 - y_1, z_2 - z_1)$, so each bone is assigned to its target joint. However, the numbers of bones and joints are unequal because the central joint has no bone assigned to it. To keep the input data consistent, a null bone with all values set to zero is added at the central joint, so the bone data have exactly the same shape as the joint data.
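As an illustration of this bone-stream construction, the sketch below derives bone vectors from joint coordinates and pads the null bone at the center. The toy five-joint skeleton and the helper name `joints_to_bones` are hypothetical; the paper uses the OpenPose joint layout.

```python
import numpy as np

# Toy 5-joint skeleton for illustration only. Each pair is
# (source joint, target joint); joint 0 is the center/root.
BONE_PAIRS = [(0, 1), (1, 2), (1, 3), (0, 4)]
CENTER_JOINT = 0

def joints_to_bones(joints):
    """joints: array of shape (C, T, N) holding C-dimensional coordinates of
    N joints over T frames. Returns a bone tensor of the same shape: each bone
    is the vector from its source to its target joint, and the center joint
    carries a null (zero) bone so both streams share one input shape."""
    bones = np.zeros_like(joints)
    for src, dst in BONE_PAIRS:
        bones[:, :, dst] = joints[:, :, dst] - joints[:, :, src]
    bones[:, :, CENTER_JOINT] = 0.0  # null bone at the pivotal joint
    return bones

# Example: 2D coordinates (C = 2) over 30 frames for 5 joints.
joints = np.random.randn(2, 30, 5)
bones = joints_to_bones(joints)
assert bones.shape == joints.shape  # J-stream and B-stream inputs match
```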

3.2. Adaptive Graph Convolutional Network

A stack of spatiotemporal adaptive graph convolutions is applied to the bone and joint graphs described above to extract high-level features. The gathered features are then passed through a global average pooling layer and a softmax classifier to predict the action category. The topology of the graph, like the other network parameters, is optimized through end-to-end learning. Because the graph is individual to each layer and sample, the flexibility of the model is greatly improved, while a residual branch ensures the stability of the initial model.
In the spatial dimension, the graph convolution can be formulated as [44]

$$f_{out} = \sum_{k}^{K_v} W_k f_{in} (A_k + B_k + C_k) \tag{1}$$
where $f_{out}$ denotes the output feature map, a $C \times T \times N$ tensor in which $C$ is the number of channels, $T$ is the temporal length and $N$ is the number of joints. $K_v$ denotes the kernel size of the spatial dimension; based on the partition strategy designed in [41], $K_v$ is set to three. The partitioning is depicted on the right of Figure 2, where the green circle marks the centroid of the skeleton: the joint points within the region enclosed by the curve are divided into three subsets, namely a centripetal subset of neighboring vertices located close to the center of gravity, a centrifugal subset of neighboring vertices located far from the center of gravity, and the vertex itself. $W_k$ is a learnable weight vector implementing the weighting function. $A_k$ is the initial normalized $N \times N$ adjacency matrix describing the anatomical structure of the human body. $B_k$ is an $N \times N$ adjacency matrix learned over the whole training phase, meaning that this graph is trained exclusively on the training data; its elements represent both the existence of connections between two joints and the strength of those connections. $C_k$ is a data-dependent graph that creates a unique topology for each individual sample, using a normalized embedded Gaussian function to determine the correlation between two vertices.
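A minimal PyTorch sketch of this spatial adaptive graph convolution is given below, assuming an embedded-Gaussian formulation of $C_k$ as in 2S-AGCN [44]. The class name, embedding width and residual details are illustrative simplifications, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class AdaptiveGraphConv(nn.Module):
    """Spatial adaptive graph convolution of Eq. (1), sketched after 2S-AGCN.
    A: fixed normalized adjacency (K_v, N, N); B: learned per-layer adjacency;
    C: sample-dependent graph from a normalized embedded Gaussian."""
    def __init__(self, in_ch, out_ch, A, embed_ch=16):
        super().__init__()
        self.K = A.size(0)                          # K_v partitions (3 here)
        self.register_buffer("A", A)                # physical-structure graph A_k
        self.B = nn.Parameter(torch.zeros_like(A))  # learned graph B_k
        self.theta = nn.ModuleList(nn.Conv2d(in_ch, embed_ch, 1) for _ in range(self.K))
        self.phi = nn.ModuleList(nn.Conv2d(in_ch, embed_ch, 1) for _ in range(self.K))
        self.W = nn.ModuleList(nn.Conv2d(in_ch, out_ch, 1) for _ in range(self.K))
        self.residual = (nn.Identity() if in_ch == out_ch
                         else nn.Conv2d(in_ch, out_ch, 1))
        self.bn, self.relu = nn.BatchNorm2d(out_ch), nn.ReLU()

    def forward(self, x):                           # x: (batch, C, T, N)
        b, c, t, n = x.shape
        out = 0
        for k in range(self.K):
            # C_k: similarity of every joint pair in the embedding space.
            q = self.theta[k](x).permute(0, 3, 1, 2).reshape(b, n, -1)
            kmat = self.phi[k](x).reshape(b, -1, n)
            Ck = torch.softmax(q @ kmat, dim=-1)     # (b, N, N)
            graph = self.A[k] + self.B[k] + Ck       # Eq. (1) graph sum
            # Aggregate joint features over the adaptive graph.
            agg = torch.einsum("bctn,bnm->bctm", x, graph)
            out = out + self.W[k](agg)
        return self.relu(self.bn(out) + self.residual(x))

# Example: N = 18 OpenPose joints, K_v = 3, identity placeholder adjacency.
layer = AdaptiveGraphConv(3, 64, torch.eye(18).repeat(3, 1, 1))
out = layer(torch.randn(8, 3, 30, 18))  # (batch, C=3, T=30, N=18) -> (8, 64, 30, 18)
```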

3.3. Transformer-Augmented Adaptive Graph Convolutional Layer

The adaptive graph convolution computed on the provided skeleton data compares neighboring nodes. However, this may not be the most effective way to detect occupants' thermal adaptation behavior. Categories such as 'Scratch one's head' and 'Blow into one's hands' require a stronger connection between the hands and the head, suggesting that the graph structure should depend on the data. To address this issue, we introduce a graph convolutional layer that leverages transformer self-attention. This layer can independently identify the interrelated connections that are crucial for predicting the current action, and the kernel values are predicted dynamically from the identified relationships, similarly to graph convolution. At the sequence level, an identical approach evaluates how every joint changes throughout an action and constructs comprehensive linkages spanning multiple frames. The resulting operator can create dynamic representations covering both the spatial and temporal dimensions.
Moreover, this operator is specifically designed to function as a residual branch, ensuring the stability of the original model. Equation (1) determines the configuration of the adaptive graph from $A_k$, $B_k$ and $C_k$: $A_k$ encodes the existence of connections between two joints, $B_k$ measures the strength of these connections, and $C_k$ verifies the presence of linkages between two vertices. To make the graph structure flexible and include all global joint interactions, we reformulate Equation (1) as
$$f_{out} = \sum_{k}^{K_v} W_k f_{in} (A_k + B_k + C_k) + Z \tag{2}$$
The supplementary term $Z$ is obtained by computing the correlations between every pair of joints in each frame separately. Within each frame, self-attention extracts the low-level information that signifies the interactions between different body parts; these data are used to ascertain whether a correlation exists between two joints and to measure its magnitude. The approach, known as scaled dot-product attention, is defined as follows:
$$z_i^t = \sum_{j} \mathrm{softmax}\!\left(\frac{\alpha_{ij}^{t}}{\sqrt{d_k}}\right) v_j^t \tag{3}$$
where $z_i^t \in \mathbb{R}^{C_{out}}$ (with $C_{out}$ the number of output channels) is the new embedding of node $v_{ti}$. This measures the similarity of two joints in the embedding space. In detail, for each joint (vertex) embedding $w_i \in W = \{w_1, \dots, w_n\}$, a query $q \in \mathbb{R}^{d_q}$, a key $k \in \mathbb{R}^{d_k}$ and a value vector $v \in \mathbb{R}^{d_v}$ are computed independently by trainable linear transformations. A score for each pair of joint embeddings is then obtained by taking the dot product $\alpha_{ij} = q_i \cdot k_j^{T}$, $i, j = 1, \dots, n$, where $n$ is the total number of joints considered; this score indicates how relevant joint $j$ is to joint $i$.
Figure 3 shows the overall data flow of the adaptive graph convolution layer. $A_k$, $B_k$ and $C_k$ are the variables introduced in Equation (2), $W_k$ denotes the learnable weighting function, and $Z$ is the joint-similarity term defined in Equation (3). The kernel size of the convolutions is $1 \times 1$, and $K_v$ is the cardinality of the subsets. The symbol ⊕ denotes element-wise addition, and the symbol ⊗ denotes matrix multiplication.
The interconnections between nodes, represented by the $\alpha_{ij}^{t}$ scores, are forecast dynamically, as shown in Figure 4. Consequently, the correlation structure of the skeleton is not fixed across all actions but is adjusted flexibly for each individual sample. The approach works like a graph convolution on a fully connected graph, except that the kernel values (i.e., the $\alpha_{ij}^{t}$ scores) are predicted dynamically from the skeleton pose.
Given the input feature map $f_{in}$ of size $C_{in} \times T \times N$ obtained from the frames, we first reshape it into $X_V$ of size $T \times C_{in} \times N$. The $T$ dimension is folded into the batch dimension, enabling parameters to be shared across time while each frame undergoes its own transformation. Scaled dot-product attention incorporates a softmax operation and can be computed in matrix form as follows:
$$\mathrm{head}_h(X_V) = \mathrm{Softmax}\!\left(\frac{(X_V W_q)(X_V W_k)^{T}}{\sqrt{d_k^h}}\right)(X_V W_v) \tag{4}$$
where the parameters $W_q \in \mathbb{R}^{C_{in} \times N_h \times d_q^h}$, $W_k \in \mathbb{R}^{C_{in} \times N_h \times d_k^h}$ and $W_v \in \mathbb{R}^{C_{in} \times N_h \times d_v^h}$ are shared across all nodes. Their products yield $Q \in \mathbb{R}^{T \times N_h \times d_q^h \times N}$, $K \in \mathbb{R}^{T \times N_h \times d_k^h \times N}$ and $V \in \mathbb{R}^{T \times N_h \times d_v^h \times N}$. To improve performance, multihead attention is commonly used: attention (a head) is applied several times with different learnable parameters, and the outputs of all heads are then concatenated and projected as follows:
$$Z = \mathrm{SelfAttention}(V) = \mathrm{Concat}(\mathrm{head}_1, \dots, \mathrm{head}_{N_h})\, W_o \tag{5}$$
where $N_h$ is the number of heads and $W_o$ is a learnable linear transformation applied to the concatenated head outputs. The embedding extraction is repeated $N_h$ times, each with a unique set of trainable parameters, to realize multihead attention. The output is then reshaped to $C_{out} \times T \times N$.
Rather than directly replacing $A_k$, $B_k$ and $C_k$ with $Z$, we add $Z$ to the formulation, improving the flexibility of the model without sacrificing its initial performance.
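The following PyTorch sketch summarizes Equations (3)–(5): per-frame multihead self-attention over the $N$ joints, with $T$ folded into the batch. The module name and the choice $d_k = C_{out}/N_h$ are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class SpatialSelfAttention(nn.Module):
    """Per-frame multihead self-attention over the N joints (Eqs. (3)-(5)).
    T is folded into the batch so parameters are shared across time while
    each frame gets its own attention map."""
    def __init__(self, in_ch, out_ch, heads=8):
        super().__init__()
        self.h, self.dk = heads, out_ch // heads
        self.q = nn.Linear(in_ch, out_ch)   # W_q
        self.k = nn.Linear(in_ch, out_ch)   # W_k
        self.v = nn.Linear(in_ch, out_ch)   # W_v
        self.o = nn.Linear(out_ch, out_ch)  # W_o

    def forward(self, x):                   # x: (batch, C_in, T, N)
        b, c, t, n = x.shape
        xv = x.permute(0, 2, 3, 1).reshape(b * t, n, c)      # fold T into batch
        def split(z):                        # (b*t, N, C_out) -> (b*t, h, N, d_k)
            return z.reshape(b * t, n, self.h, self.dk).transpose(1, 2)
        q, k, v = split(self.q(xv)), split(self.k(xv)), split(self.v(xv))
        # Scaled dot-product attention: alpha_ij scores, softmax, weighted sum.
        att = torch.softmax(q @ k.transpose(-2, -1) / self.dk ** 0.5, dim=-1)
        z = (att @ v).transpose(1, 2).reshape(b * t, n, -1)  # concat heads
        z = self.o(z)                                        # project: Eq. (5)
        return z.reshape(b, t, n, -1).permute(0, 3, 1, 2)    # (b, C_out, T, N)

# Z from this module is added to the graph convolution output as in Eq. (2).
```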

3.4. Attention-Augmented Adaptive Graph Convolutional Network

The network is composed of units that perform spatial–temporal graph convolution operations. Each unit integrates convolutions in both the spatial and temporal dimensions, as shown in Figure 4. Attention augmentation amplifies the spatial dimension at the skeleton level. The output of the spatial GCN is passed through a batch normalization (BN) layer and a rectified linear unit (ReLU) layer. The temporal convolution is performed as in the ST-GCN model, using a $K_t \times 1$ kernel on the $C \times T \times N$ feature maps.
The output of the temporal graph convolution is likewise passed through a batch normalization layer and a ReLU layer. To address overfitting, we include a dropout layer with a dropout rate of 0.5. As stated before, a residual connection guarantees the stability of the training process.
The attention-augmented adaptive graph convolutional model is composed of nine units in each stream, with channel sizes of 64, 64, 64, 128, 128, 128, 256, 256 and 256. Input data are standardized by a data BN layer, and a global average pooling layer is added before the softmax classifier so that feature maps of different sizes from different samples are standardized. The network then produces high-level feature maps, which are categorized into the appropriate action category by a standard softmax classifier.
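Assembling the pieces, one unit of a stream can be sketched as below, reusing the `AdaptiveGraphConv` and `SpatialSelfAttention` sketches from Sections 3.2 and 3.3. The kernel size $K_t = 9$ and the other structural details are assumptions for illustration.

```python
import torch.nn as nn

class TAGCNBlock(nn.Module):
    """One spatial-temporal unit: attention-augmented spatial graph convolution
    followed by a K_t x 1 temporal convolution with BN, plus dropout and a
    residual connection (builds on the two sketch modules defined above)."""
    def __init__(self, in_ch, out_ch, A, kt=9, stride=1, dropout=0.5):
        super().__init__()
        self.spatial = AdaptiveGraphConv(in_ch, out_ch, A)
        self.attn = SpatialSelfAttention(in_ch, out_ch)   # residual Z branch
        self.temporal = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, (kt, 1), (stride, 1), ((kt - 1) // 2, 0)),
            nn.BatchNorm2d(out_ch),
        )
        self.drop = nn.Dropout(dropout)   # dropout rate 0.5 as in the text
        self.relu = nn.ReLU()
        self.residual = (nn.Identity() if in_ch == out_ch and stride == 1 else
                         nn.Sequential(nn.Conv2d(in_ch, out_ch, 1, (stride, 1)),
                                       nn.BatchNorm2d(out_ch)))

    def forward(self, x):                 # x: (batch, C, T, N)
        y = self.spatial(x) + self.attn(x)    # Eq. (2): graph conv + Z
        y = self.drop(self.temporal(y))       # K_t x 1 temporal convolution
        return self.relu(y + self.residual(x))
```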

3.5. Two-Stream Network

The two-stream network design is identical to that of the 2S-AGCN; in addition to joint data, we use bone data to improve the detection of thermal adaptation actions. As stated in Section 3.1, the bone graph is identical in shape to the joint graph, so the bone network can be constructed using the joint network as a template. To differentiate between the joint and bone networks, we refer to them as the J-stream and B-stream, respectively. Figure 5 displays the overall architecture: the joint data and bone data are first fed into their matching networks to obtain recognition scores, and the scores are then fused through the softmax classifier to determine the final behavior label.
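A minimal sketch of this score-level fusion is shown below. Summing the two softmax outputs is a common fusion rule, assumed here because the exact fusion weights are not spelled out in the text.

```python
import torch

@torch.no_grad()
def two_stream_predict(j_model, b_model, joints, bones):
    """Fuse J-stream and B-stream: run each trained network on its own input
    and sum the softmax class scores before taking the argmax."""
    j_scores = torch.softmax(j_model(joints), dim=-1)  # joint-stream scores
    b_scores = torch.softmax(b_model(bones), dim=-1)   # bone-stream scores
    return (j_scores + b_scores).argmax(dim=-1)        # final behavior label
```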

4. Model Evaluation

Experiments were conducted to ascertain the viability of the proposed methodology. We were specifically interested in whether the suggested technique could analyze thermal adaptation behavior by comparing the inferred activities at different temperatures, and whether the inferred states could serve as indicators for forecasting thermal comfort.
In this section, the dataset on which the experiments were carried out is first introduced. Section 4.2 shows the parameters and details for network training in the experiment. The final subsection presents the results of the experiments.

4.1. Database

We used a dataset of videos collected for thermal comfort research initiatives, focusing specifically on thermal adaptation actions (TAAs) [40]. The dataset contains around 14,800 valid videos showcasing 16 distinct thermal adaptation behaviors. Specifically, the adaptive behaviors of occupants experiencing heat included 'Fan self with an object', 'Fan self with hands', 'Fan self with one's shirt', 'Roll up sleeves', 'Wipe perspiration', 'Scratch head', 'Take off a jacket' and 'Take off a hat/cap'. The adaptive behaviors of occupants who felt cold included 'Sneeze/cough', 'Stamp one's feet', 'Rub one's hands', 'Blow into one's hands', 'Cross one's arms', 'Narrow one's shoulders', 'Put on a jacket' and 'Put on a hat/cap'. Each video is between 5 and 10 s long. The videos, captured by Kinect sensors and security cameras, were resized to a resolution of 340 × 256, and the frame rate was adjusted to 30 frames per second (FPS). During training, we used OpenPose [32] to extract raw skeletal information and construct datasets composed exclusively of skeletal data, with each vector denoting the 2D coordinates of the relevant human joint.

4.2. Experimental Settings

All experiments used the PyTorch deep learning framework. The optimization technique was stochastic gradient descent (SGD) with Nesterov momentum set to 0.9, and the batch size was 64. The loss function for gradient backpropagation was cross-entropy. The initial learning rate was set to 0.1 and decreased by a factor of 10 at epochs 60 and 90. These settings were selected based on their demonstrated efficacy on the ST-GCN network.
In the AGCN modules, the learning rate was raised linearly during the first epoch as a warmup. To mitigate overfitting, we regularized the attention weights in the transformer network with a dropout technique that randomly eliminates columns of the attention logit matrix. The multihead attention mechanism was configured with 8 heads in all experiments, and the $d_q$, $d_k$ and $d_v$ embedding dimensions were set to $0.25 \times C_{out}$ in each layer; no grid search was applied to these parameters. In terms of model design, each stream included 9 layers with channel dimensions of 64, 64, 64, 128, 128, 128, 256, 256 and 256. Before the softmax classifier, the input coordinates were batch-normalized and a global average pooling layer was applied, and each stream was trained using the conventional cross-entropy loss.
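The optimizer and schedule described above map directly onto standard PyTorch components, as in the sketch below. The toy model, synthetic batch and total epoch count are placeholders, and the first-epoch warmup is omitted for brevity.

```python
import torch
import torch.nn as nn

# Training configuration mirroring Section 4.2; `model` stands in for one
# 2S-TAGCN stream (a toy module here so the snippet runs standalone).
model = nn.Linear(256, 16)                      # placeholder for one stream
criterion = nn.CrossEntropyLoss()               # cross-entropy loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, nesterov=True)
# Drop the learning rate by a factor of 10 at epochs 60 and 90.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[60, 90], gamma=0.1)

for epoch in range(120):                        # total epoch count is assumed
    # One synthetic batch of size 64 stands in for the real dataloader.
    for features, labels in [(torch.randn(64, 256), torch.randint(0, 16, (64,)))]:
        optimizer.zero_grad()
        loss = criterion(model(features), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```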

4.3. Results

As stated in Section 3.3, the adaptive graph convolutional block has four types of graphs: A, B, C and T. We manually excluded one graph at a time and report the resulting performance in Table 1. This table demonstrates the usefulness of adaptively learning the graph for action recognition and highlights the drop in performance when any one of the graphs is removed. Another notable advancement is the use of second-order information. We evaluate the performance of each input data type separately (Js-AGCN and Bs-AGCN in Table 2) and compare it to the performance achieved when the two data types are combined (2S-AGCN in Table 2), as described in Section 3.5. The two-stream strategy yields superior results compared to either one-stream solution.
Compared with the ST-GCN model, the 2S-TAGCN model shows significant advantages for most thermal adaptation behaviors, especially those with smaller motion ranges, such as narrowing the shoulders, crossing the arms, stamping and head scratching, as shown in Table 3. The ST-GCN model has its lowest accuracy when identifying the heat and cold discomfort behaviors of wiping perspiration and narrowing the shoulders, reaching only 70.21% and 71.28%, respectively; the 2S-TAGCN model improves the recognition accuracy of these two behaviors to 72.74% and 93.62%, respectively.
The precision for wiping perspiration was only 72.74%; even the improved model does not achieve high accuracy for this behavior, which can be attributed to the intricate nature of the movement and the substantial diversity of its execution among individuals. Figure 6 visually depicts the recognition outcomes of the proposed method for different thermal adaptation behaviors in surveillance videos: Figure 6a presents key frames of the action 'Take off a jacket', while Figure 6b shows key frames of the action 'Put on a hat/cap'.
The accuracy of thermal sensation prediction was assessed using the model established in our prior investigation. The probability of a thermal comfort prediction, $P_T$, was computed using Equation (6):
$$P_T = P(C \mid A_i) \times P_{accuracy} \tag{6}$$
where $P(C \mid A_i)$ is the probability that thermal comfort state $C$ is linked to the recognized behavior $A_i$, and $P_{accuracy}$ is the behavior recognition accuracy, in line with previous work.
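As a worked example of Equation (6), with purely illustrative numbers:

```python
def thermal_sensation_probability(p_behavior_given_sensation, p_accuracy):
    """Eq. (6): scale the (fixed) probability that an observed action A_i
    implies comfort state C by the behavior recognition accuracy."""
    return p_behavior_given_sensation * p_accuracy

# Illustrative values only: if P(C|A_i) = 0.90 and the recognizer is 96.56%
# accurate on that action, the prediction confidence is about 0.869.
print(thermal_sensation_probability(0.90, 0.9656))
```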
The accuracy of forecasting thermal perception therefore depends on the quantified behavior recognition accuracy and on the probability that a given behavioral action is linked to a thermal sensation. The latter probability is fixed, whereas the former varies with the precision of behavior recognition. The experimental results indicate a substantial improvement in behavior identification accuracy, yielding a comparable increase in prediction accuracy.

5. Discussion

The data presented in Table 2 show that the average accuracy of the proposed method at identifying the 16 thermal adaptation behaviors reaches 96.56%. This means that occupant data acquired by surveillance cameras acting as sensors can be used to effectively analyze and infer the thermal adaptation behavior of the occupants of a room. The model performs non-invasive identification of thermal adaptation behavior without requiring special equipment to be installed in the scene or detection devices to be worn by the occupants.
The aim of this study was to identify 16 thermal adaptation behaviors related to occupants' thermal comfort. Table 3 shows the improvement in each behavior's recognition accuracy compared with the ST-GCN method. The thermal adaptation behavior with the lowest recognition accuracy is 'Wipe perspiration', at only 72.74%; the highest accuracies are for 'Take off a jacket' and 'Cross one's arms', both above 99%. This reflects differences in the sensitivity of the proposed method to different actions, and future work will consider more appropriate strategies for classifying specific thermal sensory behaviors. On the other hand, the scale and diversity of the data can improve the generalization capability of the model; in our next work, we will therefore explore more visual behaviors related to human thermal sensation while increasing the size and diversity of the dataset.
When using cameras to capture occupants' minute movements, potential privacy violations are worth considering. The main aim of this study was to explore a method of optimizing thermal comfort and reducing energy use that could be deployed in private spaces with the permission of the occupants. To avoid privacy issues as far as possible, the method proposed in this article infers information only from the skeletal characteristics of indoor occupants; it does not collect private information such as appearance, clothing or gender through the cameras. The model thus focuses only on the skeletal motion that occurs, not on the person performing the motion. Finally, the output of the model is only the thermal sensation, which in subsequent applications is converted directly into input for the temperature control module of the HVAC system. The identification process therefore does not involve any personal information.
The main limitation of this study is that, although it achieved high accuracy at identifying 16 types of thermal adaptation behaviors, the model has a very large number of parameters and high complexity. Future work will consider further pruning and optimization of the model so that the solution can be integrated into existing terminal equipment for thermal comfort behavior inference. Integrating the thermal comfort identification algorithm into a building's existing monitoring equipment can effectively improve the thermal comfort of the building's occupants while dynamically regulating the air-conditioning system to reduce the building's energy consumption.

6. Conclusions

Accurate inference of building occupants' thermal comfort enables buildings to dynamically manage HVAC systems to save energy and reduce emissions. Based on indoor surveillance camera data, this paper proposes a non-contact method of inferring the thermal comfort of the occupants of a building, which is expected to minimize energy consumption while ensuring comfort. The skeletal data in this model are organized in a graph structure that is parameterized and incorporated into a trainable, updatable network. The model integrates both first-order and second-order information from the skeleton data, emphasizing the importance of bone orientation and length in bone-based motion recognition models. The model underwent a comprehensive evaluation on the thermal adaptation behavior dataset, and its validity in recognizing thermal adaptation behaviors was verified.
The main contributions of this article are as follows: (1) A transformer-based adaptive graph convolutional network is proposed that can adapt to different action recognition tasks and thermal adaptation behaviors by learning the graph topology in different GCN layers and skeleton samples in an end-to-end manner. (2) The second-order information of the skeleton data is articulated and coupled with the first-order information, considerably improving recognition performance. (3) Compared with existing models, the 2S-TAGCN model greatly improves the accuracy of recognizing skeleton-based thermal adaptation behaviors on large-scale datasets.
By employing indoor monitoring equipment and deep learning algorithms to identify the thermal adaptation behavior of occupants, we can effectively monitor and predict their thermal comfort, thereby enabling intelligent adjustments to the air-conditioning system’s operating status. This approach not only offers the potential to reduce buildings’ energy consumption but also enhances the personalized thermal comfort of occupants within the indoor environment.

Author Contributions

Writing—original draft preparation, Y.W.; software, Y.W.; methodology, W.D.; formal analysis, J.L.; validation, D.S.; writing—review and editing, P.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant Nos. 62073201 and 62173216).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data have not been made public due to privacy restrictions. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to acknowledge the School of Information Science and Engineering, Shandong Normal University, for equipment support.

Conflicts of Interest

The authors declare no conflicts of interest.

Nomenclature

⊕	element-wise matrix addition
⊗	matrix multiplication
G	the graph representation of the human body
V	the set of all nodes
$E_S$	spatial connections between nodes
$E_T$	temporal edges
C	the number of channels
T	the temporal length
N	the total number of nodes
$A_k$	joint connection matrix
$B_k$	connection strength matrix
$C_k$	vertex connection matrix
Z	the correlations between joints
$N_h$	the number of heads
$W_o$	the learnable linear transformation
$W_k$	the weight matrix
$K_v$	the kernel size of the spatial dimension
$K_t$	the kernel size of the temporal dimension
$C_{in}$	the number of input channels
$C_{out}$	the number of output channels
$f_{in}$	input feature map
$f_{out}$	output feature map
ReLU	rectified linear unit
ST-GCN	spatial–temporal graph convolutional network
AGCN	adaptive graph convolutional network
2S-AGCN	two-stream adaptive graph convolutional network
2S-TAGCN	two-stream transformer-enhanced adaptive graph convolutional network
TA-GCN	transformer-augmented adaptive graph convolutional network
TAAs	thermal adaptation actions
FPS	frames per second
BN	batch normalization
SGD	stochastic gradient descent
J-stream	the joint network
B-stream	the bone network

References

  1. Allouhi, A.; El Fouih, Y.; Kousksou, T.; Jamil, A.; Zeraouli, Y.; Mourad, Y. Energy consumption and efficiency in buildings: Current status and future trends. J. Clean. Prod. 2015, 109, 118–130. [Google Scholar] [CrossRef]
  2. United Nations Environment Programme. 2020 Global Status Report for Buildings and Construction: Towards a Zero-Emission, Efficient and Resilient Buildings and Construction Sector; Global Alliance for Buildings and Construction: Nairobi, Kenya, 2020. [Google Scholar]
  3. Olu-Ajayi, R.; Alaka, H.; Sulaimon, I.; Sunmola, F.; Ajayi, S. Building energy consumption prediction for residential buildings using deep learning and other machine learning techniques. J. Build. Eng. 2022, 45, 103406. [Google Scholar] [CrossRef]
  4. Ma, Z.; Zhao, D.; She, C.; Yang, Y.; Yang, R. Personal thermal management techniques for thermal comfort and building energy saving. Mater. Today Phys. 2021, 20, 100465. [Google Scholar] [CrossRef]
  5. Yang, B.; Liu, Y.; Liu, P.; Wang, F.; Cheng, X.; Lv, Z. A novel occupant-centric stratum ventilation system using computer vision: Occupant detection, thermal comfort, air quality, and energy savings. Build. Environ. 2023, 237, 110332. [Google Scholar] [CrossRef]
  6. Li, W.; Zhang, J.; Zhao, T. Indoor thermal environment optimal control for thermal comfort and energy saving based on online monitoring of thermal sensation. Energy Build. 2019, 197, 57–67. [Google Scholar] [CrossRef]
  7. Lopez, G.; Aoki, T.; Nkurikiyeyezu, K.; Yokokubo, A. Model for thermal comfort and energy saving based on individual sensation estimation. arXiv 2020, arXiv:2003.04311. [Google Scholar] [CrossRef]
  8. Li, W.; Zhang, J.; Zhao, T.; Ren, J. Experimental study of an indoor temperature fuzzy control method for thermal comfort and energy saving using wristband device. Build. Environ. 2021, 187, 107432. [Google Scholar] [CrossRef]
  9. Kong, X.; Chang, Y.; Li, N.; Li, H.; Li, W. Comparison study of thermal comfort and energy saving under eight different ventilation modes for space heating. In Proceedings of the Building Simulation; Springer: Berlin/Heidelberg, Germany, 2022; Volume 15, pp. 1323–1337. [Google Scholar]
  10. Qi, Y.; Wang, R.; Zhao, C.; Ding, C.; Du, C.; Zhang, J.; Zhang, X.; Chen, X.; Zhang, M.; Bie, Q.; et al. A personalized regression model for predicting thermal sensation based on local skin temperature in moderate summer conditions. Energy Build. 2023, 301, 113719. [Google Scholar] [CrossRef]
  11. Fanger, P.O. Thermal Comfort: Analysis and Applications in Environmental Engineering; Danish Technical Press: Copenhagen, Denmark, 1970. [Google Scholar]
  12. Zhang, Y.; Zhao, R. Relationship between thermal sensation and comfort in non-uniform and dynamic environments. Build. Environ. 2009, 44, 1386–1391. [Google Scholar] [CrossRef]
  13. Nicol, F.; Humphreys, M.; Roaf, S. Adaptive Thermal Comfort: Principles and Practice; Routledge: Abingdon, UK, 2012. [Google Scholar]
  14. ANSI/ASHRAE Standard 55-2013; Thermal Environmental Conditions for Human Occupancy. American Society of Heating, Refrigerating, and Air-Conditioning Engineers, Inc.: Atlanta, GA, USA, 2013.
  15. Singh, M.K.; Ooka, R.; Rijal, H.B.; Kumar, S.; Kumar, A.; Mahapatra, S. Progress in thermal comfort studies in classrooms over last 50 years and way forward. Energy Build. 2019, 188, 149–174. [Google Scholar] [CrossRef]
  16. Liu, S.; Schiavon, S.; Das, H.P.; Jin, M.; Spanos, C.J. Personal thermal comfort models with wearable sensors. Build. Environ. 2019, 162, 106281.1–106281.17. [Google Scholar] [CrossRef]
  17. Wang, J.; Wang, Z.; de Dear, R.; Luo, M.; Ghahramani, A.; Lin, B. The uncertainty of subjective thermal comfort measurement. Energy Build. 2018, 181, 38–49. [Google Scholar] [CrossRef]
  18. Cheng, X.; Yang, B.; Hedman, A.; Olofsson, T.; Li, H.; Van Gool, L. NIDL: A pilot study of contactless measurement of skin temperature for intelligent building. Energy Build. 2019, 198, 340–352. [Google Scholar] [CrossRef]
  19. Kim, Y.; Han, J.; Chun, C. Evaluation of comfort in subway stations via electroencephalography measurements in field experiments. Build. Environ. 2020, 183, 107130. [Google Scholar] [CrossRef]
  20. Li, W.; Zhang, J.; Zhao, T.; Liang, R. Experimental research of online monitoring and evaluation method of human thermal sensation in different active states based on wristband device. Energy Build. 2018, 173, 613–622. [Google Scholar] [CrossRef]
  21. Wu, M.; Li, H.; Qi, H. Using electroencephalogram to continuously discriminate feelings of personal thermal comfort between uncomfortably hot and comfortable environments. Indoor Air 2020, 30, 534–543. [Google Scholar] [CrossRef]
  22. Choi, H.; Um, C.Y.; Kang, K.; Kim, H.; Kim, T. Review of vision-based occupant information sensing systems for occupant-centric control. Build. Environ. 2021, 203, 108064. [Google Scholar] [CrossRef]
  23. Cosma, A.C.; Simha, R. Using the contrast within a single face heat map to assess personal thermal comfort. Build. Environ. 2019, 160, 106163. [Google Scholar] [CrossRef]
  24. Jazizadeh, F.; Jung, W. Personalized thermal comfort inference using RGB video images for distributed HVAC control. Appl. Energy 2018, 220, 829–841. [Google Scholar] [CrossRef]
  25. Li, D.; Menassa, C.C.; Kamat, V.R. Robust non-intrusive interpretation of occupant thermal comfort in built environments with low-cost networked thermal cameras. Appl. Energy 2019, 251, 113336. [Google Scholar] [CrossRef]
  26. Li, D.; Menassa, C.C.; Kamat, V.R. Non-intrusive interpretation of human thermal comfort through analysis of facial infrared thermography. Energy Build. 2018, 176, 246–261. [Google Scholar] [CrossRef]
  27. Metzmacher, H.; Wölki, D.; Schmidt, C.; Frisch, J.; van Treeck, C. Real-time human skin temperature analysis using thermal image recognition for thermal comfort assessment. Energy Build. 2018, 158, 1063–1078. [Google Scholar] [CrossRef]
  28. Vesely, M.; Cieszczyk, A.; Zhao, Y.; Zeiler, W. Low cost infrared array as a thermal comfort sensor. In Proceedings of the International Conference CISBAT 2015 Future Buildings and Districts Sustainability from Nano to Urban Scale, LESO-PB, EPFL, Number CONF, Lausanne, Switzerland, 9–11 September 2015; pp. 393–398. [Google Scholar]
  29. Ranjan, J.; Scott, J. ThermalSense: Determining dynamic thermal comfort preferences using thermographic imaging. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Heidelberg, Germany, 12–16 September 2016; pp. 1212–1222. [Google Scholar]
  30. Vissers, D.; Zeiler, W. The user as sensor to reach for optimal individual comfort and reduced energy consumption. In Proceedings of the PLEA 2012: Towards An Environmentally Responsible Architecture, Lima, Peru, 7–9 November 2012. [Google Scholar]
  31. Cosma, A.C.; Simha, R. Thermal comfort modeling in transient conditions using real-time local body temperature extraction with a thermographic camera. Build. Environ. 2018, 143, 36–47. [Google Scholar] [CrossRef]
  32. Cao, Z.; Simon, T.; Wei, S.E.; Sheikh, Y. Realtime multi-person 2d pose estimation using part affinity fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7291–7299. [Google Scholar]
  33. Cosma, A.C.; Simha, R. Machine learning method for real-time non-invasive prediction of individual thermal preference in transient conditions. Build. Environ. 2019, 148, 372–383. [Google Scholar] [CrossRef]
  34. Cheng, X.; Yang, B.; Olofsson, T.; Liu, G.; Li, H. A pilot study of online non-invasive measuring technology based on video magnification to determine skin temperature. Build. Environ. 2017, 121, 1–10. [Google Scholar] [CrossRef]
  35. Cheng, X.; Yang, B.; Tan, K.; Isaksson, E.; Li, L.; Hedman, A.; Olofsson, T.; Li, H. A contactless measuring method of skin temperature based on the skin sensitivity index and deep learning. Appl. Sci. 2019, 9, 1375. [Google Scholar] [CrossRef]
  36. Jung, W.; Jazizadeh, F. Vision-based thermal comfort quantification for HVAC control. Build. Environ. 2018, 142, 513–523. [Google Scholar] [CrossRef]
  37. Wu, H.Y.; Rubinstein, M.; Shih, E.; Guttag, J.; Durand, F.; Freeman, W. Eulerian video magnification for revealing subtle changes in the world. ACM Trans. Graph. (TOG) 2012, 31, 1–8. [Google Scholar] [CrossRef]
  38. Yang, B.; Cheng, X.; Dai, D.; Olofsson, T.; Li, H.; Meier, A. Real-time and contactless measurements of thermal discomfort based on human poses for energy efficient control of buildings. Build. Environ. 2019, 162, 106284.1–106284.10. [Google Scholar] [CrossRef]
  39. Meier, A.; Dyer, W.; Graham, C. Using human gestures to control a building’s heating and cooling system. In Proceedings of the International Conference on Energy Efficiency in Domestic Appliances and Lighting (EEDAL’17), Irvine, CA, USA, 13–15 September 2017; p. 627. [Google Scholar]
  40. Duan, W.; Wang, Y.; Li, J.; Zheng, Y.; Duan, P. Real-time surveillance-video-based personalized thermal comfort recognition. Energy Build. 2021, 244, 110989. [Google Scholar] [CrossRef]
  41. Yan, S.; Xiong, Y.; Lin, D. Spatial temporal graph convolutional networks for skeleton-based action recognition. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018. [Google Scholar]
  42. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar] [CrossRef]
  43. Plizzari, C.; Cannici, M.; Matteucci, M. Skeleton-based action recognition via spatial and temporal transformer networks. Comput. Vis. Image Underst. 2021, 208, 103219. [Google Scholar] [CrossRef]
  44. Shi, L.; Zhang, Y.; Cheng, J.; Lu, H. Two-stream adaptive graph convolutional networks for skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 12026–12035. [Google Scholar]
  45. Wang, X.; Girshick, R.; Gupta, A.; He, K. Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7794–7803. [Google Scholar]
  46. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  47. Bello, I.; Zoph, B.; Vaswani, A.; Shlens, J.; Le, Q.V. Attention augmented convolutional networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 3286–3295. [Google Scholar]
  48. He, S.; Liao, W.; Tavakoli, H.R.; Yang, M.; Rosenhahn, B.; Pugeault, N. Image captioning through image transformer. In Proceedings of the Asian Conference on Computer Vision, Kyoto, Japan, 30 November–4 December 2020. [Google Scholar]
  49. Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-end object detection with transformers. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 213–229. [Google Scholar]
  50. Huang, Z.; Wang, X.; Huang, L.; Huang, C.; Wei, Y.; Liu, W. CCNet: Criss-cross attention for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 603–612. [Google Scholar]
  51. Lee, S.; Yu, Y.; Kim, G.; Breuel, T.; Kautz, J.; Song, Y. Parameter efficient multimodal transformers for video representation learning. arXiv 2020, arXiv:2012.04124. [Google Scholar]
  52. Van den Oord, A.; Kalchbrenner, N.; Espeholt, L.; Vinyals, O.; Graves, A.; Kavukcuoglu, K. Conditional image generation with pixelcnn decoders. Adv. Neural Inf. Process. Syst. 2016, 29. [Google Scholar] [CrossRef]
Figure 1. The envisioned framework for occupant-behavior-based HVAC control.
Figure 2. The graph used in the ST-GCN model (left) and the bone graph used in the 2S-AGCN model (right).
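In the bone stream, each bone is the coordinate difference between a joint and its parent in the skeleton graph. The following is a minimal Python/NumPy sketch of this conversion; the PARENTS table here is a hypothetical OpenPose-style parent assignment chosen for illustration, not necessarily the exact pairing used in the paper's bone graph.

```python
import numpy as np

# Hypothetical parent index for each of 18 OpenPose-style joints; the root
# joint is its own parent, so its bone vector is zero. The paper's actual
# bone pairing may differ.
PARENTS = [1, 1, 1, 2, 3, 1, 5, 6, 2, 8, 9, 5, 11, 12, 0, 0, 14, 15]

def joints_to_bones(joints: np.ndarray) -> np.ndarray:
    """Convert joint coordinates of shape (T, V, C) into bone vectors.

    Each bone is the difference between a joint and its parent; this is
    the second-order input fed to the bone stream of a two-stream model.
    """
    bones = np.zeros_like(joints)
    for v, parent in enumerate(PARENTS):
        bones[:, v] = joints[:, v] - joints[:, parent]
    return bones
```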
Figure 3. Illustration of a transformer-augmented adaptive graph convolutional layer.
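The adaptive graph convolutional layer of Figure 3 combines a fixed physical adjacency (A), a freely learned adjacency (B) and a data-dependent adjacency (C); these are the modules ablated in Table 1. Below is a deliberately simplified, single-subset PyTorch-style sketch under stated assumptions: C is computed from a time-pooled embedded-Gaussian similarity, and the multi-subset kernels and residual connections of the full layer are omitted.

```python
import torch
import torch.nn as nn

class AdaptiveGraphConv(nn.Module):
    """Minimal single-subset sketch of an adaptive graph convolution.

    A: fixed, normalized skeleton adjacency (the physical graph);
    B: a freely learned global adjacency, initialized to zero;
    C: a data-dependent adjacency from embedded-Gaussian similarity.
    Dropping one of the three terms corresponds to the wo/A, wo/B and
    wo/C ablations reported in Table 1.
    """

    def __init__(self, in_ch, out_ch, A):
        super().__init__()
        self.register_buffer("A", A)                 # (V, V), fixed
        self.B = nn.Parameter(torch.zeros_like(A))   # (V, V), learned
        self.theta = nn.Conv2d(in_ch, in_ch // 4, 1)  # assumes in_ch >= 4
        self.phi = nn.Conv2d(in_ch, in_ch // 4, 1)
        self.conv = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):                            # x: (N, C, T, V)
        # Data-dependent term C: pairwise joint similarity, pooled over time.
        q = self.theta(x).mean(2)                    # (N, C', V)
        k = self.phi(x).mean(2)                      # (N, C', V)
        Cmat = torch.softmax(q.transpose(1, 2) @ k, dim=-1)  # (N, V, V)
        adj = self.A + self.B + Cmat                 # combined adjacency
        y = torch.einsum("nctv,nvw->nctw", x, adj)   # aggregate neighbors
        return self.conv(y)
```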
Figure 4. Joint attention calculation based on the transformer.
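The joint attention of Figure 4 follows the standard scaled dot-product formulation of the transformer [42]. A minimal single-head sketch, assuming per-frame joint features of shape (N, V, C) and externally supplied projection matrices:

```python
import torch
import torch.nn.functional as F

def joint_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention across skeleton joints.

    x: (N, V, C) features for V joints; Wq, Wk, Wv: (C, d) projections.
    Each joint's output is a relevance-weighted mixture of all joints,
    letting the model relate physically distant joints in one step.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv                      # (N, V, d)
    scores = q @ k.transpose(1, 2) / k.shape[-1] ** 0.5   # (N, V, V)
    attn = F.softmax(scores, dim=-1)                      # attention over joints
    return attn @ v                                       # (N, V, d)
```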
Figure 5. The architecture of the two-stream transformer-augmented adaptive graph convolutional network. ⨁ denotes the element-wise addition.
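The element-wise addition in Figure 5 can be read as late fusion: each stream scores the action classes independently, and the softmax outputs are summed before the final prediction. A minimal sketch, assuming each stream already outputs class logits:

```python
import torch

def fuse_two_streams(logits_joint, logits_bone):
    """Late fusion of the joint stream and the bone stream.

    Element-wise addition (the ⨁ in Figure 5) of the two streams'
    softmax scores, followed by argmax for the predicted class.
    """
    score = torch.softmax(logits_joint, -1) + torch.softmax(logits_bone, -1)
    return score.argmax(-1)
```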
Figure 6. Visualization of partial thermal adaptation behavior recognition results.
Table 1. Comparisons of the validation accuracy when adding adaptive graph convolutional blocks with or without A, B and C; wo/X indicates that the X module was deleted.

Method         Accuracy (%)
ST-GCN         87.66
TA-GCN wo/A    79.29
TA-GCN wo/B    73.81
TA-GCN wo/C    85.79
TA-GCN wo/T    90.94
TA-GCN         93.96
Table 2. Comparisons of the validation accuracy with different input modalities.

Method      Accuracy (%)
Js-TAGCN    93.96
Bs-TAGCN    93.15
2S-TAGCN    96.56
Table 3. Validation accuracy comparisons of 16 thermal adaptation actions.

No.   Action                       ST-GCN (%)   2S-TAGCN (%)
1     Fan self with an object      93.75        81.25
2     Fan self with hands          85.42        79.71
3     Fan self with one’s shirt    89.58        87.50
4     Roll up sleeves              95.83        93.75
5     Wipe perspiration            70.21        72.74
6     Scratch head                 81.25        93.75
7     Take off a jacket            98.67        99.65
8     Take off a hat/cap           98.85        97.92
9     Sneeze/cough                 97.87        97.92
10    Stamp one’s feet             85.42        98.96
11    Rub one’s hands              89.36        85.11
12    Blow into one’s hands        98.92        95.83
13    Cross one’s arms             93.75        99.23
14    Narrow one’s shoulders       71.28        93.62
15    Put on a jacket              97.87        85.42
16    Put on a hat/cap             98.74        97.87