Article

Towards Characterization of Indoor Environment in Smart Buildings: Modelling PMV Index Using Neural Network with One Hidden Layer

Department of Traction and Traffic Control, Faculty of Electrical and Computer Engineering, Cracow University of Technology, 31-155 Cracow, Poland
Sustainability 2020, 12(17), 6749; https://doi.org/10.3390/su12176749
Submission received: 15 July 2020 / Revised: 12 August 2020 / Accepted: 14 August 2020 / Published: 20 August 2020
(This article belongs to the Special Issue IEIE Buildings (Integration of Energy and Indoor Environment))

Abstract
Modelling comfort with neural networks has become very popular in recent years because these methods offer satisfactory accuracy. The article proposes a method for modelling feedforward neural networks that makes it possible to obtain the most efficient network with one hidden layer in terms of a given quality criterion. The article also presents a methodology for modelling the PMV index, on the basis of which it can be demonstrated whether the network will work properly not only on paper but in reality as well. The objective of this work is to develop a performance model allowing the effective improvement of all electrical and mechanical devices affecting the energy efficiency and indoor environment in smart buildings. To achieve this, several attributes of the indoor environment are included, namely: air leakage, as a connection to the outdoor environment but also as an uncontrolled component of the energy balance; ventilation, as the delivery and distribution of fresh air in the building space; individual on-demand ventilation for indoor air quality (IAQ) in the dwelling, or personal IAQ control; source control of pollutants in the building; thermal comfort; temperature, air movement and humidity control (humidity modifiers, i.e., buffers different from air conditioning); and radiation from cold and hot surfaces. Together, these attributes bring forward a question about the strategy of process control. One may either develop a series of control models to be synthesized later, or use one over-arching characteristic and operate the control system with its components. The paper addresses the second strategy and uses the concept of PMV as a criterion of broadly defined thermal comfort (including ventilation and air quality).

1. Introduction

A team of collaborating authors in the US and Poland addresses the issue of energy efficiency of modern buildings. The scope of work includes preheating of ventilation air [1,2,3,4], multi-criteria analysis [5,6], and the development of an integrated approach called Environmental Quality Management [7,8,9] in the context of building physics [10,11], fuzzy logic [12] and artificial neural networks [13,14]. The greater focus on neural networks (NNs) is justified by the observation that currently used energy and hygrothermal models are parametric and may not be suitable for application in real-time building automation controls [14,15]. This statement is even more justified when considering their application to thermal comfort evaluation.
The evaluation of thermal comfort is so important because indoor environments have become humans' dominant habitat: research [16] proves that more than 90% of our time is spent indoors. The productivity, health and well-being of building occupants depend on four aspects of indoor environmental quality (IEQ) [17]: acoustics, indoor air quality, visual comfort [18,19] and thermal comfort. Studies have shown that thermal comfort is the dominant aspect of IEQ and that it is related to occupant productivity [20].
Nowadays, the concept of comfort is determined by statistical evaluation of the degree of satisfaction with a specific factor, e.g., the thermal environment [21,22,23]. This concept depends on the individual psychological and physiological condition of a given person in the specified environment [24,25,26]. One's psychological and physiological state depends on many factors, which can be divided into environmental and individual factors [27]. Environmental factors include measurable quantities such as air temperature, air velocity, relative air humidity, radiation temperature and the asymmetry of temperature distribution in a room [27]. The individual factors are metabolism and the thermal insulation of clothing [27].
The concept described above is extremely important in the construction industry [28]. This industry consumes approximately 40% of energy [29,30], and since space heating and cooling use about 50% of that, a large share of energy consumption depends on how efficient the energy management system in a building is. It has been estimated that efficient energy management systems in buildings can save up to 8% of energy consumption [31].
In view of the indicated facts, the subject of reducing energy consumption in connection with comfort issues is very popular in research environments [1,2,3,5,6,7,8,9,10,32,33,34]. It is worth mentioning, for example, the research presented in [34], which showed a reduction in energy consumption when thermal comfort was taken into account in controlling the air-conditioning temperature.
It should also be noted that studies of comfort are not limited to energy consumption, since they also aim to reduce illness incidence rates and increase efficiency [4,11,35,36,37].

1.1. The Novelty and Purpose of the Work

The article deals with the present research gap identified by research societies and described in detail in [22]. The main novelty proposed by the author is a method (Section 2.4.3 and Section 2.6) that finds and examines the most important properties of feedforward neural network structures with one hidden layer, such that the resulting PMV index model is the best in terms of the selected evaluation criterion. With respect to this criterion, the method also identifies the best neural network among all those examined.
The description of such a method for NNs with one hidden layer is the main purpose of the work. The presented method, with appropriate data preparation (Section 2.5.1), makes it possible to fill the research gap noticed during the review of the world literature. What is worth noting in the publications dealing with filling the aforementioned research gap with the use of NNs [12,22,26,30,38,39,40,41] is the tendency to present the results of one specific NN. These articles do not mention whether a robustness study (Section 3.1) or an overfitting and underfitting study (Section 3.2) of the NN structure was performed. In this article, the author presents such a study and wishes to point out that when modelling comfort issues, omitting robustness studies or overfitting and underfitting studies may lead to incorrect conclusions. Hence, a broader description of these issues is included in Section 2.6.
The article presents an identification algorithm for the best neural network, which gives the network designer information on how to find the optimal number of neurons for a specific network layer when modelling the PMV index. This algorithm is used here for networks with one hidden layer; in the future, however, it will be used for deep neural network (DNN) analyses, as is done for deep belief networks (DBNs) [42]. During the review of the world literature on the discussed issue, the author noticed that identification of the structure was not commonly performed. In the author's opinion, this should be done especially for NNs with one hidden layer [22,40,41] (see Section 4—Chance of choosing the wrong structure).
The research presented in the article is the first part of a broader look at the analyzed issue. The advantage of this part is the possibility of interpreting the influence of the network input arguments on the PMV index describing individual thermal comfort, thanks to the special form of the equation (Appendix D) identified by the presented method (Section 2.4.3 and Section 2.6). This advantage derives from the fact that models formulated using NNs with one hidden layer are much simpler than those based on DNNs.
Another novelty is the presentation of the problems one should expect when modelling a PMV index using feedforward neural networks with one hidden layer (Figure 1). The article also explains when this approach can be used. It is extremely important that the person who models a PMV index using an NN does not make basic mistakes, which at a later stage of the research would result in, for example, the lack of applicability of the designed network due to its sensitivity to changes in the input data values (Section 2.6).

1.2. The Structure of the Paper

The article consists of five sections.
In Section 1, the general research issues and the novelty and purpose of the work are described.
Section 2 includes the research methodology and the data used.
In Section 3, the results of the research that allowed the best NN to be identified are presented, following the methodology from Section 2.
In Section 4, the obtained results are discussed and compared with those obtained in other studies. The implications of the study results are also discussed.
In Section 5, the six most important conclusions of the presented research are included, along with a description of the future research program.

2. Materials and Methods

2.1. Personalized Thermal Comfort Models

Predicted mean vote (PMV) [43] and the comfort zone defined by the American Society of Heating, Refrigerating and Air Conditioning Engineers (ASHRAE55) [44] are the most popular methods of assessing thermal comfort. The first of these was presented by Fanger [44] and then incorporated into the ISO7730 standard [22].
In addition, two main standards of adaptive models are worth highlighting. The first is the already mentioned ASHRAE55 adaptive model [45], and the second is the European Standard 15251 (EN15251) adaptive model [46]. The above-mentioned models are used quite widely in international standards; however, the need to improve their forecasting efficiency is noticeable [21], especially in the case of individual comfort [47,48].
This is because these models are not very flexible in terms of the comfort characteristics of individual people and cannot be updated to match them [21]. Moreover, they currently cannot be reformulated [49]. An important reason for this is that the two main models [45,46] are designed to predict the average comfort level of a particular large representative group. As a consequence, they are at odds with analyses of comfort that take into account the individual aspects of a user living in a given area and working in a specific environment; instead, they aim at significant generalizations.

The Present Research Gap, Literature Review

There is a wide variety of studies on thermal comfort and building energy consumption conducted with the use of artificial intelligence (AI). For example, in [50] an ANN and EnergyPlus were used with data from the Administration Building of Sao Paulo University to predict building energy consumption. In [51], an artificial neural network (ANN) was designed to simulate energy consumption according to different exterior wall materials. The use of AI to predict building energy consumption is also presented in [52,53,54,55]. In [56], an ANN energy consumption prediction model for HVAC systems in office buildings was developed, but the model has relatively few input parameters. A general description of how AI techniques may support energy-efficient HVAC system design is given in [57]. From the point of view of the use of AI for individual thermal comfort, it may be beneficial to consult [38,58,59], where neural networks are used for thermal comfort evaluation and regulation in buildings. In addition, AI techniques are currently combined with IoT [60] in thermal comfort control systems for buildings. A comprehensive literature review of artificial intelligence for efficient thermal comfort systems, current as of April 2020, is presented in [61].
In light of the above, there is a gap in comfort studies (described in more detail in [22]), which consists in the lack of flexibility of the models describing comfort. Therefore, there is a need for a comfort modelling method that can be adapted to specific data received in a specific region of the world for a specific group of people. Thanks to this, one can speak of an individual approach to the topic that still takes into account a set of features representing a given community. The described gap was noticed some time ago, which explains the current abundance of publications on this subject [1,2,3,5,7,10,11,12,22,27,28,29,30,31,32,33,34,35,45,46,47]. It is worth quoting [22], which states that the establishment of an individual thermal comfort evaluation is essential to achieving personalized thermal environment management.
Thanks to scientists from all over the world, the issue of comfort has been gradually developing. It may be interesting that the researchers are searching for solutions on the topic described above by using machine learning and deep learning [12,22,26,30,38,39,40]. In most cases, this trend is based on PMV index modelling with the use of feedforward neural networks.
For example, in one of the latest publications of this type [22] (2019), a system based on the Building Information Model (BIM) and artificial neural networks (ANN) is used to improve energy saving efficiency under the premise of increasing human comfort. This system consists of, among others, an ANN predictive model considering the PMV index.
Another publication of this type is [26], where the authors use an "Artificial Neural Network (ANN) due to its ability to approximate any nonlinear mapping." In this work, the authors model the PMV index and state that "using ANN to train, we can get the input-output mapping of HVAC control system (…); we can propose a practical approach to identify thermal comfort of a building" [26].
Depending on the research team dealing with the described subject, neural networks of different levels of complexity are proposed. For example, in [38] a neural network (NN) thermal comfort evaluation model is proposed with only four environment variables as input values. In that work, the model was based on the backpropagation algorithm and ignores the differences in individual thermal sensation. On the other hand, [41] presents the use of six different algorithms, including those correlated with NNs (CTree, GPC, GBM, kSVM, RF, and regLR), to develop personal comfort models. In that publication, environmental data and the behavior of users of the Personal Comfort System (PCS) act as input variables. It is also worth mentioning that the authors use boxplots to present certain features such as prediction accuracy or variable combinations, which is a very good solution.
The use of boxplots as a tool for visualizing and assessing the accuracy of calculations correlated with comfort modelling using ANN can also be seen in [40]. The author of the publication proposes a complex ANN model for predicting thermal comfort, taking into account three variables of current climatic conditions, four indoor environmental variables, and two individual variables, building types, as well as a body variable. The essence of this paper is the fact that it demonstrates the high potential of using ANN in evaluating individual thermal comfort.
It is important to note that, so far, a quite popular network architecture for the described issue has been the classic one [42], consisting of at most one hidden layer and one output layer [22,40,41]. Recently, however, an approach using deep learning with different levels of network structure complexity has been applied [26,30,38]. In view of the above, it is worth looking at these works, while recognizing that it is time to propose tools that support better modelling of the structures already in use.

2.2. Classic PMV Thermal Comfort Evaluation Model

Studies on thermal comfort carried out over the last fifty years or more show that many factors influence it. Although the knowledge about these factors is currently recognized to a satisfactory degree [62,63,64,65], the formulation of a mathematical model involving all of them is an issue that has not yet been solved.
The first studies conducted on such a model have already yielded reliable results. These studies were conducted by Fanger, P.O. [43], who proposed an equation to estimate the average vote of a large group of persons on the thermal sensation scale. In this equation, Fanger proposed that the PMV index be described using six factors: air temperature, air velocity, clothing insulation, humidity, mean radiant temperature and metabolic rate.

2.3. Deep Learning or Classic Network Structure

In the world literature, there is a noticeable trend in which, along with modelling of the comfort index with the use of NNs, a special case of a neural network with a specific structure [22,26,30,38,39,40,42] is presented. Usually, the authors of these works propose a network that functions properly under certain physical conditions. However, these conditions may not be met for another social group or building type. Therefore, the proposed networks are suitable for a fairly narrow group of cases in which they work with satisfactory performance. Importantly, the authors of works dealing with comfort modelling use various learning techniques and methods to achieve the intended goal. Two dominant paths can be distinguished: the first using classic NN structures with one hidden layer [22,40,41] and the other using deep learning [26,30,38].
At this point, however, the question arises: why and in which case is it worth using the classic structure of a neural network with one hidden layer, and when to model the issue using deep learning methods? The answer to this question is quite clear and it results from the very functioning of neural networks.
While using classic neural networks with one hidden layer, one usually employs mapping methods that find the relationship between the network’s input arguments (“inputs”) and its reference output value (“target”). In this case, the network learning process is designed to find, for example, a function mapping that will best combine the mentioned input arguments with the output value.
This means that if the authors use neural networks with one hidden layer, then it is necessary for them to prepare and properly process the data. In this case, it is the author of the network model’s responsibility to properly select the learning data. During such selection, the author of the network should independently choose the learning data so that they fully characterize the most important features of the studied social group, taking into account the nature of the building and other conditions. In order for authors to do this, they are expected to have expert knowledge, without which it becomes practically impossible. Lack of this knowledge means that despite the application of network structure optimization, the models will be burdened with unacceptable errors, which is shown in Figure 1. In addition, poorly selected data in this case distort the process of mapping the phenomenon.
In view of the above, if the authors of the network use a classic structure with one hidden layer, then the correctness of the modelled features depends on their expert knowledge, since such a structure cannot identify them.
Unlike classic neural networks with one hidden layer, deep learning methods introduce more hidden layers into the network structure. The consequence is that the network gains a kind of awareness of the learning data: the additional hidden layers allow identification of common features occurring between the input arguments of the network. This identification occurs during the network learning process, and the output value is estimated on the basis of the identified features.
The main difference between classic feedforward networks with one hidden layer and networks using deep learning is that the latter are able to identify the characteristics of the data assigned to the learning process, while the former cannot. In addition, due to the greater complexity of the networks’ structure and their expanded capabilities, when using deep learning methods, much greater expert knowledge in the field of modelling is required from the network designer. During the modelling process, the designer focuses on choosing structure, teaching technique and network validation methods so that the characteristics of the learning data are detected in the best possible way. Unlike the classic structures of neural networks with one hidden layer, using “deep” structures does not require expert knowledge related to the modelled object. This does not mean, however, that the knowledge is superfluous, because some awareness of phenomena correlated to the modelled object is necessary when choosing the number of hidden layers and the number of neurons in a given layer. This selection is extremely important, as it is responsible for the number of identified features and the quality of modelling.
In conclusion, in order to use feedforward neural networks with one hidden layer, expert knowledge associated with the modelled object and correlated phenomena is obligatory. It is used to properly select data so that they characterize the features of the object. On the other hand, while using “deep” networks, this expert knowledge is not required. Nevertheless, certain awareness of the functioning of the modelled object is necessary. This is due to the ability to identify the features of the object by using more hidden layers. Therefore, expert knowledge in this case is shifted towards issues related to neural networks, and not the modelled object itself or the phenomena correlated with it.

2.4. Data Processing, Network Structure, General Equation, Structure Identification Method

Proper preparation of training data is crucial for modelling with a neural network with one hidden layer. Due to the diverse specificity of such data, this chapter will not describe how to select them, because each case of modelling the PMV index for a specific group of people can have completely different features. As already mentioned, the identification of these features requires expert knowledge, preferably combined with knowledge in the field of feature engineering [40].
However, there are data processing techniques that, as confirmed by studies [66,67,68,69,70,71], increase the mapping quality of a feedforward neural network with one hidden layer. These techniques include normalization methods, for example the "mapminmax" method, which provides the satisfactory convergence necessary for PMV index modelling. It should be noted that the use of the "mapminmax" method is not mandatory and that the effectiveness of normalization depends on the specific case of selected learning data.

2.4.1. Data Processing—The Mapminmax Method

Data normalization in artificial intelligence methods is mainly used to process data so that the values of the network input argument matrix $X$ influence the result at the same maximum level [40]. Thanks to this procedure, the individual arguments $x_i$ of the input matrix can be treated as equivalent in terms of their impact on the network output value $y$. The use of such a technique involves pre- and postprocessing of the data, because if the data are transformed into another form, then after being processed by the network they must be converted back to values corresponding to the originals. The introduction of pre- and postprocessing is usually a deliberate procedure because, as research shows [66,67,68,72], it yields better convergence of the network results (outputs) with the data obtained from measurements (targets). However, one should bear in mind that this depends on the choice of activation function. If a sigmoid function is chosen as the activation function [42], then it is recommended to introduce normalization and denormalization of data. On the other hand, if the ReLU function is selected as the activation function, this is not required. It should be noted that the ReLU function is commonly used in "deep" networks rather than in networks with one hidden layer, which are described in this article.
Therefore, for modelling the PMV index, the author recommends using the “mapminmax” normalization method. The research results presented below take into account the use of this method.
The mapminmax function is a linear transformation into the interval of given boundaries [70] (Equation (1)):

$$Val = \frac{Val_{org} - Val_{min}}{Val_{max} - Val_{min}} \cdot \left(Val'_{max} - Val'_{min}\right) + Val'_{min}$$

where $Val_{org}$ — original value, $Val$ — transformed value, $Val_{max}$ and $Val_{min}$ — original interval boundaries, $Val'_{max}$ and $Val'_{min}$ — desired range boundaries, here from −1 to 1.
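As a minimal sketch, the pre- and postprocessing step can be written with the toolbox mapminmax function as follows (MATLAB Deep Learning Toolbox; X_raw, Y_raw and y_NN_norm are illustrative names):

```matlab
% Minimal sketch of pre- and postprocessing with mapminmax (Equation (1)).
% X_raw (20 x M) is assumed to hold one sample set per column and
% Y_raw (1 x M) the corresponding PMV targets.
[X_norm, ps_in]  = mapminmax(X_raw, -1, 1);    % normalize each input row to [-1, 1]
[Y_norm, ps_out] = mapminmax(Y_raw, -1, 1);    % normalize targets to [-1, 1]

% ... network training and simulation on the normalized data ...

% Postprocessing: map normalized network outputs back to the original scale.
y_NN = mapminmax('reverse', y_NN_norm, ps_out);
```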

2.4.2. The Network Structure and its General Equation

The structure of the feedforward neural network with one hidden layer is shown in Figure 2. In this figure, the blocks of pre- and postprocessing data are shown in green, while the particular layers of the network are drawn against a yellow background. The layers are marked according to the recommendations in [73]: the hidden layer is marked with the index {1}, while the output layer is marked with the index {2}. Activation functions are marked with the "activ" subscript. The network weights are designated as $W_{neur\_id}$, where $neur\_id$ denotes a specific neuron; the same convention is used for the biases $b_{neur\_id}$.
The structure from Figure 2 is represented by Equation (2) in the general form:

$$y_{NN} = \mathrm{denorm}_{mapminmax}\left(f_{activ}^{\{2\}}\left(W^{\{2\}} \cdot f_{activ}^{\{1\}}\left(W^{\{1\}} \cdot \mathrm{norm}_{mapminmax}(X) + B^{\{1\}}\right) + b_{1}^{\{2\}}\right)\right)$$
where:
  • $\mathrm{norm}_{mapminmax}$ — data preprocessing operation,
  • $\mathrm{denorm}_{mapminmax}$ — data postprocessing operation,
  • $X$ — input data vector,
  • $W^{\{1\}}$ — matrix of weights of input arguments for the hidden layer,
  • $B^{\{1\}}$ — column vector of biases for the hidden layer,
  • $s^{1}$ — number of neurons in the hidden layer,
  • $f_{activ}^{\{1\}}\left(arg^{\{1\}}\right)$ — hidden layer activation function,
  • $arg^{\{1\}}$ — argument of the hidden layer transfer function, described as:
    $$arg^{\{1\}} = W^{\{1\}} \cdot X + B^{\{1\}}$$
  • $W^{\{2\}}$ — vector of weights of input arguments for the output layer,
  • $b_{1}^{\{2\}}$ — bias for the output layer,
  • $f_{activ}^{\{2\}}\left(arg^{\{2\}}\right)$ — output layer activation function,
  • $arg^{\{2\}}$ — argument of the output layer transfer function, described as:
    $$arg^{\{2\}} = W^{\{2\}} \cdot Y^{\{1\}} + b_{1}^{\{2\}}$$
  • $y_{NN}$ — output value of the NN.
The matrices and vectors of Equation (2) are as follows (Equation (4)):

$$X = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_{20} \end{bmatrix},\quad B^{\{1\}} = \begin{bmatrix} b_{1}^{\{1\}} \\ b_{2}^{\{1\}} \\ \vdots \\ b_{s^{1}}^{\{1\}} \end{bmatrix},\quad W^{\{1\}} = \begin{bmatrix} w_{1,1}^{\{1\}} & w_{1,2}^{\{1\}} & \cdots & w_{1,20}^{\{1\}} \\ w_{2,1}^{\{1\}} & w_{2,2}^{\{1\}} & \cdots & w_{2,20}^{\{1\}} \\ \vdots & \vdots & \ddots & \vdots \\ w_{s^{1},1}^{\{1\}} & w_{s^{1},2}^{\{1\}} & \cdots & w_{s^{1},20}^{\{1\}} \end{bmatrix},\quad W^{\{2\}} = \begin{bmatrix} w_{1,1}^{\{2\}} & w_{1,2}^{\{2\}} & \cdots & w_{1,s^{1}}^{\{2\}} \end{bmatrix}$$
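To make Equation (2) concrete, the following minimal sketch evaluates the network output directly from the weight matrices. It assumes W1, B1, W2, b2 and the mapminmax settings ps_in, ps_out have been extracted from an already trained network; all variable names are illustrative.

```matlab
% Direct evaluation of Equation (2) for a single input vector x (20 x 1),
% assuming W1 (s1 x 20), B1 (s1 x 1), W2 (1 x s1) and the scalar bias b2
% come from an already trained network.
x_n  = mapminmax('apply', x, ps_in);         % preprocessing, norm_mapminmax(X)
arg1 = W1 * x_n + B1;                        % hidden layer argument arg{1}
y1   = 2 ./ (1 + exp(-2 * arg1)) - 1;        % tansig activation, Equation (10)
arg2 = W2 * y1 + b2;                         % output layer argument arg{2}
y_NN = mapminmax('reverse', arg2, ps_out);   % purelin output (a = 1) + postprocessing
```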

2.4.3. The Method Identifying Structures with the Best Number of Neurons in a Hidden Layer and the Best Neural Network

The complexity of the structure of a neural network with one hidden layer depends, in the analyzed case, on the number of neurons in the hidden layer. The parameter characterizing this complexity is the value of $s^{1}$. It determines the size of the matrices $W^{\{1\}}$, $B^{\{1\}}$ and $W^{\{2\}}$, and, together with the data assigned to the learning process, it characterizes the modelled phenomenon.
The essence of proper selection of $s^{1}$ is that its value is neither too small nor too large. Incorrect selection of $s^{1}$ leads to undesirable phenomena such as overfitting or underfitting [70], and others [42].
Therefore, it is recommended to choose $s^{1}$ so that it meets a certain assumed criterion of mapping quality. For this purpose, the author recommends the algorithm shown in Figure 3. After completing the procedure presented in this figure, it is necessary to assess whether the selected structure is characterized by repeatability of the obtained results and whether the impact of overfitting or underfitting is negligible. An example of such an assessment is presented in further sections of the article.
In the algorithm, the range of $s^{1}$ from 1 to 50 was selected due to the specificity of the data for the NN. This range was chosen in order to show that above a certain number of neurons in the hidden layer, the network is not suitable for use.
The algorithm shown in Figure 3 aims to identify the best possible complexity of the neural network structure in terms of the chosen evaluation criterion, and it also enables indication of the best network obtained for this criterion. It consists of two procedures, P1 and P2, and, in crude terms, it trains all considered structures of the feedforward two-layer neural network in sequence, performs their validation and tests the correctness of their functioning. The aforementioned identification of the best possible dimension of the matrices involves checking fifty different network structures differing in the number of neurons $s^{1}$ in the hidden layer; in the algorithm, this number increases by 1 in the range of 1 to 50. Because in the first iteration of the network learning process the initial values of weights and biases in the neurons are assigned randomly, the calculations for each of the examined structures with a specific number of neurons in the hidden layer are repeated several times. In the algorithm, each such calculation is indexed as approach and numbered consecutively from 1 to 10. The described repetitions of the learning process for the same structures are necessary because they increase the probability of reaching the global minimum of the performance function [66,74]. The lack of such a procedure could cause the learning process to stop at a local minimum of the performance function, which would lead to erroneous conclusions regarding the usefulness of a particular network structure. Repeating the complete learning process several times, including network training, validation and testing, has another advantage: it enables the study of robustness of the considered neural network structures (Section 2.6.1 and Section 3.1). The author recommends that this test be carried out on the basis of specially created boxplots [75], as shown in [39] and as will be shown in Section 3.1.
As already indicated, the described algorithm identifies the best possible structure in terms of a certain quality criterion. This identification takes place in the block: Are the currently obtained results better than “The best results so far”?
This criterion should be selected depending on what is expected from the designed neural network. Usually, expectations come down to the choice between two criteria:
1. When the network designer wants to achieve the best possible quality of function mapping to the data assigned to the learning process and to independent data that may occur when the network is used. In this case, it is recommended to select the criterion of the minimum value of the Maximum Absolute Relative Error [76] calculated for the network testing stage and obtained for a given approach (Equation (5)):

$$MainCrit = \min\left(MaxARE_{TEST}\left(s^{1}, approach\right)\right)$$

where:
  • $MainCrit$ — main criterion for choosing the best neural network structure,
  • $s^{1}$ — number of neurons in the hidden layer,
  • $MaxARE_{TEST}$ — Maximum Absolute Relative Error obtained for the testing stage (Equation (6)):
    $$MaxARE_{TEST} = \max\left(\frac{\left|y_{i}^{Test} - y_{NN_i}^{Test}\right|}{\left|y_{i}^{Test}\right|}\right)$$
    where:
    • $y_{i}^{Test}$ — target for the network in the testing stage,
    • $y_{NN_i}^{Test}$ — output of the network in the testing stage.
2. When the network designer cares mostly about the speed of the network, and only then about the quality of its function mapping to the data assigned to the learning process and to independent data that may occur when the network is used. In this case, the decisive factor is the minimum network complexity that still ensures satisfactory compliance of the network results (outputs) with its target results (targets). This factor is dominant because the speed of network operation depends mainly on three conditions: the computing performance of the calculating machine, the precision of significant digits depending on the data format, and the computational complexity depending on the size of the network structure. Since the network designer usually has no influence on the first two conditions, the speed of the network is determined by the size of the hidden layer. The requirements given in point 2 can be met if one chooses the following identification criterion (Equation (7)):

$$MainCrit = \min\left(MaxARE_{TEST}\left(s_{min}^{1}, approach\right)\right)$$

where:
  • $MainCrit$ — main criterion for choosing the best neural network structure,
  • $s_{min}^{1}$ — the smallest number of neurons in the hidden layer,
  • $MaxARE_{TEST}$ — Maximum Absolute Relative Error obtained for the testing stage.
The criteria listed above are examples and may change depending on the modelled phenomenon. The decision to propose the minimum value of the Maximum Absolute Relative Error (Equation (6)) results from the specifics of this indicator. Since it operates on absolute values, the Absolute Relative Error of each network output ($y_{NN_i}$) is guaranteed to lie between zero and the Maximum Absolute Relative Error [77,78]. Consequently, the indicator gives information about the absolute value of the maximum possible error of the mathematical model [79,80,81] represented by the trained neural network. The criterion only includes data from the testing stage. This is a deliberate procedure, because if the trained neural network commits a minimal Relative Error on data that were never used at the stage of network training and validation, then its results should differ the least from the actual values. It should also be noted that the results obtained from the testing stage assess the likelihood of overfitting, which becomes smaller as the network responses ($y_{NN_i}^{Test}$) approach the target values $y_{i}^{Test}$.
Analyzing criteria (5) and (7), one can see that the main mathematical difference between them is the replacement of $s^{1}$ by $s_{min}^{1}$. This change shifts the main emphasis of the calculations from achieving the best quality of function mapping to achieving the highest speed of the neural network.
It is worth noting that meeting criterion (5) or (7) only initially identifies the best network structure. The final assessment of the applicability of a network structure should take into account only those structures that:
  I. show robustness to changes of the initial values of weights and biases in the network neurons,
  II. are characterized by a negligible impact of overfitting or underfitting.
Condition I concerns the network's robustness to the initial values of weights and biases; it is introduced in order to identify those features of the neural network model that ensure the highest possible repeatability of results for the presented research issue [14,78].
Condition II shows whether the network has memorized the data assigned to the learning process (overfitting) or whether it is too simple (underfitting).
The verification of both conditions will be presented later in the article using measurement data. Here, however, it should be emphasized that, from a mathematical point of view, meeting criterion (5) or (7) is a necessary condition, whereas the fulfilment of Conditions I and II is a necessary and sufficient condition leading to full network applicability.
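For illustration, the identification loop of Figure 3 can be condensed into the following sketch under Criterion 1 (Equation (5)). It assumes MATLAB's Deep Learning Toolbox; Xn, Yn and the index vectors trainInd, valInd, testInd are placeholders for the prepared data of Section 2.5.1, and the criterion is evaluated on the normalized scale for brevity.

```matlab
% Condensed sketch of the structure identification algorithm (Figure 3),
% Criterion 1 (Equation (5)); Xn (20 x M) and Yn (1 x M) are normalized data.
bestCrit = Inf;  bestNet = [];  bestS1 = NaN;  bestApproach = NaN;
for s1 = 1:50                                  % candidate hidden layer sizes
    for approach = 1:10                        % repeated runs with random initial weights
        net = feedforwardnet(s1, 'trainlm');   % one hidden layer, Levenberg-Marquardt
        net.divideFcn = 'divideind';           % fixed manual data division
        net.divideParam.trainInd = trainInd;
        net.divideParam.valInd   = valInd;
        net.divideParam.testInd  = testInd;
        net = train(net, Xn, Yn);
        yT   = Yn(testInd);                    % targets of the testing stage
        yNNT = net(Xn(:, testInd));            % network outputs for the testing stage
        MaxARE_TEST = max(abs(yT - yNNT) ./ abs(yT));   % Equation (6)
        if MaxARE_TEST < bestCrit              % Equation (5): keep the best network so far
            bestCrit = MaxARE_TEST;  bestNet = net;
            bestS1 = s1;  bestApproach = approach;
        end
    end
end
```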

2.5. Data for NN and Chosen Learning Specification

This subsection presents a description of chosen examples of learning data and the selection of parameters characterizing the size and structure of feedforward networks with one hidden layer. The subsection is the final part of the paper that describes the theoretical stage of PMV index modelling and begins the practical stage, in which the assessment of modelling quality and the possibility of using the taught neural network is presented. The calculations were made using the MATLAB environment version 9.8.0.1417392 (R2020a), update 4, license number: 214849, with Deep Learning Toolbox, Version 14.0, (R2020a).

2.5.1. Data for NN

Measurement data titled "Langevin Data Legend" by Jared Langevin ([email protected]) from Drexel University, Department of Civil, Architectural and Environmental Engineering were used to model the PMV index. The source of the data was [82]. The choice of data for the example was related to the fact that a case study on tracking human-building interaction, described in [23], had already been prepared on their basis. The research object was a medium-size office building with an area of approximately 58,000 square feet, with a variable air volume (VAV) system, operable windows and adjustable thermostats, located in Philadelphia, PA (Center City). The building was described in detail in [23], which also contains a description of the measurements, their results and analyses, as well as graphs and histograms describing the data distributions and their specificity. The data used for the NN were collected in the period from July 2012 to July 2013. They were registered at fifteen-minute intervals and cover local thermal conditions, related behaviors, and the comfort of twenty-four occupants. The summary of the data was created on 20 July 2015 and contains a total of 840,984 measurement samples along with the results of their analyses. The data cover a total of seven categories of results:
  • general,
  • environment,
  • personal characteristics,
  • comfort/productivity/satisfaction,
  • behaviour,
  • personal values,
  • model.
The total number of parameters included in the aforementioned categories is 118, and their detailed descriptions can be found in [82]. From these 118 parameters, those representative for PMV index modelling were selected. The choice of these parameters was associated with the specificity of the data and the scientific experience of various researchers, as described in [22,28,38]. Finally, 20 parameters were selected for the needs of modelling; they are the arguments of the PMV index model for the said building and were written in the $X$ matrix (Equation (4)). The elements of this matrix, along with a description of the parameters, are given in Table 1. The data also contained the PMV index for this building [23,82]; the values of this index were the reference output values $y_i$ of the network (Table 2).
The histograms of all input data and reference output values (targets) assigned to the learning process, divided into learning stages, are shown in Appendix A, Figure A1 and Figure A2.
In order to build an exemplary PMV index model from the data in [82], a vector of the first 30,000 sets of samples was taken. It was then checked whether all the elements listed in Table 1 and Table 2 in these sets are complete and differ from "not a number" (NaN). In this way, a vector of 11,309 sample sets was obtained, in which every element was complete and was not NaN. Next, samples generating possible model noise were identified in this vector; the number of such sample sets equalled 663. As a result, a vector of 10,646 sample sets was selected to model the PMV index. This vector was divided into two matrices:
  • matrix X (inputs) with dimensions of 10,646 × 20, which is a series of input sets of samples assigned to the network learning process,
  • the matrix Y (targets) with dimensions of 10,646 × 1, which is a series of PMV index reference output values obtained from the data included in the matrix X.
Matrices X (inputs) and Y (targets) were used to train 50 neural network structures (500 NNs) according to the algorithm shown in Figure 3. To do this, the elements of the sample sets $(X_i, y_i)$ were divided into three parts (training, validation, tests). The first part, with 60% of the data sets, was assigned to the network training stage $(X_i^{Tr}, y_i^{Tr})$. The second and third parts, with 20% of the data sets each, were assigned to the validation stage $(X_i^{Val}, y_i^{Val})$ and the network testing stage $(X_i^{Test}, y_i^{Test})$, respectively. Exemplary effects of assigning the data sets to a specific learning stage are shown in Figure 4.
The algorithm for assigning data sets was saved as a loop that resets the iteration step after the assignment of the data set to the testing stage. This algorithm assigned data sets as follows: the first three data sets from a series of sets were assigned to the network training stage, the fourth set was assigned to the validation stage, the fifth set was assigned to the testing stage. Then, the abovementioned loop iteration reset took place. The process of assigning data to a specific learning stage was completed after assigning all the prepared 10,646 data sets.
Data prepared in this way (thanks to the fixed, manual assignment of data sets to specific learning stages) limited the randomness of the neural network results for a given approach.
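A minimal sketch of the described cyclic assignment is given below; the index vectors produced here can be passed to the toolbox through the 'divideind' division function. Variable names are illustrative.

```matlab
% Cyclic 3/1/1 assignment of the 10,646 prepared sample sets: positions 1-3
% of each five-set cycle go to training, position 4 to validation and
% position 5 to testing, after which the loop iteration resets.
nSets    = 10646;
pos      = mod(0:nSets-1, 5) + 1;     % position within the five-set cycle
trainInd = find(pos <= 3);            % 60% of the data sets
valInd   = find(pos == 4);            % 20% of the data sets
testInd  = find(pos == 5);            % 20% of the data sets
```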

2.5.2. Chosen Learning Specification

The specification presented in this chapter should be treated as selected for the data described in the previous chapter. The choice of such a specification depends on the nature of the data assigned to the learning process, so the author encourages specialists who model the PMV index using the method described in this article to select the learning parameters based on knowledge of those data. The learning parameters and activation functions presented below are characterized by satisfactory convergence of the network results with the reference data for the tested object. These parameters were selected on the basis of experiments carried out by the author; the chosen results on which the selection was based are presented in Appendix B. This selection can also be made using hyperparameter optimization methods, e.g., Random Search, Grid Search or Bayesian optimization. The selected specification of the learning process and the networks is presented below in a way that enables further description of the selection and evaluation of the network for the method described in this article.
The learning parameter values given for the network learning process are presented in Table 3.
The research was carried out using the Levenberg-Marquardt training algorithm [83], which was proposed because it showed satisfactory performance in preliminary calculations. The mean squared error ($MSE$) was chosen as the performance function (Equation (8)):

$$MSE = \frac{\sum_{i=1}^{n}\left(y_i - y_{NN_i}\right)^2}{n} = \frac{\sum_{i=1}^{n}e_i^2}{n}$$
where:
  • $n$ — number of sets for each learning stage: training $(X_i^{Tr}, y_i^{Tr})$, validation $(X_i^{Val}, y_i^{Val})$, tests $(X_i^{Test}, y_i^{Test})$,
  • $y_i$ — target for the network (reference PMV index value),
  • $y_{NN_i}$ — output of the network corresponding to the i-th target (estimated PMV index value).
The Error was also implicitly defined (Equation (9)) as:

$$e_i = y_i - y_{NN_i}$$
As the activation function in the hidden layer, according to [42,66,74], a hyperbolic tangent sigmoid function (Equation (10)) was implemented. In the output layer, according to the recommendations from [74], a linear activation function (purelin, Equation (11)) was implemented. The activation functions were selected following the recommendations for the analysis of non-linear functions [66], since the modelled issue belongs to this class of phenomena.
The hyperbolic tangent sigmoid function was chosen because its output values, unlike those of the Sigmoid and ReLU functions, span both positive and negative values; as a result, smaller network complexity in the hidden layer can be expected [42,66,74]. In addition, this function allows both continuous and discrete changes of the input data to be taken into account [70], which is the case here. Moreover, thanks to the choice of the linear activation function (purelin) in the output layer, it is possible to evaluate the quality of the model fit (coefficient of determination $R^2$), as well as to perform regression analysis with the Pearson coefficient ($R$) [74].
$$f_{activ}^{1}\left(arg^{\{1\}}\right) = \frac{2}{1 + e^{-2 \cdot arg^{\{1\}}}} - 1$$

$$f_{activ}^{2}\left(arg^{\{2\}}\right) = a \cdot arg^{\{2\}}$$

where:
$a = 1$ — directional coefficient.
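For completeness, a sketch of how this specification maps onto the toolbox configuration is given below. The numeric training parameters are placeholders standing in for the values of Table 3, which is not reproduced here.

```matlab
% Sketch of the chosen learning specification in toolbox terms.
net = feedforwardnet(s1, 'trainlm');      % Levenberg-Marquardt training [83]
net.layers{1}.transferFcn = 'tansig';     % hidden layer activation, Equation (10)
net.layers{2}.transferFcn = 'purelin';    % output layer activation, Equation (11), a = 1
net.performFcn = 'mse';                   % performance function, Equation (8)
% Illustrative learning parameters; the actual values are listed in Table 3.
net.trainParam.epochs   = 1000;           % maximum number of learning epochs (placeholder)
net.trainParam.max_fail = 6;              % validation failures before stopping (placeholder)
net.trainParam.goal     = 0;              % performance goal (placeholder)
```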

2.6. Assessment of the Applicability of the Network Structure

2.6.1. Robustness Study Methodology

According to the algorithm described in Section 2.4.3, the research involved checking 50 structures of neural networks differing in the number of neurons in the hidden layer. Each of these structures was trained, validated and tested ten times; each such process is called an approach.
The examination of a given network structure is repeated ten times because the initial values of weights and biases at the training stage are deliberately assigned in a random manner, which allows a robustness analysis of the network structure. This procedure is particularly important from the point of view of work stability and repeatability of use of the neural network (see Condition I, Section 2.4.3). If a given network structure, despite changes in the initial values of weights and biases, shows similar results for the network quality indicators, e.g., $MARE$ (Mean Absolute Relative Error), $R$ (Pearson's coefficient) [83] and others, then it can be stated that such a structure is insensitive or only slightly sensitive to the initial conditions. This study is similar to a system stability examination, which, in crude terms, checks whether a change in the initial state of the system's work has an impact on its results. Therefore, if the network structure shows satisfactory insensitivity to changes in the initial values of weights and biases, then it can be considered stable in terms of the studied issue [70]. A similar situation occurs in the case of repeatability of use of the neural network structure: if the structure shows satisfactory insensitivity, and its quality indicators (e.g., $MARE$, $R$, $MSE$ or others) along with its results are satisfactory in terms of the correctness of the whole system operation, then this repeatability occurs.
The examination of the robustness of neural network structures, according to [70], is carried out for the network validation stage. As mentioned above, for the purposes of this study, analyses should be performed using certain indicators, e.g., $R$ (Pearson's correlation coefficient) [84,85], $R^2$ (coefficient of determination) [74], $MSE$ (Equation (8)), $MAE$ (Equation (12)), $SAE$ (Equation (13)), $SSE$ (Equation (14)), $MaxARE$ (Equation (15)), $MARE$ (Equation (16)).
$$MAE = \frac{1}{n} \cdot \sum_{i=1}^{n}\left|y_i - y_{NN_i}\right|$$

$$SAE = \sum_{i=1}^{n}\left|y_i - y_{NN_i}\right|$$

$$SSE = \sum_{i=1}^{n}\left(y_i - y_{NN_i}\right)^2$$

$$MaxARE = \max\left(\frac{\left|y_i - y_{NN_i}\right|}{\left|y_i\right|}\right) \quad \text{or} \quad MaxARE\,\% = MaxARE \cdot 100\%$$

$$MARE = \frac{1}{n} \cdot \sum_{i=1}^{n}\frac{\left|y_i - y_{NN_i}\right|}{\left|y_i\right|} = \frac{1}{n} \cdot \sum_{i=1}^{n}\left|\frac{e_i}{y_i}\right|$$
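The indicators above translate directly into code. The following sketch assumes y (targets) and yNN (network outputs) are vectors of equal length for a single learning stage.

```matlab
% Direct transcription of Equations (8)-(9) and (12)-(16).
e      = y - yNN;                     % Error, Equation (9)
MSE    = mean(e.^2);                  % Equation (8)
MAE    = mean(abs(e));                % Equation (12)
SAE    = sum(abs(e));                 % Equation (13)
SSE    = sum(e.^2);                   % Equation (14)
MaxARE = max(abs(e) ./ abs(y));       % Equation (15)
MARE   = mean(abs(e) ./ abs(y));      % Equation (16)
```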
The decisive factor in choosing indicators is usually the order of magnitude of reference output quantities assigned to the learning process (targets). However, there are sometimes cases [86] in which, in addition to the order of magnitude, the nature and variability of targets are taken into account.
In general, the proposed rule of choosing indicators is to check the order of magnitude of targets and to use the following conditional function:
$$\begin{cases} MaxARE,\ SAE\ \text{or other} & \text{if } \left|y_i\right|_{max} \le 1 \\ MaxARE,\ SAE,\ MAE\ \text{or}\ MARE\ \text{or other} & \text{if } \left|y_i\right|_{max} \le 10 \\ MaxARE,\ SSE,\ MSE\ \text{or other} & \text{if } \left|y_i\right|_{max} > 10 \end{cases}$$

where:
$\left|y_i\right|_{max}$ — maximum absolute target value obtained for all data assigned to the learning process.
This principle is based on the following logic of choice:
If the maximum absolute target value $\left|y_i\right|_{max}$ is:
  • less than 1, then one should use the indicator that contains the Sum of Absolute Errors made by the network;
  • less than 10, then one should select the indicator that contains the average of the Sum of Absolute Errors made by the network. In this case, it is recommended to check how the structure behaves in terms of the indicator selected for the case $\left|y_i\right|_{max} \le 1$;
  • greater than 10, then one needs to choose an indicator that amplifies network Errors by using the exponentiation operation.
It is worth noting that in Equation (17) $MaxARE$ is considered in each case, because this indicator informs whether a given structure is characterized by local minima for the data assigned to the learning process.
Therefore, since the absolute values of the PMV index are less than or equal to 3, the case $\left|y_i\right|_{max} \le 10$ should be used for the robustness analysis. The results of this analysis for the data from Section 2.5.1 are presented in Section 3.1.
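As an illustration, the robustness assessment for the validation stage can be rendered as a boxplot as follows. MaxARE_VAL is an assumed 10 × 50 matrix collected during the runs of the Figure 3 algorithm; boxplot requires the Statistics and Machine Learning Toolbox.

```matlab
% Robustness study: distribution of a validation-stage indicator across the
% ten approaches of every structure s1 = 1..50 (cf. Figure 5).
% MaxARE_VAL(approach, s1) is assumed to be filled during the Figure 3 loops.
boxplot(MaxARE_VAL, 'Labels', string(1:50));
xlabel('s^1 (number of neurons in the hidden layer)');
ylabel('MaxARE for the validation stage');
% A narrow interquartile range for a given s1 indicates insensitivity to the
% random initial weights and biases, i.e., a robust structure.
```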

2.6.2. Methodology of Overfitting and Underfitting Study

After performing the robustness analysis, one should conduct a study of the compliance of the function mapping for data that were not considered in the network training or validation process. This study is carried out for the testing stage and is aimed at checking whether a specific neural network structure, as well as a given approach, models the phenomenon in accordance with reality, or whether overfitting or underfitting takes place [42]. The occurrence of overfitting means that the network memorizes the data, thus losing its ability to correctly interpret data other than those assigned to the network training stage. In turn, the occurrence of underfitting indicates that the network model is too simplified for the analyzed issue.
To verify whether there is overfitting or underfitting for a given structure or a specific approach, one should, just as in the robustness study, check the order of magnitude of the targets $y_i$. Then, based on this verification, indicators informing about the occurrence of these phenomena are selected. The proposed rule for selecting the indicators is analogous to that of the robustness study (Equation (18)). It differs essentially only in that $MaxARE$ is not used in the overfitting and underfitting analysis, and in that the indicators are calculated using data that are not involved in the training or validation stages.

$$\begin{cases} SAE\ \text{or other} & \text{if } \left|y_i\right|_{max} \le 1 \\ SAE,\ MAE\ \text{or}\ MARE\ \text{and other} & \text{if } \left|y_i\right|_{max} \le 10 \\ SSE,\ MSE\ \text{or other} & \text{if } \left|y_i\right|_{max} > 10 \end{cases}$$

Therefore, since the absolute values of the PMV index are less than or equal to 3, the case $\left|y_i\right|_{max} \le 10$ should be used for the overfitting and underfitting study. The results of this analysis for the data from Section 2.5.1 are presented in Section 3.2.
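A corresponding test-stage screening can be sketched as follows. SAE_TEST is an assumed 10 × 50 matrix gathered during the identification runs, and the acceptance threshold is purely illustrative; the article itself assesses the IQR visually from the boxplots.

```matlab
% Rough screening for overfitting on the testing stage (cf. Figure 8).
% SAE_TEST(approach, s1) gathered during the identification runs.
iqrPerStructure = iqr(SAE_TEST);                      % IQR of SAE across the ten approaches
threshold       = 2 * median(iqrPerStructure);        % illustrative acceptance threshold
suspect         = find(iqrPerStructure > threshold);  % structures with a too-wide IQR
% A wide IQR (and widely spread extremes) on test data signals overfitting;
% uniformly large SAE with a narrow IQR would instead suggest underfitting.
```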

3. Results

The results shown in this chapter cover the examination of 500 neural networks (50 structures, each trained in 10 approaches). The purpose of this examination was to find the best network structure with one hidden layer and to select the best network for modelling the PMV index. This goal was achieved for Criterion 1 (Equation (5)); an identical analysis can be performed for Criterion 2 (Equation (7)). This chapter presents the complete procedure of assessing a network for its applicability (see Section 2.4.3, Conditions I and II, and Section 2.6). Section 3.1 describes the assessment of Condition I, while Section 3.2 covers the assessment of Condition II.

3.1. Robustness Study of the Examined Neural Network Structures

Figure 5 presents the $MaxARE$ values (Equation (15)) of the considered neural network structures for the studied issue. For the purposes of the robustness test, this figure is presented in the form of a boxplot [75]. The number $s^{1}$ (the number of neurons in the hidden layer) is drawn on the abscissa axis, while the $MaxARE$ values obtained from the individual approaches are shown on the ordinate axis.
In Figure 5, it can be seen that the structures with $s^{1} \le 3$ or $s^{1} \ge 37$ are characterized by sensitivity to changes in the initial values of weights and biases, so they will not be suitable for use.
Due to the high (compared to the others) values of the robustness assessment indicators obtained for the structures with $s^{1} \le 3$ and $s^{1} \ge 37$, these structures are omitted from further analysis in order to present the results more clearly.
Figure 6 and Figure 7 present the results of the $SAE$ and $MAE$ indicators, respectively. In these figures, as before, the number $s^{1}$ is drawn on the abscissa axis.
Based on the results obtained from the boxplots (Figure 6 and Figure 7), it can be stated that the structures with $s^{1} = 5, 10, 11, 16, 19, 21\text{–}23, 26, 36$ show a satisfactory insensitivity to the influence of the initial conditions assigned during the learning process. As can be seen, the indicated structures are characterized by a narrow interquartile range (IQR) [87,88] compared to the other IQR ranges presented in the figures. The remaining structures are sensitive to the initial conditions and have a too-wide IQR [87,88] or too-large discrepancies in the results with respect to the median [87,88]. This is especially visible for the structures with $s^{1} = 4, 6, 7, 8, 9, 15, 17, 18, 24, 27, 29\text{–}35$ in Figure 6 and $s^{1} = 4, 7, 8, 9, 15, 17, 24, 27, 33\text{–}34$ in Figure 7.

3.2. Study of Overfitting and Underfitting of the Examined Neural Network Structures

Figure 8 shows a boxplot illustrating the S A E results (22) obtained for the neural networks testing stage. These results were ranked according to the complexity of the network architecture from the one with 4 neurons in the hidden layer ( s 1 = 4 ) to the one with the largest value analyzed ( s 1 = 50 ) . Structures with s 1 3 are not drawn in this figure due to the significant impact of the underfitting phenomenon for these structures. This means that these structures cannot be used for the analyzed data. Therefore, the analysis of the issue in question will be considered without these structures to allow for a clearer demonstration of results. As proof of this fact, in Appendix C there is a boxplot (Figure A5) illustrating the S A E results (22) obtained for the neural networks testing stage for the range 1 s 1 50 .
Based on the results presented in Figure 8, it can be seen that the structures with s 1 41 have a much wider IQR [87,88] than other IQR ranges. This means that the influence of the overfitting phenomenon for the above-mentioned structures is significant, so they cannot be used. The following figures for the present study were drawn without the involvement of these structures, in order to improve the visibility of the results.
Figure 9 and Figure 10 present, respectively, the S A E (Equation (13)) and M A E (Equation (12)) results obtained for the neural network testing stage without taking into account the structures with s 1 3 and s 1 41 .
These figures show that the structures affected by overfitting are those with s 1 = 4 ,   6 ,   7 ,   8 ,   9 ,   15 ,   17 ,   18 ,   24 ,   27 ,   29 ,   30 ,   33 ,   34 or s 1 37 . Therefore, these structures will not be taken into account in the identification of the best possible structure according to the criterion described by Equation (5).
A much broader IQR can be observed for the indicated structures [87,88] than other IQR ranges. There are also large discrepancies between the maximum and minimum values of S A E and M A E , exceeding the first and third quartile. It means that in such a case we are dealing with the phenomenon of overfitting. If the aforementioned maximum and minimum values were not so differentiated, it would characterize the phenomenon of underfitting.
In conclusion, the structures for which the impact of overfitting and underfitting remains at a satisfactory level [87] are those with s¹ = 5, 10–14, 16, 19–23, 25, 26, 28, 31, 32, 35, 36. These structures are taken into account in the identification of the best network structure according to the criterion described by Equation (5).
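The combined over/underfitting screen can be expressed in the same style. The following Python sketch works on synthetic arrays; in the article itself the decision is made from the boxplots, and all names and thresholds below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
sae_train = rng.gamma(2.0, 0.04, size=(50, 10))              # training-stage SAE
sae_test = sae_train + rng.gamma(2.0, 0.02, size=(50, 10))   # testing-stage SAE

p25, p75 = np.percentile(sae_test, [25, 75], axis=1)
iqr_test = p75 - p25
spread = sae_test.max(axis=1) - sae_test.min(axis=1)

# Underfitting: the error stays large at every stage; overfitting: the
# test-stage error is unstable (wide IQR, large max-min spread).
underfit = sae_train.mean(axis=1) > 3 * np.median(sae_train)
overfit = (iqr_test > 2 * np.median(iqr_test)) | (spread > 2 * np.median(spread))

usable = ~(underfit | overfit)
print("structures passing both screens (s1):", np.flatnonzero(usable) + 1)
```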

3.3. Identification of the Best Network Structure and the Best Approach

The results of the robustness study (Section 3.1) and of the overfitting and underfitting study (Section 3.2) lead to the conclusion that, among the examined structures, the ones with s¹ = 5, 10, 11, 16, 19, 21–23, 26, 36 and their approaches model the PMV index with sufficient accuracy (Section 3.2) and stability (Section 3.1).
This statement results from the conjunction of two points:
  • the fact, described in Section 3.1, that the structures with s¹ = 5, 10, 11, 16, 19, 21–23, 26, 36 are sufficiently insensitive to changes in the initial weights and bias values of the neural networks at the network training stage;
  • the fact, described in Section 3.2, that the impact of underfitting or overfitting is acceptable or negligible for the structures with s¹ = 5, 10–14, 16, 19–23, 25, 26, 28, 31, 32, 35, 36.
In view of the above, the identification of the best possible network structure and the best approach according to the selection criterion described by Equation (5) was carried out for the structures with s¹ = 5, 10, 11, 16, 19, 21–23, 26, 36.
Figure 11 presents a boxplot of the MaxARE_TEST results (Equation (5)) obtained for the selected structures. It shows that the smallest MaxARE_TEST values were obtained for the structure with s¹ = 5; therefore, this structure was indicated as the best for the analyzed selection criterion. This result is confirmed by comparison with the results of the other networks in Figure 11.
Table 4 shows the detailed MaxARE_TEST values obtained for the structures with s¹ = 5. The identified best approach, which meets the main criterion for selecting the best structure and network (Equation (5)), is indicated in the table.
The values in Table 4 show that the network meeting the main selection criterion (Equation (5)) commits, in the worst case on the data assigned to the test stage, a Relative Error smaller than or equal to 1.8%. The result obtained with the procedure presented in the article can be considered very good. However, one should bear in mind that the obtained MaxARE_TEST ≤ 1.8% does not apply to the full range of data assigned to the learning process. Therefore, when modelling the PMV index, one should check the results for the full range of data, as shown in the next section.
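The selection criterion of Equation (5), namely taking the approach whose worst-case absolute relative error on the test data is smallest, is compact enough to state in code. A Python sketch follows; the synthetic targets and the ten candidate prediction vectors are illustrative stand-ins for the trained networks:

```python
import numpy as np

def max_are(y_true, y_pred):
    """Maximum Absolute Relative Error, the selection criterion of Eq. (5)."""
    return np.max(np.abs((y_true - y_pred) / y_true))

rng = np.random.default_rng(2)
y_test = rng.uniform(-3.0, 3.0, size=300)       # stand-in PMV targets
y_test = y_test[np.abs(y_test) > 0.1]           # keep targets away from zero
networks = [y_test * (1 + rng.normal(0.0, 0.01, y_test.size)) for _ in range(10)]

scores = [max_are(y_test, y_hat) for y_hat in networks]
best = int(np.argmin(scores))
print(f"best approach: {best + 1}, MaxARE_TEST = {scores[best]:.3%}")
```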

3.3.1. The Best Identified Neural Network and PMV Index Mathematical Model

The learning process of the identified best neural network (Equation (5)) is presented in Figure 12 and Figure 13. These figures show, respectively, the calculated values of the network performance function for the individual learning stages (Equation (8)) and the network learning parameters, both plotted against the learning epochs.
Figure 12 shows that the neural network obtained its best results at the 356th learning epoch; the results for this epoch are therefore the end result of the learning process. It can also be seen that no significant improvement in the value of the performance function MSE (Equation (8)) is observed for the training stage above roughly the 150th epoch, while some improvement is still noticeable for the validation and testing stages. This suggests that overfitting or underfitting does not occur or is negligible.
The “validation checks” chart in Figure 13 shows that the learning process was carried out without frequent increases in the value of the network performance function MSE for the validation stage; as a result, the chart mostly shows a zero value. Non-zero values with an upward trend in this chart would mean that, for the analyzed structure, the data assigned to the learning process lead the network performance function into local minima. Apparently, this is not the case for the best identified structure. This outcome is, among others, a consequence of the robustness test (Section 3.1).
Figure 13 also shows that, from around epoch 50, the Gradient was of the order of magnitude 10⁻⁵. This means that slight changes in the weights and bias values during the training stage still improved the learning process for the validation stage; the same can be observed in Figure 12. Figure 13 additionally shows the changes in momentum (Mu) for each successive learning epoch: its value decreased as the number of learning epochs increased, which means that the learning process was carried out correctly [66].
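The stopping logic visible in Figure 13 (a minimum-gradient limit, a counter of consecutive validation failures, momentum-based updates) can be illustrated with a toy gradient-descent loop. This is only a sketch on a linear model using the limits of Table 3, not the training algorithm actually used for the PMV networks:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 2))
y = X @ np.array([1.5, -0.7]) + 0.01 * rng.normal(size=300)
Xt, yt, Xv, yv = X[:200], y[:200], X[200:], y[200:]

w, v = np.zeros(2), np.zeros(2)
lr, momentum = 0.01, 0.9                 # learning parameters from Table 3
best_w, best_val, fails = w.copy(), np.inf, 0

for epoch in range(100_000):             # maximum number of epochs (Table 3)
    grad = 2 * Xt.T @ (Xt @ w - yt) / len(yt)    # gradient of the training MSE
    v = momentum * v - lr * grad                 # momentum-based update
    w = w + v
    val_mse = np.mean((Xv @ w - yv) ** 2)
    if val_mse < best_val:                       # validation MSE improved
        best_val, best_w, fails = val_mse, w.copy(), 0
    else:
        fails += 1                               # one "validation check"
    if fails >= 12 or np.linalg.norm(grad) < 1e-10:   # limits from Table 3
        break

w = best_w   # keep the weights from the epoch with the lowest validation MSE
print(f"stopped at epoch {epoch}, best validation MSE = {best_val:.2e}")
```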
Figure 14 presents a histogram of the Errors e_i made by the network (Equation (9)), calculated for the data assigned to the particular stages of the network learning process (training, validation, test). The abscissa axis shows the values of the network Errors e_i assigned to a given bin; the ordinate axis indicates the number of instances of e_i falling within the range of a given bin. The figure was created in accordance with the guidelines given in [74]. It can be seen that the data were assigned to the specific learning stages in a regular manner (Section 2.5.1), and that the vast majority of the data were characterized by an Error around 0.0015. From the point of view of the quality of the modelled phenomenon, this figure is purely illustrative, since the values of e_i (Errors) are not related to the y_i (targets).
Therefore, Figure 15 presents a histogram of the Relative Errors RE calculated in accordance with Equation (19). The ordinate axis indicates the number of RE occurrences falling within the range of a given bin. Figure 15 was drawn for 40 bins.
RE = (y_i − y_ANN,i) / y_i    (19)
Looking at Figure 14 and Figure 15, one can state that both histograms are approximately Gaussian; the first is right-skewed, while the second is left-skewed. Such a situation may occur when the range of data assigned to the learning process includes both negative and positive numbers, as is the case in PMV index modelling. It is worth noting that, on the basis of Figure 15, similarly to Figure 14, it can be stated that the data for the particular learning stages were assigned regularly (Section 2.5.1). The results in Figure 15 also show that the Relative Error for most of the data was around −0.001 and fluctuated around this value within ±0.002.
Figure 16 presents a chart illustrating for which of the data samples assigned to the NN the Relative Error (Equation (19)) occurred. The figure shows that the occurrence of RE was stochastic in nature. Particular attention should be given to the two samples for which RE fell in the range 0.1 < RE < 0.15. This shows that, over the full range of data assigned to the learning process, the neural network models the PMV index with MaxARE% < 15%. It also reflects the fact that real measurement data were used to model the PMV index: in some situations the model needs more than 15 min (the sampling time of the measurement data) to adjust, for example when more people are present in the building, such as during large employee meetings or the arrival of a school trip. It is worth noting that in each of these two cases the PMV index model stabilizes afterwards with MaxARE% < 4%. The occurrence of the two samples with MaxARE% > 4% is marginal, representing 0.0188% of the data assigned to the learning process. Therefore, from the point of view of the described NN model of the PMV index, the impact of the samples generating errors MaxARE% > 4% can be neglected, and the model itself can be treated as one characterized by MaxARE% < 4% (precisely, MaxARE% < 3.73%).
In addition to the analysis of Errors and Relative Errors, when assessing the possibility of using a neural network one ought to perform regression calculations of R (Pearson's correlation coefficient) [74], on the basis of which the quality of the model fit to the data assigned to the network learning process is assessed. An indicator of this quality is the coefficient of determination R² [74].
Figure 17 presents the results of the R regression calculations obtained for the discussed network. It shows four charts: Training, Validation, Testing and All data. The first three relate to the results obtained for the data assigned to the respective stages of network learning, while the fourth presents the results for all data assigned to the learning process. The charts plot y_NN,i (outputs) against y_i (targets). It can be seen that R for all stages of the learning process was almost equal to 1. It can also be noticed that at the training stage there were two samples to which the network did not adjust with the same accuracy as to the others; these are the two samples highlighted in the description of Figure 16.
Interpreting the results of the R regression calculations [74], it can be stated that the output values y_NN,i are very strongly correlated with the targets y_i. In the case of the validation and testing stages, the correlation is almost perfect [74].
Based on the results of the R regression (Figure 17), the values of the coefficient of determination R² were calculated; they are shown in Table 5. It can be concluded from them that the quality of the fit of the PMV index model to the data assigned to the learning process, as well as to all its stages, is at a very good level. It is noteworthy that, compared with a perfect fit (R² = 1), the worst case shown in the table falls short by less than 0.00002. This is proof of well-conducted experiments, proper selection of the network architecture and correct selection of the learning method.
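For completeness, a sketch of how R and R² are obtained from an output/target pair (the standard Pearson formula via numpy; the synthetic data below merely mimic a near-perfect fit):

```python
import numpy as np

rng = np.random.default_rng(4)
y = rng.uniform(-3.0, 3.0, size=500)        # stand-in PMV targets
y_nn = y + rng.normal(0.0, 0.003, y.size)   # near-perfect network outputs

r = np.corrcoef(y, y_nn)[0, 1]   # Pearson's correlation coefficient R
r2 = r ** 2                      # coefficient of determination R^2
print(f"R = {r:.5f}, R^2 = {r2:.5f}")
```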
To illustrate the described mapping quality, Figure 18 shows an example of the network responses y_NN,i (outputs) against the background of y_i (targets), drawn for the arguments x_1 and x_2 (Table 1). It can be seen that the y_NN,i response values largely overlap with y_i, and that the differences between them are marginal, which confirms the results described in this section.
The results described in this section relate to the identified PMV index mathematical model, described by Equation (20). This equation is a special form of Equation (2), differing in that in Equation (20):
  • the dimensions of the matrices are specified and noted in their subscripts;
  • the elements of the matrices are the identified numerical values.
y = denorm_mapminmax( f²_activ( W²[1×5] · f¹_activ( W¹[5×20] · norm_mapminmax(X) + B¹[5×1] ) + b²[1×1] ) )    (20)
Details of Equation (20) are included in Appendix D (Equation (A1)).
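Evaluating Equation (20) amounts to a single forward pass. The Python sketch below assumes mapminmax normalization to [−1, 1], a tanh (tansig-like) hidden layer and a linear output layer, which is a common configuration; the actual f¹_activ and f²_activ of the identified network may differ, and the weights here are random placeholders, not the values of Appendix D:

```python
import numpy as np

def norm_mapminmax(x, x_min, x_max):
    """Map each input to [-1, 1], in the manner of MATLAB's mapminmax."""
    return 2 * (x - x_min) / (x_max - x_min) - 1

def denorm_mapminmax(y, y_min, y_max):
    """Inverse mapping from [-1, 1] back to the original target range."""
    return (y + 1) * (y_max - y_min) / 2 + y_min

def pmv_nn(x, W1, B1, W2, b2, x_min, x_max, y_min, y_max):
    """Forward pass of Eq. (20): one tanh hidden layer, linear output (assumed)."""
    n = norm_mapminmax(x, x_min, x_max)   # 20 x 1 normalized inputs
    h = np.tanh(W1 @ n + B1)              # 5 x 1 hidden-layer activations
    return denorm_mapminmax(W2 @ h + b2, y_min, y_max)

rng = np.random.default_rng(5)
W1, B1 = rng.normal(size=(5, 20)), rng.normal(size=(5, 1))  # placeholder weights
W2, b2 = rng.normal(size=(1, 5)), rng.normal(size=(1, 1))
x = rng.uniform(size=(20, 1))             # one sample of the 20 input arguments
x_min, x_max = np.zeros((20, 1)), np.ones((20, 1))
print(pmv_nn(x, W1, B1, W2, b2, x_min, x_max, -3.0, 3.0))
```

With the identified weight matrices of Appendix D substituted for the placeholders, this function would return the modelled PMV value for a given input vector.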

4. Discussion

The research and procedures described in this article show the full process of modelling and testing the PMV index using feedforward neural networks with one hidden layer. The paper describes when neural networks with one hidden layer can be used to model the PMV index, and when a deep learning approach should be used instead.
The article presents the procedure for identifying the best structure and the best neural network, which is the mathematical model of the PMV index in terms of the selected evaluation criterion (Section 2.4.3). It also indicates the situations determining the choice of this criterion.
Afterwards, an example study on identifying the best structure and the best network was presented along with a detailed description. The example was shown in the most general form possible, so that it can serve as a tutorial. For this example, data from a previously examined real object (Section 2.5.1) were used (see the case studies in [23,82]).
The procedure for identifying the best possible network structure is divided into three parts (Section 3.1, Section 3.2, Section 3.3). The first two, the robustness study (Section 3.1) and the overfitting and underfitting study (Section 3.2), were designed to check which structures are suitable for use. The third part (Section 3.3) describes the selection of the best structure and network in terms of the best fit of the PMV index model to its real equivalent (Section 2.4.3, Equation (5)). At the end of the modelling process, the best identified neural network was presented along with its results.
The PMV modelling procedure presented in this article makes it possible to fill the gap identified by the scientific community in comfort studies related to energy consumption in buildings [1,2,3,5,7,10,11,12,13,22,27,28,29,30,31,32,34,35,45,46,47]. The neural networks used in it introduce flexibility into the formulation of models describing comfort: the input arguments of a mathematical model describing a specific kind of comfort can be adapted to the specific data obtained in a given region of the world for a given group of people. Thanks to this approach, mathematical models using neural networks enable the improvement of individual comfort, resulting in energy savings in the construction sector. It should be noted, however, that the energy savings offered by the described solution are of primary importance for buildings that already exist. This is due to the necessity of obtaining the data assigned to the learning process, which are most reliable in the case of real objects; hence the proposed solution is directed precisely at such objects. It is also possible to model the PMV index with neural networks using metadata; such PMV models should show better consistency of results, owing to the simplifying assumptions of the models from which the metadata are derived, but they will be less reliable than models trained on data from a real object.
The procedure for identifying the best PMV index model described in the article covered, for the analyzed example, the range of the number of neurons in the hidden layer 1 ≤ s¹ ≤ 50. This range was chosen to show that, above a certain number of neurons in the layer, the network is no longer suitable for use; in the analyzed example, this took place for s¹ ≥ 37.
The procedure is characterized by an almost perfect quality of model fit (Section 3.3.1). This follows from the interpretation of the value of the coefficient of determination R² = 0.99998 [74], as well as from the value MSE = 0.0000017 (Equation (8)) obtained for all data assigned to the learning process. It is also worth mentioning that the identified model can be treated as one characterized by MaxARE% < 4% (precisely, MaxARE% < 3.73%).
These results confirm the correct choice of the learning parameters, the performance function and the transfer functions implemented in the structure. The values of these parameters and the types of functions were selected based on the indications from [42,73]. When choosing them for PMV index modelling, one should be aware that the choice depends on the specifics of the data supplied to the NN and cannot be clearly indicated before examining the data, owing to the diverse functionality of buildings and external conditions.
The described assessment of network applicability (Section 2.6, Section 3.1 and Section 3.2) proved necessary for modelling the PMV index with the use of NNs. The robustness research (Section 3.1) showed that the vast majority of the analyzed network structures are sensitive to changes in the initial weights and bias values assigned at the training stage. A similar effect was observed in the overfitting and underfitting studies (Section 3.2). As a consequence, it turned out that among the 50 analyzed structures (500 networks), only 20% were suitable for application (Section 3.3). This means that, without such an assessment, there was as much as an 80% chance of choosing a wrong structure for the analyzed data.
The results obtained for the best identified NN structure, compared with those obtained in other studies [22,40,41], prove that this identification is needed. For example, in [22] satisfactory accuracy was obtained for 92.9% of the data used for NN modelling of the PMV index, while in the current study this accuracy is above 99.9% (precisely, 99.9812%). Moreover, the best regression result presented in [40] was R = 0.976, so the goodness of fit of that model was at the level of R² = 0.952. Compared with the R² = 0.99998 obtained from the best structure identification method (Section 2.4.3 and Section 2.6), this means that, thanks to the method proposed in the article, we are able to describe individual thermal comfort more precisely than before; this statement, however, applies only to NNs with one hidden layer. The same conclusion can be drawn for [41], where the difference in R² between the model presented there and the NN obtained in this article was much greater. Therefore, the use of the identification method gave better results than the similar studies [22,40,41] in each case.
The above comparative results are relevant for national energy policies, energy saving strategies and sustainability in real estate. The accuracy of the simulation of a building's thermal performance has a significant impact on energy costs, energy consumption and greenhouse gas emissions [89]. Indeed, it was found that "to overcome these issues, an appropriate thermal comfort model is needed to determine and measure the accurate and precise value of thermal performance" [89]. As the results obtained in the article show, such an appropriate thermal comfort model can be achieved thanks to the proposed method (Section 2.4.3 and Section 2.6). Nowadays, thanks to the use of an appropriate thermal comfort model to predict a building's energy consumption, the time essential for cooling or heating can be decreased by almost 20% [89]; increasing the accuracy of the model to the level presented in the article should improve this result even further. Other applications of PMV models formulated with NNs that affect energy consumption are shown in [22,28,38,90,91].

5. Conclusions and Future Research Program

The main results of the research can be summarized as follows:
  • The method presented in the article enables filling the gap identified by the scientific community in comfort studies related to energy consumption in buildings.
  • There are two approaches to filling the identified gap in the case of NNs with one hidden layer (Section 2.4.3): the first aims at the best quality of the model fit (Equation (5)), and the second takes into account the quality of the fit together with the minimum complexity of the NN model (Equation (7)).
  • When designing the PMV index model using NNs with one hidden layer, it is necessary to perform a robustness study (Section 3.1) along with an overfitting and underfitting study (Section 3.2). Otherwise, there is a high likelihood (in the analyzed case, about 80%) that the NN will not be usable.
  • NNs with one hidden layer enable PMV index modelling with almost perfect quality of model fit as long as the best structure identification method is used (Section 2.4.3 and Section 2.6).
  • The use of the identification method (Section 2.4.3 and Section 2.6) compared to similar studies with NNs with one hidden layer gave better results in each case.
  • The method presented in the article (Section 2.4.3 and Section 2.6) makes it possible to formulate the equation (Equations (20) and (A1)) characterizing individual thermal comfort for the object under study in terms of its basic functionality.
As for the future research program:
The research presented in this article is intended to open a thematically coherent series of papers on how to characterize the indoor environment in smart buildings. In this first part of the cycle, the PMV index modelling method was presented, along with a description of how to identify the best neural network with one hidden layer. The objective of this article, from the point of view of the complete cycle, is to show that when a building is used for its intended purpose, an NN with one hidden layer yields an almost perfect quality of model fit for the PMV index.
Therefore, for such use of the building, it is sufficient to apply classic NNs with one hidden layer. An advantage of this approach is that the model obtained from the presented method (Section 2.4.3 and Section 2.6) does not take into account incidental uses of the building, such as repairs, renovation of rooms, etc.; the model characterizes individual thermal comfort for the object under study in terms of its basic functionality. Additionally, thanks to the identified equation (Appendix D) derived from the method of identifying the best network structure, it is possible to interpret the impact of the network input arguments on the PMV index describing individual thermal comfort. This interpretation is based on the assessment of the weight and bias values of this equation (Appendix D). This advantage derives from the fact that models formulated using an NN with one hidden layer are much simpler than those designed using DNNs.
On the other hand, the disadvantage of this method is the need to properly select data for the network.
In the next parts of the publication cycle, the second approach using DNN for the described topic will be presented, as well as a comparative analysis of the use of NNs and DNNs.

Funding

The funding for research and publication was received from the Cracow University of Technology (Kraków, Poland).

Acknowledgments

The author would like to express his gratitude to A.M. Stręk from Cracow University of Technology for the given advice, and to Jared Langevin from Drexel University who made available the data for NN.

Conflicts of Interest

The author declares no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A. The Histogram of All Input Data and Reference Output Values (Targets) Assigned to the Learning Process with the Division into Learning Stages

Figure A1. The histogram of all input data values assigned to the learning process with the division into learning stages.
Figure A2. The histogram of all reference output values (targets) assigned to the learning process with the division into learning stages.

Appendix B. Results of the Selection of Network Learning Parameters

Figure A3 and Figure A4 were created for the network with s¹ = 5, that is, for the network structure identified as the best.
Figure A3 shows the results of the implemented performance function (MSE), calculated for all data assigned to the NN learning process, as a function of the learning rate (Table 3). The best result, MSE = 0.0000017, was obtained for a learning rate of 0.01.
Figure A3. The learning rate selection results for the NN learning process obtained for the range from 0.005 to 0.5. The best learning rate was obtained for the value 0.01.
Figure A4 shows the results of the implemented performance function (MSE), computed for all data assigned to the NN learning process, as a function of the momentum (Table 3). The best result, MSE = 0.0000017, was obtained for a momentum of 0.9.
Figure A4. The results of the momentum selection for the NN learning process obtained for the range from 0.1 to 3. The best momentum was obtained for the value of 0.9.
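The parameter sweeps of Figures A3 and A4 amount to retraining the s¹ = 5 network over a grid of one learning parameter while the other is fixed, and keeping the value with the lowest MSE. A Python sketch of this procedure follows; train_and_score is a hypothetical stand-in for the real retraining runs, with a synthetic surface whose minimum lies near (0.01, 0.9) for illustration only:

```python
import numpy as np

def train_and_score(learning_rate, momentum):
    """Placeholder for retraining the s1 = 5 network and returning the MSE
    over all learning data (synthetic surface, illustration only)."""
    return (1.7e-6
            + (np.log10(learning_rate / 0.01)) ** 2 * 1e-6
            + (momentum - 0.9) ** 2 * 1e-6)

learning_rates = np.geomspace(0.005, 0.5, 15)   # range swept in Figure A3
momenta = np.linspace(0.1, 3.0, 15)             # range swept in Figure A4

mse_lr = [train_and_score(lr, 0.9) for lr in learning_rates]
mse_mu = [train_and_score(0.01, mu) for mu in momenta]
print("best learning rate:", float(learning_rates[int(np.argmin(mse_lr))]))
print("best momentum:", float(momenta[int(np.argmin(mse_mu))]))
```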

Appendix C

Figure A5. Sum of Absolute Errors obtained for the neural network structure with a certain number of neurons in a hidden layer. Results obtained for the tests stage.

Appendix D

Equation (20), with values rounded to the fourth decimal place, takes the form:

y = denorm_mapminmax( f²_activ( W²[1×5] · f¹_activ( W¹[5×20] · norm_mapminmax(X) + B¹[5×1] ) + b²[1×1] ) )    (A1)

where

W¹[5×20] = [0.6626 0.1349 0.0214 0.1159 0.0008 0.0035 0.0022 0.1387 0.0480 0.8994 0.3612 0.0012 0.0016 0.0014 0.5778 0.0758 0.5964 0.2213 0.0010 0.0008 0.0005 0.6094 0.1102 0.0815 0.1577 0.0003 0.0005 0.0005 0.9378 0.4218 0.1802 0.4559 0.0114 0.0157 0.0210 0.0006 0.5462 0.4027 0 0 0 0 0 0.0010 0.4512 0.0613 0 0 0 0 0 0.0035 0.4847 0.1238 0 0 0 0 0.0003 0.0011 0.4844 0.2714 0 0 0 0 0.0001 0.0082 0.3584 0.4755 0 0 0 0 0.0018 0.0358 0.0008 0.0125 0.0006 0.0078 0.0429 0.0008 0.8470 0.0007 0.0037 0.1601 0.0017 0.2521 0.0015 0.0059 0.0433 0.0004 0.0031 0.0003 0.0009 0.1551 0.0073 0.0834 0.0096 0.0555]

B¹[5×1] = [1.4125 0.8917 1.2060 0.0269 0.9763]ᵀ

W²[1×5] = [1.5541 1.6641 0.9461 1.3792 0.0773]

b²[1×1] = 0.8980

References

  1. Romanska-Zapala, A.; Bomberg, M.; Dechnik, M.; Fedorczak-Cisak, M.; Furtak, M. On Preheating of the Outdoor Ventilation Air. Energies 2020, 13, 15. [Google Scholar] [CrossRef] [Green Version]
  2. Romanska-Zapala, A.; Furtak, M.; Fedorczak-Cisak, M.; Dechnik, M. Cooperation of a Horizontal Ground Heat Exchanger with a Ventilation Unit During Winter: A Case Study on Improving Building Energy Efficiency. In Proceedings of the 3rd World Multidisciplinary Civil Engineering, Architecture, Urban Planning Symposium (Wmcaus 2018), Prague, Czech Republic, 18–22 June 2018; pp. 1757–8981. [Google Scholar]
  3. Romanska-Zapala, A.; Furtak, M.; Fedorczak-Cisak, M.; Dechnik, M. Need for Automatic Bypass Control to Improve the Energy Efficiency of a Building through the Cooperation of a Horizontal Ground Heat Exchanger with a Ventilation Unit during Transitional Seasons: A Case Study. In Proceedings of the 3rd World Multidisciplinary Civil Engineering, Architecture, Urban Planning Symposium (Wmcaus 2018), iop Conference Series-Materials Science and Engineering, Prague, Czech Republic, 18–22 June 2018; pp. 1757–8981. [Google Scholar]
  4. Romanska-Zapala, A.; Furtak, M.; Dechnik, M. Cooperation of Horizontal Ground Heat Exchanger with the Ventilation Unit during Summer—Case Study. In Proceedings of the World Multidisciplinary Civil Engineering-Architecture-Urban Planning Symposium—WMCAUS, Prague, Czech Republic, 12–16 June 2017. [Google Scholar]
  5. Fedorczak-Cisak, M.; Kotowicz, A.; Radziszewska-Zielina, E.; Sroka, B.; Tatara, T.; Barnaś, K. Multi-criteria Optimisation of the Urban Layout of an Experimental Complex of Single-family NZEBs. Energies 2020, 13, 1541. [Google Scholar] [CrossRef] [Green Version]
  6. Fedorczak-Cisak, M.; Kowalska-Koczwara, A.; Nering, K.; Pachla, F.; Radziszewska-Zielina, E.; Śladowski, G.; Tatara, T.; Ziarko, B. Evaluation of the Criteria for Selecting Proposed Variants of Utility Functions in the Adaptation of Historic Regional Architecture. Sustainability 2019, 11, 1094. [Google Scholar] [CrossRef] [Green Version]
  7. Romanska-Zapala, A.; Bomberg, M.; Yarbrough, D.W. Buildings with environmental quality management: Part 4: A path to the future NZEB. J. Build. Phys. 2019, 43, 3–21. [Google Scholar] [CrossRef]
  8. Yarbrough, D.W.; Bomberg, M.; Romanska-Zapala, A. Buildings with environmental quality management, part 3: From log houses to environmental quality management zero-energy buildings. J. Build. Phys. 2019, 42, 672–691. [Google Scholar] [CrossRef]
  9. Romanska-Zapala, A.; Bomberg, M.; Fedorczak-Cisak, M.; Furtak, M.; Yarbrough, D.; Dechnik, M. Buildings with environmental quality management, part 2: Integration of hydronic heating/cooling with thermal mass. J. Build. Phys. 2018, 41, 397–417. [Google Scholar] [CrossRef]
  10. Yarbrough, D.W.; Bomberg, M.; Romanska-Zapala, A. On the next generation of low energy buildings. Adv. Build. Energy Res. 2019, 1–8. [Google Scholar] [CrossRef]
  11. Bomberg, M.; Romanska-Zapala, A.; Yarbrough, D. Journey of American Building Physics: Steps Leading to the Current Scientific Revolution. Energies 2020, 13, 1027. [Google Scholar] [CrossRef] [Green Version]
  12. Radziszewska-zielina, E.; Śladowski, G. Proposal of the Use of a Fuzzy Stochastic Network for the Preliminary Evaluation of the Feasibility of the Process of the Adaptation of a Historical Building to a Particular Form of Use. IOP Conf. Ser. Mater. Sci. Eng. 2017, 245, 072029. [Google Scholar] [CrossRef]
  13. Romanska-Zapala, A.; Bomberg, M. Can artificial neuron networks be used for control of HVAC in environmental quality management systems? In Proceedings of the Central European Symposium of Building Physics, Prague, Czech Republic, 23–26 September 2019. [Google Scholar]
  14. Dudzik, M.; Romanska-Zapala, A.; Bomberg, M. A neural network for monitoring and characterization of buildings with Environmental Quality Management, Part 1: Verification under steady state conditions. Energies 2020, 13, 3469. [Google Scholar] [CrossRef]
  15. Bomberg, M.; Romanska-Zapala, A.; Yarbrough, D. Towards Integrated Energy and Indoor Environment Control in Retrofitted Buildings. Energies 2020, 2020070044, Preprints. [Google Scholar]
  16. Klepeis, N.E.; Nelson, W.C.; Ott, W.R.; Robinson, J.P.; Tsang, A.M.; Switzer, P.; Behar, J.V.; Hern, S.C.; Engelmann, W.H. The national human activity pattern survey (NHAPS): A resource for assessing exposure to environmental pollutants. J. Exposure Sci. Environ. Epidemiol. 2001, 11, 231–252. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Frontczak, M.; Wargocki, P. Literature survey on how different factors influence human comfort in indoor environments. Build. Environ. 2011, 46, 922–937. [Google Scholar] [CrossRef]
  18. Heim, D.; Janicki, M.; Szczepanska, E. Thermal and visual comfort in an office building with double skin façade. In Proceedings of the 10th International Conference on Healthy Buildings 2012, Brisbane, Australia, 8–12 July 2012; Volume 2, pp. 1801–1806, ISBN 978-162748075-8. [Google Scholar]
  19. Szczepanska-Rosiak, E.; Heim, D.; Gorko, M. Visual comfort under real and theoretical, overcast and clear sky conditions. In Proceedings of the 13th Conference of the International Building Performance Simulation Association, BS 2013, Chambery, France, 26–28 August 2013; pp. 2765–2773. [Google Scholar]
  20. Wyon, D.P.; Andersen, I.; Lundqvist, G.R. The effects of moderate heat stress on mental performance. Scand. J. Work Environ. Health 1979, 5, 352–361. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  21. World Energy Council. World Energy Resources 2013 Survey. 2013. Available online: http://www.worldenergy.org (accessed on 5 August 2018).
  22. Ma, G.; Liu, Y.; Shang, S. A Building Information Model (BIM) and Artificial Neural Network (ANN) Based System for Personal Thermal Comfort Evaluation and Energy Efficient Design of Interior Space. Sustainability 2019, 11, 4972. [Google Scholar] [CrossRef] [Green Version]
  23. Langevin, J.; Gurian, P.; Wen, J. Tracking the human-building interaction: A longitudinal field study of occupant behavior in air-conditioned offices. J. Environ. Psychol. 2015, 42, 94–115. Available online: http://www.sciencedirect.com/science/article/pii/S0272494415000225 (accessed on 15 April 2020). [CrossRef]
  24. ISO 7730:2005. Ergonomics of the Thermal Environment Analytical Determination and Interpretation of Thermal, Comfort Using Calculation of the PMV and PPD Indices and Local Thermal Comfort Criteria; International Standard Organization: Geneva, Switzerland, 2005; Available online: https://www.researchgate.net/publication/306013139_Ergonomics_of_the_thermal_environment_Determination_of_metabolic_rate (accessed on 10 May 2019).
  25. ISO 17772-1:2017. Energy Performance of Buildings—Indoor Environmental Quality—Part 1: Indoor Environmental Input Parameters for the Design and Assessment of Energy Performance of Buildings; ISO: Geneva, Switzerland, 2017.
  26. Mohamed Sahari, K.S.; Abdul Jalal, M.F.; Homod, R.Z.; Eng, Y.K. Dynamic indoor thermal comfort model identification based on neural computing PMV index. In Proceedings of the 4th International Conference on Energy and Environment 2013 (ICEE2013); IOP Publishing. IOP Conf. Ser. Earth Environ. Sci. 2013. [Google Scholar] [CrossRef]
  27. Sudoł-Szopińska, I.; Chojnacka, A. Determining Thermal Comfort Conditions in Rooms with the PMV and PPD Indices (EN); Określanie Warunków Komfortu Termicznego w Pomieszczeniach za Pomocą Wskaźników PMV i PPD (PL); Bezpieczeństwo Pracy: Nauka i praktyka; Centralny Instytut Ochrony Pracy—Państwowy Instytut Badawczy: Warsaw, Poland, 2007; Volume 5, pp. 19–23. [Google Scholar]
  28. Radziszewska-Zielina, E.; Czerski, P.; Grześkowiak, W.; Kwaśniewska-Sip, P. Comfort of Use Assessment in Buildings with Interior Wall Insulation based on Silicate and Lime System in the Context of the Elimination of Mould Growth. Arch. Civil Eng. 2020, 66, 2. [Google Scholar]
  29. U.S. Department of Energy. Buildings Energy Data Book. 2012. Available online: http://buildingsdatabook.eren.doe.gov/DataBooks.aspx (accessed on 15 August 2018).
  30. Ferreira, P.M.; Sergio, M.S.; Ruano, A.; Negrier, A.; Eusébio, C. Neural Network PMV Estimation for Model-Based Predictive Control of HVAC Systems. In Proceedings of the WCCI 2012 IEEE World Congress on Computational Intelligence, Brisbane, Australia, 10–15 June 2012. [Google Scholar] [CrossRef] [Green Version]
  31. Jin, W.; Feng, Y.; Zhou, N.; Zhong, X. A Household Electricity Consumption Algorithm with Upper Limit. In Proceedings of the 2014 International Conference on Wireless Communication and Sensor Network, Wuhan, China, 13–14 December 2014; WCSN 2014 Organizing Committee: Beijing, China, 2014; pp. 431–434. [Google Scholar]
  32. Dexter, A. Intelligent buildings: Fact or fiction? HVAC&R Res. 1996, 2, 105–106. [Google Scholar]
  33. Radziszewska-Zielina, E.; Rumin, R. Analysis of investment profitability in renewable energy sources as exemplified by a semi-detached house. In Proceedings of the International Conference on the Sustainable Energy and Environment Development, Kraków, Poland, 17–19 May 2016; SEED; AGH; Center of Energy: Kraków, Poland. [Google Scholar]
  34. Ding, Y.; Tian, Q.; Fu, Z.; Li, M.; Zhu, N. Influence of indoor design air parameters on energy consumption of heating and air conditioning. Energy Build. 2013, 56, 78–84. [Google Scholar] [CrossRef]
  35. Romanska-Zapala, A.; Kowalska-Koczwara, A.; Korchut, A.; Stypula, K.; Melcer, J.; Kotrasova, K. Psychomotor conditions of bus drivers subjected to noise and vibration in the working environment. In Proceedings of the MATEC Web of Conferences, Conference on Dynamics of Civil Engineering and Transport Structures and Wind Engineering (DYN-WIND), Trstena, Slovakia, 21–25 May 2017. [Google Scholar] [CrossRef] [Green Version]
  36. Korchut, A.; Korchut, W.; Kowalska-Koczwara, A.; Romanska-Zapala, A.; Stypula, K.; Vestroni, F.B.E.; Romeo, F.; Gattulli, V. The relationship between psychomotor efficiency and selected personality traits of people exposed to noise and vibration stimuli. Procedia Eng. 2017, 199, 200–205. [Google Scholar] [CrossRef]
  37. Korchut, A.; Kowalska-Koczwara, A.; Romanska-Zapala, A.; Stypula, K. Relationship Between Psychomotor Efficiency and Sensation Seeking of People Exposed to Noise and Low Frequency Vibration Stimuli. In Proceedings of the IOP Conference Series-Materials Science and Engineering, World Multidisciplinary Civil Engineering-Architecture-Urban Planning Symposium (WMCAUS), Prague, Czech Republic, 12–16 June 2017. [Google Scholar]
  38. Liu, W.; Lian, Z.; Zhao, B. A neural network evaluation model for individual thermal comfort. Energy Build. 2007, 39, 1115–1122. [Google Scholar] [CrossRef]
  39. Kim, J.; Zhou, Y.; Schiavon, S.; Raftery, P.; Brager, G. Personal comfort models: Predicting individuals’ thermal preference using occupant heating and cooling behavior and machine learning. Build. Environ. 2018, 129, 96–106. [Google Scholar] [CrossRef] [Green Version]
  40. Von Grabe, J. Potential of artificial neural networks to predict thermal sensation votes. Appl. Energy 2016, 161, 412–424. [Google Scholar] [CrossRef]
  41. Buratti, C.; Vergoni, M.; Palladino, D. Thermal comfort evaluation within non-residential environments: Development of Artificial Neural Network by using the adaptive approach data. 6th International Building Physics Conference, IBPC 2015. Energy Procedia 2015, 78, 2875–2880. [Google Scholar] [CrossRef] [Green Version]
  42. Zocca, V.; Spacagna, G.; Slater, D.; Roelants, P. Python Deep Learning; Packt Publishing: Birmingham, UK, 2017; ISBN1 1786464454. ISBN2 978-1786464453. [Google Scholar]
  43. Fanger, P. Thermal Comfort. Analysis and Applications in Environmental Engineering; Danish Technical Press: Copenhagen, Denmark, 1970; Available online: https://www.researchgate.net/publication/35388098_Thermal_Comfort_Analysis_and_Applications_in_Environment_Engeering (accessed on 20 October 2018).
  44. ANSI; ASHRAE. ANSI/ASHRAE 55–2013: Thermal Environmental Conditions for Human Occupancy; American Society of Heating; Refrigerating and Air Conditioning Engineers: Atlanta, GA, USA, 2013. [Google Scholar]
  45. Sharma, A.; Tiwari, R. Evaluation of data for developing an adaptive model of thermal comfort and preference. Environmentalist 2007, 27, 73–81. [Google Scholar] [CrossRef]
  46. Nicol, J.; Humphreys, M. Adaptive thermal comfort and sustainable thermal standards for buildings. Energy Build. 2002, 34, 563–572. [Google Scholar] [CrossRef]
  47. Auenberg, F.; Stein, S.; Rogers, A. A personalised thermal comfort model using a Bayesian Network. IJCAI 2015, 2015, 2547–2553. [Google Scholar]
  48. Van Hoof, J. Forty years of Fanger’s model of thermal comfort: Comfort for all? Indoor Air 2008, 18, 182–201. [Google Scholar] [CrossRef]
  49. Alfano, F.; Palella, B.; Riccio, G. The role of measurement accuracy on the thermal environment assessment by means of PMV index. Build. Environ. 2011, 46, 1361–1369. [Google Scholar] [CrossRef]
  50. Neto, A.; Fiorelli, F.A.S. Comparison between detailed model simulation and artificial neural network for forecasting building energy consumption. Energy Build. 2008, 40, 2169–2176. [Google Scholar] [CrossRef]
  51. Kalogirou, S.; Bojic, M. Artificial neural networks for the prediction of the energy consumption of a passive. Energy 2000, 25, 479–491. [Google Scholar] [CrossRef]
  52. Javeed Nizami, S.; Al-Garni, A. Forecasting electric energy consumption using neural networks. Energy Policy 1995, 23, 1097–1104. [Google Scholar] [CrossRef]
  53. Yokoyama, R.; Wakui, T.; Satake, R. Prediction of energy demands using neural network with model identification by global optimization. Energy Convers. Manag. 2009, 50, 319–327. [Google Scholar] [CrossRef]
  54. Buratti, C.; Barbanera, M.; Palladino, D. An original tool for checking energy performance and certification of buildings by means of Artificial Neural Networks. Appl. Energy 2014, 120, 125–132. [Google Scholar] [CrossRef]
  55. Buratti, C.; Lascaro, E.; Palladino, D.; Vergoni, M. Building behavior simulation by means of Artificial Neural Network in summer conditions. Sustainability 2014, 6, 5339–5353. [Google Scholar] [CrossRef] [Green Version]
  56. Chen, Y.; Liu, J.; Li, X.; Yin, B.; Wu, X. HVAC System Energy Consumption Prediction of Green Office Building Based on ANN Method. Build. Energy Sav. 2017, 10, 1–5. [Google Scholar]
  57. Dutta, N.N.; Das, T. Artificial intelligence techniques for energy efficient H.V.A.C. system design. In Proceedings of the International Conference on Emerging Technologies for Sustainable and Intelligent HVAC&R Systems, Kolkata, India, 27–28 July 2018. [Google Scholar]
  58. Marvuglia, A.; Messineo, A.; Nicolosi, G. Coupling a neural network temperature predictor and a fuzzy logic controller to perform thermal comfort regulation in an office building. Build. Environ. 2014, 72, 287–299. [Google Scholar] [CrossRef]
  59. Palladino, D.; Lascaro, E.; Orestano, F.C.; Barbanera, M. Artificial Neural Networks for the Thermal Comfort Prediction in University Classrooms: An Innovative Application of Pattern Recognition and Classification Neural Network. In Proceedings of the 17th CIRIAF National Congress Sustainable Development, Human Health and Environmental Protection, Perugia, Italy, 6–7 April 2017. [Google Scholar]
  60. Zhao, Y.; Genovese, P.; Li, Z. Intelligent Thermal Comfort Controlling System for Buildings Based on IoT and AI. Future Internet 2020, 12, 30. [Google Scholar] [CrossRef] [Green Version]
  61. Ghahramani, A.; Galicia, P.; Lehrer, D.; Varghese, Z.; Wang, Z.; Pandit, Y. Artificial Intelligence for Efficient Thermal Comfort Systems: Requirements, Current Applications and Future Directions. Front. Built Environ. 2020. [Google Scholar] [CrossRef]
  62. I.O. for Standardization (ISO). ISO 7730: Moderate Thermal Environments—Determination of the PMV and PPD Indices and Specification of the Conditions for Thermal Comfort; International Organization for Standardization: Geneva, Switzerland, 1994. [Google Scholar]
  63. Kang, J.; Kim, Y.; Kim, H.; Jeong, J.; Park, S. Comfort sensing system for indoor environment. In Proceedings of the International Conference on Solid State Sensors and Actuators, Chicago, IL, USA, 19 June 1997; pp. 311–314. [Google Scholar]
  64. Bedford, T.; Warner, C. The globe thermometer in studies of heating and ventilation. J. Hyg. (Lond.) 1934, 34, 458–473. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  65. Cena, K.; Clark, J. Bioengineering, Thermal Physiology and Comfort, Ser. Studies in Environmental Science; Elsevier: Amsterdam, The Netherlands, 1981; Volume 10. [Google Scholar]
  66. Demuth, H.; Beale, M.; Hagan, M. Neural Network Toolbox 6 User’s Guide; The MathWorks Inc.: Natick, MA, USA, 2009. [Google Scholar]
  67. Mathworks Documentation: Mapminmax. Available online: https://www.mathworks.com/help/deeplearning/ref/mapminmax.html (accessed on 21 February 2019).
  68. Matlab and Automatic Target Normalization: Mapminmax. Don’t Trust Your Matlab Framework! Available online: https://neuralsniffer.wordpress.com/2010/10/17/matlab-and-automatic-target-normalization-mapminmax-dont-trust-your-matlab-framework/ (accessed on 21 February 2019).
  69. Dudzik, M.; Mielnik, R.; Wrobel, Z. Preliminary analysis of the effectiveness of the use of artificial neural networks for modelling time-voltage and time-current signals of the combination wave generator. In Proceedings of the 2018 International Symposium on Power Electronics, Electrical Drives, Automation and Motion (Speedam), Amalfi, Italy, 20–22 June 2018. [Google Scholar]
  70. Dudzik, M.; Stręk, A.M. ANN Architecture Specifications for Modelling of Open-Cell Aluminum under Compression. Math. Probl. Eng. 2020, 2020. [Google Scholar] [CrossRef] [Green Version]
  71. Szymenderski, J.; Typańska, D. Control model of energy flow in agricultural biogas plant using SCADA software. In Proceedings of the 17th International Conference Computational Problems of Electrical Engineering (CPEE), Sandomierz, Poland, 9–11 October 2016; pp. 1–4. [Google Scholar] [CrossRef]
  72. Budnik, K.; Szymenderski, J.; Walowski, G. Control and Supervision System for Micro Biogas Plant. In Proceedings of the 19th International Conference Computational Problems of Electrical Engineering, Banska Stiavnica, Slovakia, 9–12 September 2018; pp. 1–4. [Google Scholar] [CrossRef]
  73. Hagan, M.; Demuth, H.; Beale, M.; De Jesus, O. Neural Network Design, 2nd ed. 2014; eBook.
  74. Dudzik, M. Contemporary Methods of Designing, Validation and Modeling of the Phenomena of Electrical Traction (En), Współczesne Metody Projektowania, Weryfikacji Poprawności I Modelowania Zjawisk Trakcji Elektrycznej (PL): Monochart, Politechnika Krakowska im. Tadeusza Kościuszki. Kraków, Wydaw. PK, 2018. 187 s., Monografie Politechniki Krakowskiej. Inżynieria Elektryczna i Komputerowa; Politechnika Krakowska: Krakow, Poland, 2018; ISBN 978-83-65991-28-7. [Google Scholar]
  75. Tomczyk, K. Special signals in the Calibration of Systems for Measuring Dynamic Quantities. Measurement 2014, 49, 148–152. [Google Scholar] [CrossRef]
  76. Layer, E.; Tomczyk, K. Determination of Non-Standard Input Signal Maximizing the Absolute Error. Metrol. Meas. Syst. 2009, XVII, 199–208. [Google Scholar]
  77. Tomczyk, K. Levenberg-Marquardt Algorithm for Optimization of Mathematical Models according to Minimax Objective Function of Measurement Systems. Metrol. Meas. Syst. 2009, XVI, 599–606. [Google Scholar]
  78. Tomczyk, K.; Piekarczyk, M.; Sokal, G. Radial Basis Functions Intended to Determine the Upper Bound of Absolute Dynamic Error at the Output of Voltage-Mode Accelerometers. Sensors 2019, 19, 1–15. [Google Scholar] [CrossRef] [Green Version]
  79. Tomczyk, K. Impact of uncertainties in accelerometer modelling on the maximum values of absolute dynamic error. Measurement 2016, 80, 71–78. [Google Scholar] [CrossRef]
  80. Latka, D.; Matysek, P. Determination of Mortar Strength in Historical Brick Masonry Using the Penetrometer Test and Double Punch Test. Materials 2020, 13, 2873. [Google Scholar] [CrossRef]
  81. Latka, D.; Matysek, P. The estimation of compressive stress level in brick masonry using the flat-jack method. In Proceedings of the International Conference on Analytical Models and New Concepts in Concrete and Masonry Structures. Procedia Eng. 2017, 193, 266–272. [Google Scholar] [CrossRef]
  82. Langevin Data Legend. One Year Occupant Behavior/Environment Data for Medium U.S. Office. Available online: https://openei.org/datasets/dataset/one-year-behavior-environment-data-for-medium-office (accessed on 16 April 2020).
  83. Madsen, K.; Nielsen, H.; Tingleff, O. Methods for Non-Linear Least Squares Problems, 2nd ed.; Informatics and Mathematical Modelling Technical University of Denmark: Lyngby, Denmark, 2004; Available online: http://www2.imm.dtu.dk/pubdb/views/edoc\_download.php/3215/pdf/imm3215.pdf (accessed on 15 October 2018).
  84. Philip, S. Pearson’s correlation coefficient. BMJ 2012, 345, e4483. [Google Scholar] [CrossRef] [Green Version]
  85. Dudzik, M.; Drapik, S.; Jagiello, A.; Prusak, J. The selected real tramway substation overload analysis using the optimal structure of an artificial neural network. In Proceedings of the 2018 International Symposium on Power Electronics, Electrical Drives, Automation and Motion (SPEEDAM), Amalfi, Italy, 20–22 June 2018. [Google Scholar]
  86. Radziszewska-Zielina, E.; Kania, E.; Śladowski, G. Problems of the Selection of Construction Technology for Structures of Urban Aglomerations. Arch. Civil Eng. 2018, 64, 55–71. [Google Scholar] [CrossRef]
  87. Nanthakumar, C.; Vijayalakshmi, S. Construction of inter quartile range (IQR) control chart using process capability for mean using range. Int. J. Mod. Sci. Eng. Technol. 2015, 2, 8. [Google Scholar]
  88. Estep, D.; Larson, M.G.; Williams, R. Estimating the Error of Numerical Solutions of Systems of Reaction–Diffusion Equations; Memoirs of the American Mathematical Society; American Mathematical Society: Providence, RI, USA, 2000. [Google Scholar] [CrossRef]
  89. Albatayneh, A.; Alterman, D.; Page, A.; Moghtaderi, B. The Impact of the Thermal Comfort Models on the Prediction of Building Energy Consumption. Sustainability 2018, 10, 3609. [Google Scholar] [CrossRef] [Green Version]
  90. Ruano, A.E.; Ferreira, P.M. Neural Network based HVAC Predictive Control. In Proceedings of the Preprints of the 19th World Congress the International Federation of Automatic Control, Cape Town, South Africa, 24–29 August 2014; pp. 3617–3622. [Google Scholar]
  91. Sowa, S. Lighting control systems using daylight to optimise energy efficiency of the building. In Proceedings of the 2019 Progress in applied electrical engineering (PAEE), Koscielisko, Poland, 17–21 June 2019. [Google Scholar]
Figure 1. The course of Relative Errors made by the network for the best possible structure (Equation (5)) of feedforward type with one hidden layer. Results obtained for the case without learning data selection.
Figure 2. The general structure of a feedforward neural network with one hidden layer. All symbols are explained in the text.
Figure 3. The algorithm for identification of the best possible value of s¹ in the feedforward neural network with one hidden layer model. On the left side: the parent procedure P1; on the right side: the nested procedure P2.
Figure 4. An example of original data (targets) distribution for the training, validation and tests stages, drawn for the NN input arguments x_1, x_2.
Figure 5. Maximum Absolute Relative Error obtained for the neural network structure with a certain number of neurons in a hidden layer. Results obtained for the validation stage.
Figure 6. Sum of Absolute Errors (SAE) obtained for the neural network structure with a certain number of neurons in a hidden layer. Results obtained for the validation stage.
Figure 7. Mean Absolute Error obtained for the neural network structure with a certain number of neurons in a hidden layer. Results obtained for the validation stage.
Figure 8. Sum of Absolute Errors obtained for the neural network structure with a certain number of neurons in a hidden layer. Results obtained for the tests stage.
Figure 9. Sum of Absolute Errors obtained for the neural network structures in the range 3 < s¹ < 41. Results obtained for the tests stage.
Figure 10. Mean Absolute Error obtained for the neural network structures in the range 3 < s¹ < 41. Results obtained for the tests stage.
Figure 11. Maximum Absolute Relative Error obtained for the neural network structures with the following numbers of neurons in a hidden layer: s¹ = 5, 10, 11, 16, 19, 21–23, 26, 36. Results obtained for the tests stage.
Figure 12. Performance function values obtained during the learning process for the best analyzed neural network.
Figure 13. Gradient, momentum and validation check values obtained during the learning process for the best analyzed neural network.
Figure 14. Error histograms obtained during the learning process for the best analyzed neural network.
Figure 15. Relative Error histograms with 40 bins obtained during the learning process for the best analyzed neural network.
Figure 16. Relative Error in accordance with the measured sample number, obtained during the learning process for the best analyzed neural network.
Figure 17. Correlation plots between the network's outputs and targets for all learning stages, separately and combined, obtained for the best analyzed neural network.
Figure 18. An example of the outputs of the network y_NN,i on the background of y_i (targets), drawn for the input arguments x_1 and x_2 (see Table 1).
Table 1. Elements of the input arguments matrix X of the network, together with their description.

Input argument | Category | Name | Type | Units (if applicable) | Range of the variable
x_1 | Environment | Indoor ambient temp. | Continuous | °C | [16.76, 25.79]
x_2 | Environment | Indoor relative humidity | Continuous | % | [14.25, 72.57]
x_3 | Environment | Indoor air velocity | Continuous | m/s | [0.026, 0.031]
x_4 | Environment | Indoor mean radiant temp. | Continuous | °C | [16.76, 25.79]
x_5 | Environment | Indoor CO2 | Continuous | ppm | [1.2, 876.7]
x_6 | Environment | Outdoor ambient temp. | Continuous | °C | [−11, 35]
x_7 | Environment | Outdoor relative humidity | Continuous | % | [21, 100]
x_8 | Environment | Outdoor air velocity | Continuous | m/s | [0, 12.5]
x_9 | Personal characteristics | Clothing level | Continuous | CLO | [0.22, 0.99]
x_10 | Personal characteristics | Clothing level (+ chair) | Continuous | CLO | [0.32, 1.09]
x_11 | Personal characteristics | Gender | Discrete | - | 2
x_12 | Personal characteristics | Age | Discrete | Years | 32
x_13 | Personal characteristics | Office type | Discrete | - | 3
x_14 | Personal characteristics | Floor number | Discrete | - | 1
x_15 | Behavior | Current thermostat cooling setpoint | Continuous | °C | [15.95, 24.13]
x_16 | Behavior | Base thermostat cooling setpoint | Continuous | °C | [23.88, 25.55]
x_17 | Behavior | Current thermostat heating setpoint | Continuous | °C | [22.50, 26.66]
x_18 | Behavior | Base thermostat heating setpoint | Continuous | °C | [15.55, 24.44]
x_19 | General | Occupancy 1 | Discrete | - | [0, 1]
x_20 | General | Occupancy 2 | Discrete | - | [0, 1]
Table 2. Description of the output value of the neural network.

Output value | Category | Name | Type | Units (if applicable)
y_MODEL | - | Predicted Mean Vote (PMV) | Continuous | Limited to [−3, 3]
Table 3. Learning parameters for each approach.

Learning parameter | Value
performance function goal | 0
minimum performance gradient | 10⁻¹⁰
maximum validation failures | 12
maximum number of epochs to train | 100,000
learning rate | 0.01
momentum | 0.9
Table 4. Maximum Absolute Relative Error obtained for the tests stage for the neural network structures with s¹ = 5.

Approach | MaxARE_TEST (s¹ = 5)
approach 1 | 0.106
approach 2 | 0.029
approach 3 | 0.025
approach 4 | 0.043
approach 5 | 0.018 (best approach, Equation (5))
approach 6 | 0.070
approach 7 | 0.047
approach 8 | 0.024
approach 9 | 0.031
approach 10 | 0.072
Table 5. Coefficient of determination obtained for the best analyzed neural network.

Stage | R² value
Training stage | 0.99998
Validation stage | 0.99999
Testing stage | 0.99999
All data | 0.99998
