Smart Management Consumption in Renewable Energy Fed Ecosystems

Advances in embedded electronic systems, the development of new communication protocols, and the application of artificial intelligence paradigms have enabled the improvement of current energy management automation systems. Embedded devices integrate different sensors with connectivity, computing resources, and reduced cost. Communication and cloud services increase their performance; however, these technologies have implementation limits. If the cloud is used as the main source of services and resources, overload problems occur, and there are no models that facilitate complete integration and interoperability in existing facilities. This article proposes a model for integrating smart energy management systems into both new and existing facilities, using local embedded devices, Internet of Things communication protocols, and services based on artificial intelligence paradigms. All services are distributed across the new smart grid network using edge and fog computing techniques. The model proposes an architecture that supports both the development of smart services and energy management control systems adapted to the installation: a group of buildings and/or houses that shares energy management and energy generation. The main integrated utilities are machine learning to predict consumption and energy generation, electric load classification, energy distribution control, and predictive maintenance. As an experimental case, a facility that incorporates wind and solar generation is used for development and testing. Smart grid facilities designed with artificial intelligence algorithms and implemented with Internet of Things protocols and embedded control devices facilitate development, cost reduction, and the integration of new services. In this work, a method to design, develop, and install smart services in self-consumption facilities is proposed.
New smart services with reduced costs are installed and tested, confirming the advantages of the proposed model.


Introduction
Building and home automation systems use open communication standards and interfaces that can integrate a wide choice of control disciplines such as heating, ventilation, air conditioning, illumination, safety features, and control equipment. Pre-existing buildings, however, do not usually have these advanced systems. Generally, each type of installation offers specific services: Heating, Ventilation, and Air Conditioning (HVAC), security cameras and sensors, comfort, user interfaces, computer asset management, etc. When an energy management system is designed, the different subsystems must therefore be integrated.

This paper is organized as follows: Section 2 reviews related technologies: Internet of Things (IoT), Artificial Intelligence (AI) algorithms, energy management systems, and the fog and edge computing paradigm. Sections 3 and 4 propose a model and methods to deploy the embedded hardware, IoT protocols, and AI in smart grids. Sections 5 and 6 present the experiments conducted. Finally, Section 8 describes conclusions and future works.

Related Work
This section addresses the research lines related to energy management systems using the latest information, communication, and intelligent control technologies. The integration of the Internet of Things, artificial intelligence paradigms, embedded electronic systems, and energy management systems to optimize energy use is reviewed.

Communication Protocols
The Internet was designed around a client-server model in which the client always initiates the request to the server. In IoT communications, however, the server often needs to push data to a client without the client first making a request. Software developers have devised techniques to overcome this challenge, and new protocols adapted to this way of working have been created and developed in recent years [1]. These protocols optimize communication services, offering different options for different contexts, and IoT now has a wide range of applications [2][3][4][5]. One of the most widely used protocols, both at the local network and the Internet level, is the Message Queue Telemetry Transport protocol (MQTT). In this work, MQTT is the protocol used as the communication channel between the different nodes. MQTT [6] is a Machine-to-Machine (M2M)/"Internet of Things" connectivity protocol. It was designed as an extremely lightweight publish/subscribe messaging transport, useful for connections with remote locations where a small code footprint is required and/or network bandwidth is limited.
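As a rough illustration of the publish/subscribe pattern that MQTT provides, the following self-contained Python sketch routes messages by topic the way an MQTT broker would. The `MiniBroker` class, topic names, and payloads are illustrative inventions for this sketch only; a real deployment would use an actual MQTT broker and client library.

```python
# Minimal in-process sketch of MQTT-style publish/subscribe routing.
# No network, no QoS; only topic matching with '+' and '#' wildcards.

class MiniBroker:
    def __init__(self):
        self.subscriptions = []  # list of (topic_filter, callback)

    def subscribe(self, topic_filter, callback):
        self.subscriptions.append((topic_filter, callback))

    def publish(self, topic, payload):
        # Deliver the message to every subscriber whose filter matches.
        for topic_filter, callback in self.subscriptions:
            if self._matches(topic_filter, topic):
                callback(topic, payload)

    @staticmethod
    def _matches(topic_filter, topic):
        # MQTT semantics: '+' matches one level, '#' matches the remainder.
        f_parts, t_parts = topic_filter.split("/"), topic.split("/")
        for i, f in enumerate(f_parts):
            if f == "#":
                return True
            if i >= len(t_parts) or (f != "+" and f != t_parts[i]):
                return False
        return len(f_parts) == len(t_parts)

broker = MiniBroker()
received = []
# A fog node might subscribe to power readings from every house:
broker.subscribe("smartgrid/+/power", lambda t, p: received.append((t, p)))
# An edge node publishes one sensor reading (watts):
broker.publish("smartgrid/house1/power", 1250.0)
print(received)  # → [('smartgrid/house1/power', 1250.0)]
```

The single-level `+` wildcard is what lets one fog-node subscription cover readings from all houses without listing them individually.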

Energy Management Systems
Energy Management Systems (EMS) can be classified into vertical and horizontal applications. On the one hand, vertical EMS are focused on particular applications [7,8]: homes [9,10], buildings [11,12], data centers [13,14], smart cities [15,16], etc. Vertical EMS exploit their own features in order to optimize energy generation and consumption, but are not suitable for other applications. On the other hand, horizontal EMS are model-driven proposals, designed to be used in different applications or scenarios. Traditionally, three types of control for energy flow management are used: centralized, distributed, and hybrid [17]. A decentralized energy management for a polygeneration microgrid topology was described in [18]; it was based on multi-agent systems and fuzzy cognitive maps. A multi-agent system for a decentralized control architecture for distributed generation was defined in [19]. These multi-agent systems were simulated, but did not consider the hardware infrastructure required for deployment in a real scenario. Recent proposals related to EMS were based on the cloud, fog, and edge computing paradigms. An energy management system as a service over the fog computing paradigm was proposed in [20]. The main features of this proposal were derived from the fog computing paradigm: interoperability, scalability, adaptability, and connectivity. To achieve these features, it used low-cost/low-power devices, an open software/hardware infrastructure, and energy management software developed as a SOAP web service. The authors presented two case studies: home energy management and microgrid energy management. This proposal was more appropriate for small facilities than for large ones, because it requires a large number of fog computing nodes. A fog-based architecture for EMS was proposed in [21].
The architecture was composed of three layers: home gateways, fog nodes, and cloud servers. Home gateways acted as collectors of energy consumption and as the user interface. Fog nodes were the retail energy market server, which provided energy services to the end-users. Cloud servers were used for data storage and high computing power. This proposal did not consider load management in order to reduce power consumption. There are also algorithms for subsystems of EMS. An algorithm for EMS based on multi-layer ant colony optimization for microgrid applications was presented in [22]. The algorithm met the required load demand with minimum cost in a local energy market under five-minute real-time scheduling. An energy load forecasting methodology based on deep neural networks was proposed in [23]; the algorithms tested were variants of Long Short-Term Memory (LSTM) networks. The horizontal EMS proposals analyzed were based on different paradigms like multi-agent systems, cloud, fog, and edge computing, according to the aim of each one. Most of the proposals were not general enough and did not include features like predictive consumption/generation, load management, load classification, and predictive maintenance. The architecture proposed here offers smart services based on machine learning algorithms, deployed at different layers in order to manage building facilities.

Optimization Systems
In addition to systems based on communication protocols or architectures for the development of energy management systems, other works focused on optimizing the use of energy. In this line, the interconnection between smart grids was studied to optimize energy use. In [24], energy trading between interconnected microgrids was analyzed, and an algorithm-based optimization model was proposed as the contribution. Optimizing the economic cost is the main objective of this type of work. In [25], cooperative planning of renewable generation for interconnected microgrids was proposed. Other optimization systems were [26,27], which demonstrate the interest in this type of approach during the last few years; in them, frameworks and software systems for the interconnection of different microgrids were analyzed.

Findings
The novel aspects of this work are based on the integration of different technologies. The model proposes a method capable of making low-cost hardware technologies interoperable with complex artificial intelligence algorithms, using communication protocols that are still emerging. The methods used allow different technologies to be adopted as they develop. The model can be installed in existing or newly designed facilities and can be used in smart grid designs with or without an electrical grid connection. It uses different artificial intelligence paradigms in a simple architecture based on two node types (edge node and fog node), with flexible functionality and capacities adapted to the proposed objectives. The model uses non-proprietary hardware and software and does not depend on any specific cloud platform.
In this context, similar systems prioritize the minimization of the cost of energy using the rates and prices of the energy unit. The model proposed in this work is instead based on optimizing for the minimum difference between generation and consumption. This objective reduces the need for batteries in grid-connected installations, so the cost of installation and maintenance is optimized. In systems where prediction or classification algorithms are applied, the processing is usually performed on computers and servers; in this work, these algorithms are designed and implemented near the physical systems, on integrated devices. Regarding the success rates in classification or prediction, no significant improvements over other AI systems are achieved, since this is not the objective.

Architecture Model
The model proposes a layer-based architecture. Each power installation (building, house, renewable station, etc.) has its own local network, which is integrated into the model according to the architecture shown in Figure 2. All subsystems offer the services necessary for management operations. Each subsystem acts in this scenario and can interoperate with the other subsystems. Horizontal and vertical algorithms are designed and implemented at different levels. Two IoT node types (edge and fog) are defined. The edge node is the device closest to control and operation. This node can implement simple artificial intelligence algorithms that act in the local area where it is installed. It communicates through standard protocols with the rest of the layers. The edge node mainly implements horizontal control algorithms, together with vertical communication to other levels. Edge nodes are designed so that they can function autonomously, decoupling their dependence on other layers. The fog node has greater computational capacity than the edge node; it can store data and analyze the operation of energy management, using artificial intelligence algorithms and interfaces that take data globally. Its scope is not limited to the local environment, as the edge node's is; it is greater. It implements vertical algorithms because it receives and propagates services to other levels. The main components of the proposed architecture are defined below.

IoT Nodes: Edge and Fog Levels
Edge and fog IoT nodes allow designing different types of solutions and services. As described above, an edge node device controls the installation locally, that is, it manages the sensors, actuators, and interfaces of the environment in which it is installed. According to this definition, the node runs control algorithms with support tasks based on artificial intelligence. An edge node installed in a building can define consumption patterns, classify electrical connections, predict consumption, and detect events for predictive maintenance services, all within the environment of the building that it controls. The model designs smart grids over groups of houses (already built or not) and generation units that form flexible groups managed by a set of services. These services can be installed on hardware nodes near the installation (edge node) or on a server that manages the formed network (fog node).

• An edge node is a programmable controller with resources to capture sensor data, with a minimum capacity to store data, process basic algorithms with mathematical models for classification and prediction, and with resources to communicate data over a local intranet. It is the device closest to the control and operation units of the facilities.

• A fog node has a processing and storage architecture with greater capacity than the edge node. It corresponds to a storage and services server that also runs classification and prediction applications at the smart grid level. It can be a PC or a server. Its location does not depend on the place where the controlled facilities are located.

Edge Nodes
In the field of energy management, the edge node can perform some of the following tasks:

• Power consumption and generation data capture: If the node is installed in a consumer electrical panel, it obtains electrical data related to the devices' connection and disconnection. The electrical data can be the intensity, voltage, active and reactive power, power factor, harmonics, and any other data that can be used to understand the operation of the installation. If the node is installed at a generation point, it obtains electrical data from the energy produced, just as in the case of consumption.

• Other data type capture: To design prediction or regression models, the nodes sometimes capture other data of interest, such as environmental data (temperature, humidity, radiation, weather conditions) or other operational data that can be used by the artificial intelligence models.

• Filtering and data preprocessing: The captured data must be revised to avoid erroneous entries. They are also normalized to be used as inputs to the management algorithms. Sometimes, the data are preprocessed and filtered with a mathematical transformation (Fourier, wavelets, vector transformation) to obtain information or to reduce the amount of relevant data.

• Actuators' control actions: For consumer installations, the edge node drives circuit connection and disconnection actuators. In this way, services that control and optimize the distribution and use of the available energy are integrated. These services can be installed in already built facilities or in new construction projects. For generation systems, the node switches and directs production to the points where it produces the greatest benefits and optimizes its use. Managing the storage of energy, and its use without the need to store it, are two critical tasks for this type of node in the control function.

• Classification models: At the local level, the edge node can implement algorithms based on classification and detection models. In consumer facilities, the node uses models trained with connection data during the learning phase to classify different types of consumers.

• Prediction models: At the local level, the edge node can implement algorithms based on prediction models. In consumer facilities, the edge node uses past consumption data to predict future consumption. In generation facilities, the node obtains meteorological data and past generation data to predict the generation of the installation in the next hours.

• Communication processes: The edge nodes communicate with the different levels and layers using IoT protocols. A node can send capture and classification data to the smart grid management layer, transmit consumption information to the local network, or send the status of the actuators to services installed in the cloud.
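The filtering and normalization task above can be sketched as a short routine; the current thresholds and the min-max scaling used here are illustrative assumptions for this sketch, not values taken from the paper.

```python
# Hypothetical edge-node preprocessing step: discard implausible samples,
# then min-max normalize the remainder for use as model inputs.
# The [lo, hi] plausibility range is an illustrative assumption.

def preprocess(samples, lo=0.0, hi=50.0):
    """Filter current samples (amperes) outside [lo, hi], then scale to [0, 1]."""
    clean = [s for s in samples if lo <= s <= hi]
    if not clean:
        return []
    s_min, s_max = min(clean), max(clean)
    span = s_max - s_min or 1.0  # avoid division by zero on constant signals
    return [(s - s_min) / span for s in clean]

raw = [2.0, 4.0, -7.0, 6.0, 999.0]   # two erroneous entries
print(preprocess(raw))               # → [0.0, 0.5, 1.0]
```

A real node would likely apply this per capture window, before the data are passed to the classification or prediction models.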

Fog Nodes
The fog node introduces new capabilities and scopes of application relative to the edge node. It implements the same communication protocols as the edge node, but with different functions. A fog node can perform some of the following tasks:

• Learning: In these nodes, prediction and classification models are designed and tested before being installed in the different network nodes. The fog nodes participate in the learning phase, where the datasets of the variables that intervene in the energy management processes of the installation are analyzed.

• Data storage: If part of the information is saved in the cloud, these nodes keep only the data necessary to be able to work without depending on Internet connections.

• Data capture: In the same way as edge nodes, these nodes can obtain the data and datasets needed to perform network management actions. They capture data that can be used by other devices in the network, such as weather forecast data or energy cost data.

• Artificial intelligence algorithms: Algorithms for classification, detection, prediction, or predictive maintenance are designed and implemented for services used in different nodes. These algorithms can therefore be applied to all nodes and installations of the smart grid.

• Interfaces: In the fog nodes, the Human-Machine Interfaces (HMI) and Machine-to-Machine (M2M) interfaces are implemented at the smart grid level. Different devices become interoperable using the interfaces developed in these nodes.

Figure 4 shows the fog node modules (input, output, processing, data storage, interfaces, and communication processes) and their relationship.

Local Smart-Grid Intranet: Vertical Services
The smart grid layer manages the different nodes and is used as an interface to the cloud services. The tasks it performs are:

• Interoperability implementation between nodes
• Cloud service support

In this work, this level develops interoperability tasks between the different nodes and the interface with the cloud.

Cloud Smart Grid: Communication, Interfaces, and Big Data Services
The cloud provides a set of resources to optimize and globalize interoperability. The smart grid can connect to other smart grids using cloud services. The cloud computing paradigm provides, among other services, low-cost storage, accessibility, scalability, and interoperability. The cloud is used for a set of utility services. The main services are:

• Treatment of a large number of IoT devices and Big Data: Smart grids of all sizes collect enormous quantities of complex, fast-moving data that contain value that may give them a competitive edge or lead to better decisions. The cloud is a good option to assist with Big Data workloads, because it provides a centralized platform with access to powerful computing infrastructure and inexpensive storage at a relatively low cost.

• Cloud data analytics: Numerous cloud providers (AWS [28], Google [29], Ubidots [30], and Microsoft [31]) are beginning to offer higher performance storage and analysis by artificial intelligence paradigms on their platforms. The model can use these resources to process data at this level. Cloud providers already have the technologies in place to apply their own powerful AI infrastructure to energy data analysis.

• Dark data use: Gartner defines dark data [32] as the information assets that organizations collect, process, and store during regular business activities, but generally fail to use for other purposes (for example, analytics, business relationships, and direct monetizing). Organizations often retain dark data for compliance purposes only, and storing and securing these data typically incur more expense (and sometimes greater risk) than value. In the electric management model, dark data extraction tools that can separate garbage information from valuable information must be used to optimize the cloud.

The cloud API serves as a gateway or interface that provides direct and indirect cloud infrastructure and software services. Local and cloud smart grids use the cloud API as Software as a Service (SaaS): software or application provision, connectivity, and interaction with a software suite.

Computing Model
Electric management in smart grids (formed by buildings, homes, and generation facilities) must analyze the different subsystems (consumers, distributors, and producers). These subsystems, in automated facilities, are controlled by specialized control technologies. In non-automated facilities, these services do not exist, and the subsystems are controlled electronically and electrically. In both cases, the aim of the model proposed in this work is to create new utilities and solutions to the requirements of energy management for groups of homes and buildings forming a smart grid. The model uses already installed facilities and incorporates new ones. The smart grid can be developed in new or already built facilities.
In the initial phase of the smart grid design, the set of requirements that the electric manager should follow must be established. These requirements are transferred to the computational model, which integrates the intelligent services into the necessary infrastructure. Depending on the services, the computational model will require a set of hardware and software elements. This whole process is described in the phases proposed in the model (Figure 5).

3. Learning: Vertical procedures are used to define the processing and communication services, as well as the necessary horizontal algorithms in each of the nodes. Machine learning patterns and artificial intelligence models are designed in this phase. To implement detection, classification, and prediction processes, it is necessary to address a learning phase using the captured and treated data. This learning phase depends on the type of application that must be solved.
• In power load classification, the learning process develops a pattern detection model. In this case, the learning process creates a classification model after capturing the patterns that define the type of power loads.

• In a generation prediction process, the learning process must create a regression model. The creation of the regression model, which depends on the type of installation, is performed during this learning process.

• In a predictive maintenance process, a model based on rules that detect singular events must be established.

• Other processes based on artificial intelligence paradigms are trained in this initial learning phase.
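As a toy illustration of the load classification learning described above, the sketch below stores labelled connection-transient current arrays and classifies a new array with a 1-nearest-neighbour rule. The paper does not specify this particular model, and the example arrays are synthetic.

```python
# 1-nearest-neighbour stand-in for the load classification model:
# "training" stores labelled transient arrays; classification returns the
# label of the closest stored example by Euclidean distance.

import math

def train(labelled_arrays):
    # For 1-NN, training is simply memorizing the labelled examples.
    return list(labelled_arrays)

def classify(model, array):
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model, key=lambda example: dist(example[1], array))[0]

model = train([
    ("heater", [8, 8, 8, 8, 8]),      # flat resistive draw
    ("fridge", [12, 6, 3, 2, 2]),     # inrush spike, then steady current
])
print(classify(model, [11, 7, 3, 2, 2]))  # → fridge
```

In practice, the transient arrays would be the 15-value connection captures described in the experimental section, and a more elaborate classifier could be swapped in without changing the data flow.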

4. Operation: Algorithms are installed in embedded devices as horizontal solutions to control local facilities. Control and communication services are developed to transmit data to horizontal and vertical layers. In this phase, after the necessary learning, the control, classification, and prediction algorithms are installed in each of the nodes defined in the previous phases. The algorithms are executed, and the installation starts the operation process. The processes running in this phase are:
• Capture and filtering processes.

• Reactive control processes, based on rules.
• Supervision, classification, detection, or prediction processes, based on artificial intelligence models.
• Communication processes, based on IoT protocols.
• Data storage processes, through the use of specialized databases.
• Access and interoperability interfaces, based on adapted programming paradigms.

5. Supervision, management, and maintenance: Vertical algorithms show the results of the operating processes using HMI interfaces. The smart grid must be maintained and managed: different elements like data, node devices, or electric actuators are analyzed. The data network is administered with specialized computer resources that guarantee operation at this level.

Experimental Design and Results Analysis
The model introduces a layer architecture, the set of processes that can be installed at each level, and the devices needed to design and develop a local smart grid. Experimentally, three types of houses with solar generation installed were used. In these facilities, a smart grid was designed to evaluate the advantages of the services proposed for the smart grid.
The facilities in each house were photovoltaic modules of 500 W, 1000 W, or 1500 W. The energy captured in one day by the 1500 W modules was 4950 Wh in winter and 10,550 Wh in summer. For the production estimate, the average Spanish peninsular radiation was taken into account, including a percentage of system losses and a capture with a minimum of 3 h of peak sun in winter, while in summer, 7 h of sunshine were taken (Figures 6 and 7). Each chart includes the renewable power produced (green lines).
In Figure 8, the different patterns in power consumption and power generation show that some generated energy was not used at the time of production. This occurred when the energy generated (green lines) was greater than the energy consumed (blue lines). Under these conditions, the system can store the energy or simply not use it. Storing it requires a battery installation and an electronic system that controls its charging and discharging; batteries are then necessary, although they decrease performance and increase the cost of the installation and its maintenance. Another important feature is the use of energy in each home: consumption patterns can shift when the inhabitants change habits, whether through seasonal changes or other types of variations. When there is a significant variation, the optimization and quality of the installation may degrade.
For all the reasons indicated above, a smart grid facility using the proposed model will be able to reuse power generation to optimize and provide value-added services in power management. In this work, a smart grid is designed using consumption and generation patterns obtained through the capture and monitoring of consumption and power generation data. The model develops the phases: smart grid design, data flow, learning, and operation.

Smart Grid Design
The smart grid uses photovoltaic panels already installed in the houses and proposes the predictive control of some home electrical devices that can be automated, such as home appliances, swimming pool purification engines, water tank heating, and other programmable devices. To optimize the renewable resources, a power generation unit that uses wind and photovoltaic energy is added to the smart grid (Figure 9).

Operation Optimization
The target is to determine the optimal operating scenario of the Consumer (C) units in combination with the renewable system (P). The model can propose different optimization functions. In this first approach, the cost of energy is not considered; the aim is to balance generation and consumption through two levels of optimization: first in each house and then across the set of network facilities.
Each house (i) has an objective function to minimize:

F(i)_t = | P(i)_t − C(i)_t − Σ_{j=1..k} Ca(i)_jt · A(i)_jt |, ∀i = 1, 2, . . . , n, with n = number of houses

The variables are: F(i)_t, the objective function in house (i) at time t; P(i)_t, the renewable power produced in house (i) at time t; C(i)_t, the non-controllable consumption in house (i) at time t; Ca(i)_jt, the power load controlled by programmable actuator j in house (i) at time t, ∀j = 1, 2, . . . , k; and A(i)_jt, which is one if actuator a(i)_jt is connected at time t and zero if it is not.
For each house, the objective function must be minimized. The objective is to ensure that all the energy generated is consumed, within the established period of time, in the house where it is generated. To minimize the function, an algorithm decides the number of programmable loads Ca(i)_jt that must be connected to balance consumption with generation (Algorithm 1).
This process is performed with a time period of 1 h, the unit of time taken as reference in this study. Each hour, the processing node of each house predicts consumption and generation and programs the connection or disconnection of programmable loads (Algorithm 2). The smart grid has the following objective function:

F_sm,t = | Σ_{i=1..n} F(i)_t − Σ_{i=1..n} Σ_{j} C'a(i)_jt · A(i)_jt |

The variables are: C'a(i)_jt, the power load controlled by programmable actuator j in house (i) that was not activated in F(i)_t at time t, ∀i = 1, 2, . . . , n; and A(i)_jt, which is one if actuator a(i)_jt is connected at time t and zero if it is not.
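A minimal sketch of the house-level balancing decision might look like the following greedy routine, which connects programmable loads to fill a predicted hourly generation surplus. The surplus value and load powers are illustrative, and the authors' actual algorithm (Algorithm 1) is not given in this excerpt; a greedy largest-first fill merely stands in for it.

```python
# Greedy sketch of the house-level balancing step: given a predicted surplus
# of generation over base consumption for the next hour, decide which
# programmable loads (Ca_j) to connect (A_j = 1) without exceeding the surplus.

def schedule_loads(surplus_w, programmable_loads_w):
    """Return (load, ON/OFF) decisions and the unfilled remainder in watts."""
    decisions = []
    remaining = surplus_w
    # Largest-first greedy fill of the predicted surplus.
    for load in sorted(programmable_loads_w, reverse=True):
        if load <= remaining:
            decisions.append((load, 1))   # connect this load
            remaining -= load
        else:
            decisions.append((load, 0))   # leave it disconnected
    return decisions, remaining

decisions, leftover = schedule_loads(900, [500, 400, 300])
print(decisions)  # → [(500, 1), (400, 1), (300, 0)]
print(leftover)   # → 0
```

Any remainder left after the house-level step is what the smart-grid-level function would then try to absorb with loads in other houses.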

Algorithm 2 Minimize function F_sm,t in the smart grid (t = 1 h).

The sensors, devices, and nodes shown in Table 1 are:

• Sensors and actuators:
- Power meter and Current Transformer (CT) installed on the main power panels and connected to the edge node.
- Ambient sensors (temperature, humidity, etc.) connected to the edge node.
- Control relays installed on power panels, already installed or new, and connected to the edge node.
• Edge nodes:
- Embedded devices installed near the operation points where they act.

- An edge node is a device with a single process with basic data processing (capture, filtering, transmission) or a device with multi-process capability, integrating algorithms for detection, classification, or prediction on the data captured by the node.

- The node can store small amounts of data and receive data from other levels of the architecture (weather forecast, environmental data, ON-OFF operation, etc.).
• Fog nodes:
- In these nodes, different services are installed (web servers, databases, API functions, HMI interfaces) that manage the smart grid.

- Receive and send data from the edge nodes and cloud services.
- Send requests to the control nodes, allowing the interoperability of the different subsystems.
- Develop the application programming interface functions.

- The fog node needs computing capacity with the ability to install servers and store data in specialized databases (network-attached storage), in addition to computing capacity to process AI algorithms with data that come from all the lower level nodes and higher level cloud services.

• Learning resources: AI models' development.
• Power distribution optimization: ON-OFF automated control of electric loads.

Data Flow
Sensors, actuators, and programmable devices were deployed in the model at three levels. The experimental work was carried out in houses already built, where the devices captured and treated the data. The flow starts with data capture to determine both the consumption load curve and the power generation. The smart grid had different sensor sources and control data; processing algorithms that make decisions to optimize, communicate, and control; and subsystems installed for management, maintenance, analysis, and supervision. In general, the main processes involved are reflected in Figure 10. Four subsystems were established in the determination of the data flow:

• Consumption subsystem, where the sensors', actuators', and controllers' data in consumption facilities were treated.

• Renewable energy subsystem, in the same way as in the previous case, but for the generation devices.
• Network subsystem, a central subsystem that managed the operation of the network and made the data of the different subsystems interoperable. The data of all vertical applications were processed and communicated in this subsystem.
• Cloud subsystem, reflecting the set of data sent and received from cloud services.

In the experimental work, different consumption patterns were captured. The type of connection that the installation detected and classified depended on the decision-making in the design and the first captures from the sensors. There was therefore an initial phase of capturing information from the sensors to develop the necessary datasets. The houses had an installed power of 3.3-9 kW. Each house had an individual photovoltaic facility of 500 or 1000 W. The installed generation unit had a wind power subsystem of 1500 W, a photovoltaic subsystem of 500 W, and a battery facility with 2500 Wh/day of capacity. The analysis of the smart grid was based on considering that the energy generated in each house could be shared with the common energy consumption network. In each house, ON-OFF actuators were installed to automate the control of loads using smart control algorithms. The automated loads controlled in each house could reach consumptions between 500 and 1000 W. The communication protocols used were based on producer-consumer models (Message Queuing Telemetry Transport (MQTT)) for sensors' and actuators' data and a client-server model (Hypertext Transfer Protocol Secure (HTTPS)) for management, interfaces, storage, and maintenance data.

Learning
Devices, nodes, and the data flow offered support resources to design different AI services. The learning processes started with the data obtained by the different sensors and nodes installed. The capture of consumption data and electricity generation data, along with other sensor data such as weather forecasts or environmental conditions, could initiate different learning processes. These processes depended on the services designed. In this experiment, the learning processes were:

•
A pattern recognition subsystem to detect connections and classify the load type.

•
A model to predict the level of renewable energy generated and the consumption load throughout the day.

•
Automatic control rules of decision-making for the distribution of the load.

Learning Processing: Connection Classifier
In each house or renewable facility, the power sensors were the main elements to capture data. Considering each of the subsystems of Figure 11, training and test datasets were captured to analyze and create classifier systems. These datasets were formed by the first current samples captured when an electrical load was connected. The main variable used was the electric current (I_t). The sampling time was 200 ms, so the first data captured when a connection was detected (a data array) characterized the type of load. For this process, a connection capture algorithm was installed in the edge node connected to the electrical data sensor. The array had a size of 15 values; that is, frames of this size were captured and analyzed. Figure 12 shows the process of detecting and capturing a load connection. As can be observed, subarrays of five values were taken, on which a first calculation was made to detect the connection. Two limit values were set for this operation (lim_1 and lim_2), indicated in the figure. If the calculation performed on the subarray detected the connection, the algorithm captured the data in the arrays T_8, T_9, and T_10 shown in Figure 12 to compose the final array of 15 values.
A formula was proposed to detect the connections from a certain level of load, according to the limits set.
If the condition on a subarray (I_1t, I_2t, I_3t, I_4t, I_5t) of current values, captured with a sampling time of 200 ms at time t, was fulfilled, then a new device had been connected in the facility. When a new load was detected, it went through a learning or operation process. The captured frame (array) was adapted taking the initial value as reference:

Data captured → [I_1, I_2, ..., I_15]

Data adapted → [(I_1 − I_1), (I_2 − I_1), ..., (I_15 − I_1)]

A wavelet transform process was used to normalize the data and obtain the feature extraction. The wavelet transform allows dimensionality reduction. The wavelet power spectrum using the Haar wavelet has the potential to identify informative features [35]. In the experimental work, a clustering and feature selection method for classification based on the Haar wavelet power spectrum was used. The local wavelet power spectrum at a particular decomposition level was calculated by summing up the squares of the wavelet coefficients at that level. For a set of wavelet coefficients C_j,k, where j is the level of decomposition and k is the order of the coefficient, the wavelet power spectrum at level j is:

P_j = Σ_k (C_j,k)^2

For the Haar wavelet, there are log_2(N) levels of decomposition. In the case of the classifier, each array of 15 elements was reduced to four levels of wavelet coefficients, so the classifier had input vectors with four elements.
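The feature extraction step can be sketched as follows. Padding the 15-sample frame to 16 values (the next power of two) is an assumption made here so that exactly four Haar decomposition levels exist; the paper does not state how the odd frame length was handled.

```python
import numpy as np

def haar_power_spectrum(frame):
    """Reduce a captured current frame to one power value per Haar
    decomposition level: P_j = sum_k C_{j,k}^2."""
    x = np.asarray(frame, dtype=float)
    # pad to the next power of two (15 -> 16): an assumption
    n = 1 << int(np.ceil(np.log2(len(x))))
    x = np.pad(x, (0, n - len(x)))
    powers = []
    while len(x) > 1:
        detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # Haar detail coefficients
        x = (x[0::2] + x[1::2]) / np.sqrt(2.0)       # Haar approximation
        powers.append(float(np.sum(detail ** 2)))    # power at this level
    return powers  # log2(16) = 4 features for a 15-sample frame

# adapted frame (I_k - I_1), as described in the text; values are illustrative
frame = [0.0, 1.2, 3.5, 3.6, 3.4, 3.5, 3.5, 3.6, 3.5, 3.4, 3.5, 3.6, 3.5, 3.5, 3.4]
features = haar_power_spectrum(frame)
```

The four resulting power values form the input vector of the classifier, replacing the raw 15-sample frame.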
In each facility, there were similar devices that produced the same consumption patterns. For an adequate classification, the criterion adopted was to detect the types of devices that existed by groups. In addition, there were electrical appliances that could present different types of connection, depending on the programmed operation mode (washing machine, stove, dishwasher, etc.). Under these conditions, a prior automatic classification process was considered, which helped a second supervised classification process to detect the types of connections that were to be recognized. The steps to build the load classification model were:

1.
Automatic cluster detection: The device connections that existed in the installation were captured during representative periods of time. To perform a first analysis, an unsupervised classification method was used (k-means). This first step was used to obtain information prior to making a supervised clustering decision. In many facilities, the types of connections produced may not be known a priori; with this first analysis, the second step can be designed. In the test facility, 2000 connections were captured over three months. A first analysis based on an unsupervised method (affinity propagation) was performed automatically to detect the different clusters.

2.
Feature extraction and clustering validation: After the initial step, the current types of loads were known and a first classification had been made. The detected patterns had to be validated in this phase to determine the clusters that should be recognized in the classification process. Silhouette and cluster analyses were performed in this phase. The result was the cluster model and the labeled training datasets.

3.
Testing and learning validation: In this phase, we compared different classification methods, such as k-NN, SVM, and neural networks, to decide which one to use. The model was tested and validated. The final result was a classification algorithm to be installed and validated in an edge node.
The process indicated above is shown in Figure 13. Once the model was validated, it had to be implemented in an edge node device connected to the consumption meter that captured the data. The Python programming language, with the pandas, numpy, and sklearn libraries, among others, was used to obtain the unsupervised clusters (Figure 14). The figure shows the result of applying the affinity propagation method to 200 samples obtained by the electrical sensors. It can be appreciated that the function labeled some groupings that belonged to the same class and others that mixed different classes. For this reason, a supervision stage was introduced, in which an expert reviewed and modified those classes that were not well cataloged. Figure 14. The result of using the unsupervised clustering method showed similar class groupings. They were not the definitive ones, because they had to be revised, modified, and validated. In facilities with many data, this first process is a good approximation to obtain an adequate clustering.

Learning Process to Predict Power Consumption
The experimental work used the electrical consumption curve of a house with 110 m² of useful area, a distribution of 3 bedrooms, a living room, a kitchen, a gallery, and 2 bathrooms, and 4 inhabitants. In this process, the consumption data of the last 30 days, measured hourly, were captured. These data constituted the input dataset of the regression model used. To create the predictor, a regression tree model was used. Figure 15 shows the hourly power consumption during the last 30 days in a residential house, captured by edge node processing. The sampling time in this process was t = 1 h. Figure 16 shows the regression tree model obtained.
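A sketch of such a regression tree predictor is given below, using synthetic hourly data and an assumed feature encoding (hour of day plus previous-hour consumption), since the paper does not detail the exact input features of the model.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# synthetic 30-day hourly consumption series in W (an assumption for illustration):
# a daily sinusoidal pattern plus noise
rng = np.random.default_rng(1)
hours = np.arange(30 * 24)
load = 800 + 400 * np.sin(2 * np.pi * (hours % 24) / 24) + rng.normal(0, 50, hours.size)

# assumed features: hour of day and consumption in the previous hour
X = np.column_stack([(hours % 24)[1:], load[:-1]])
y = load[1:]

tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X, y)

# predict the consumption of the next hour (e.g., 09:00) from the last reading
pred = tree.predict([[9, load[-1]]])
```

Retrained every day on a rolling 30-day window, a shallow tree like this is light enough to run on the edge node itself.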

Learning Process to Predict Renewable Power
In renewable power facilities, the model proposes a method to predict the energy that will be generated, knowing the capacity of the installation itself, the climate zone characteristics, the weather forecast, and the readings of the local climate sensors installed. As in the previous case, a learning process was designed. In this new case, a prediction algorithm based on AI paradigms was designed and implemented. The weather forecast and the environmental sensors' measurements (temperature, wind, etc.) were captured hourly. For the climatic zone, data on the solar energy potential were recorded for each season of the year; these data had to be accessible to perform the calculations. The sampling time in this process was t = 1 h. This subsystem was designed according to a prediction based on:

•
The power registered statistically for the area, measured every hour.

•
The weather forecast of the area.

•
The actual generation results captured in the installed solar panels, during the learning time.
With these data, an input dataset for a regression tree was obtained. At this point, this subsystem was still in the learning phase, so a heuristic replaced the designed process in order to integrate the service into the operation of the smart grid. The heuristic was based on taking the statistical generation data of the area, analyzing the weather forecast, and correcting said data according to the climate conditions.
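The heuristic can be sketched as below. The forecast labels and the correction factors are illustrative assumptions, not values taken from the experimental work.

```python
# correction factors applied to the statistical generation of the climate zone;
# both the labels and the values are assumptions for illustration
FORECAST_FACTOR = {
    "sunny": 1.0,
    "partly_cloudy": 0.75,
    "cloudy": 0.5,
    "rain": 0.25,
}

def predict_generation_wh(statistical_wh, forecast):
    """Correct the statistical hourly generation of the area with the
    weather forecast; unknown labels fall back to a neutral 0.5 factor."""
    return statistical_wh * FORECAST_FACTOR.get(forecast, 0.5)

# e.g., a zone that statistically generates 500 Wh at this hour, cloudy forecast
estimate = predict_generation_wh(500, "cloudy")
```

Once enough generation data are captured, the regression tree trained in the learning phase would replace this fallback.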

Automatic Control Using Decision Rules
A decision tree calculated, hourly, different forecasts for the facility. The aim was to control power loads and renewable devices to optimize the use of renewable energy. The output of this decision tree was one of three possible scenarios, and the result, every hour, was used as the input to the energy management system. If generation > consumption, electric devices (loads) needed to be connected to exploit the surplus energy. If consumption > generation, no control actions were taken. Since actual consumption and generation can differ from the forecast, real-time control must also be implemented. The aim of the optimization process is shown in Figure 17. The sampling time in this process was t = 1 h. Figure 17. The figure on the left shows the forecast: there are hours in which generation will be higher than consumption. It is necessary to connect programmable loads so that the consumption and generation are those of the figure on the right. If the generation is greater than the consumption, as happens at certain times, then the system must act by connecting automated loads. The figure on the right shows the new situation when the automatic activation of other loads occurs. The objective is to optimize the use of the generated energy and minimize the need for batteries (storage).
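The hourly decision rule can be sketched as follows. The largest-first load selection policy is an assumption for illustration; the paper does not specify in which order programmable loads are connected.

```python
def control_action_w(pred_generation, pred_consumption, programmable_loads):
    """Hourly rule: if generation exceeds consumption, connect programmable
    loads (largest first, an assumed policy) while surplus remains;
    otherwise, take no control action."""
    surplus = pred_generation - pred_consumption
    connected = []
    for load in sorted(programmable_loads, reverse=True):
        if 0 < load <= surplus:
            connected.append(load)
            surplus -= load
    return connected

# surplus of 800 W: the 1000 W load does not fit, the 500 W and 300 W loads do
plan = control_action_w(2000, 1200, [1000, 500, 300])  # -> [500, 300]
```

When consumption exceeds generation, the function returns an empty plan, matching the "no control actions" scenario; real-time corrections would run on top of this hourly plan.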

Experimental Work: Installation and Operation
At this level, the architecture was designed, as well as the functions and characteristics of the devices and nodes. In addition, the classification and prediction models that ran on different nodes of the network were tested and validated. The operation processes integrated all the previous ones at different levels and put into operation other management, monitoring, and control services. This section shows the final algorithms implemented and their results in the experimental work. The designed services were installed in different edge nodes. One of them was installed in the main electrical box of each house, and another in the electric control panel of the renewable energy generation unit. All nodes communicated with each other using the IoT MQTT protocol. A cloud platform was used to obtain dashboard panels and event-based controls designed using cloud resources. The power consumption facility is shown in Figure 18. This subsystem allowed implementing algorithms with load classification services, consumption prediction, and connection control. The renewable subsystem was used to predict power generation. Figure 18. Experimental subsystem installed in the houses. The node can implement the classification, prediction, and control algorithms proposed in the learning and operating phases.

Operation Process to Predict Power Generation and to Control
The system shown in Figure 19 represents the integration of the different subsystems on different nodes. To predict renewable generation, a process took the statistical generation data of the area, analyzed the weather forecast, and corrected said data according to the climate. With this heuristic, the prediction of the generated power could be made, obtaining the necessary data to feed the control process and complete the optimized control process. Edge nodes calculated consumption and generation. The optimization and control process could be developed in fog or edge nodes that received the data from the generation and consumption units. Figure 19. Processes installed in the different devices integrated in the model. The figure shows their integration using a flow diagram. Each process implements the applied services of detection, classification, or prediction in different nodes. These nodes can be edge or fog nodes, depending on the type of objective. The figure shows how the generation node and the consumption node provide the necessary data so that the load activation control process can act or not.
The aim was to control power loads and renewable devices to optimize the use of renewable energy. To control the installation, first of all, a decision tree that detected whether generation was greater than consumption was developed. If this event occurred, a control algorithm then decided whether it could automatically connect a load to take advantage of that energy. These integrated services are shown in Figure 19.

Operation Process to Classify Device Connections
The algorithm that detected and classified the connection type is displayed in Figure 20. The detected connection and its classification were stored in a file and were also sent to the UBIDOTS platform, used as a cloud service. This algorithm was executed in an edge node, shown in Figure 18. Figure 21 shows how power prediction was developed on the edge node. Every hour, the dataset was updated with the latest captured data, and the algorithm was then executed with the updated dataset to predict the consumption of the next hour. This algorithm was executed in an edge node, shown in Figure 18. A model based on regression trees was designed: every hour, the previous consumptions were analyzed and introduced into the model, and the model made a prediction.

Findings
The major findings of the research were:

•
The design and implementation of a set of algorithms in different nodes according to the defined architecture: The objective was to propose an architecture that was simple to implement and powerful enough to develop different services. The experimental results confirmed the objectives. Figure 19 shows the relationship of the different algorithms implemented and in operation in the experimental unit, and Figure 22 shows generation and consumption data on the cloud platform.

•
The design and implementation of a set of classification and prediction algorithms, based on artificial intelligence paradigms, designed and implemented in the nodes of the proposed architecture: Figure 22 shows the results of the algorithms in a control panel.
The consumption prediction section of this figure shows, in orange, the energy predicted by the prediction algorithm installed in the edge node that captured the consumption data; for this first version of the prediction algorithm, the mean squared error is shown. In green, the real consumption data are shown.

•
The classification module in Figure 22 shows the results of the load detection and classification algorithms implemented in the different nodes. In Figure 23, a classification report is shown; the classification precision in this facility was greater than 86%. A prediction report is also shown in Figure 23; the mean squared error over the last three days (64 samples) was 0.096.

•
The data generated by all the algorithms, installed in different nodes, were displayed in the same panel, managed by the fog node and by the cloud. This also demonstrated, as a result, the ease of design and installation of the proposed architecture.

•
Easy installation and operation, as shown by the installed devices and the implemented algorithms: The economic costs were optimized using prediction algorithms and by minimizing storage use.
If new batteries are not installed and programmable loads are managed, renewable units are easier to install and maintain. In the experimental unit, the installed solar panels were amortized in less than four years with the energy produced, without installing batteries. All this was valid for installations connected to the grid, where the objective was to reduce energy dependence and generate savings of 30% with a short repayment period.

•
With the embedded controller, in certain installations, the installation of batteries can be eliminated.
The new controller activated loads automatically and distributed the generated energy. In these small facilities, the amortization period of the installation was less than five years. Without batteries, start-up and maintenance are easier. This advantage is applicable to homes connected to the grid, with the aim of reducing the energy cost.

Conclusions
A new model of integration for the development of energy management facilities has been proposed in this work. The model was based on the use of IoT communication protocols and artificial intelligence paradigms applied to the classification, detection, prediction, and control of consumption and power generation systems. The artificial intelligence algorithms were deployed at two levels: the edge level and the fog level. Each level had different capacities and associated services. These two levels introduced a distributed architecture based on nodes. Each node could play a different role in terms of its computing and storage capabilities. All nodes communicated with each other using IoT communication protocols. The method made different levels of data processing interoperable with devices adapted to the services performed. In this work, algorithms were proposed to predict consumption and electrical generation, in addition to developing a load classification system. All these processes were installed in homes with renewable generation units. The results of these algorithms can be improved by investigating new artificial intelligence paradigms that fit the data better. New algorithms with new services can also be installed on the platform, in the same way as those performed in this work. The experimental results showed the benefits of the model: the ease of design, installation, and operation, as well as optimized costs for the level of services developed. In future work, we plan to develop an API that integrates all the services developed in the intranet of the smart grid. This API will complete the interface that local and external systems will use to develop new services.
Funding: This research was funded by the Industrial Computers and Computer Networks program (Informatica Industrial y Redes de Computadores (I2RC)) (2018/2019) funded by the University of Alicante, the Wak9 Holding BV company under the eo-TICC project, and the Valencian Innovation Agency under the scientific innovation unit (UCIE Ars Innovatio) of the University of Alicante at https://web.ua.es/es/ars-innovatio/unidad-cientifica-de-innovacion-ars-innovatio.html.

M2M Machine-to-Machine
API Application Programming Interface
EMS Energy Management Systems
HVAC Heating Ventilation and Air Conditioning