High Performance IoT Cloud Computing Framework Using Pub/Sub Techniques

Abstract: The Internet of Things (IoT) is attracting attention as a solution to rural sustainability crises, such as slowing income, export, and growth rates caused by the aging of industries. To develop a high-performance IoT platform, we designed and implemented an IoT cloud platform using pub/sub technologies. This design reduces management and communication overhead, even in harsh IoT environments. In this study, we achieved high performance by combining two pub/sub platforms with different characteristics. As the size and frequency of data acquired from IoT nodes increase, we improved performance through the MQTT and Kafka protocols and a multiple-server architecture: MQTT was applied for fast processing of small data, and Kafka was applied for reliable processing of large data. We also mounted various sensors and actuators, such as the DHT11, MAX30102, WK‑ADB‑K07‑19, and SG‑90, to measure growth data from each device using these protocols. Performance evaluation showed that the MQTT–Kafka platform implemented in this research is effective in environments where network bandwidth is limited or large amounts of data are continuously transmitted and received. Specifically, the average response time for user requests was within 100 ms, data transmission order was verified for more than 13 million requests, data processing throughput averaged 113,134.89 records/s, and 64,313 requests per second were served for requests arriving simultaneously from multiple clients.


Introduction
Most Internet of Things (IoT) companies incorporate various IoT framework technologies to transmit and receive real-time data from sensors and manage them. These can be used to assess and control variables such as temperature, humidity, vibrations, or shocks during product transport [1]. Therefore, the application of the IoT in various sectors, especially in the manufacturing execution system field, can impact resource efficiency and significantly improve production capacity. However, several challenges need to be addressed to adopt IoT [2]. One of these challenges is processing and analyzing the vast amounts of data coming from heterogeneous devices [3]. Furthermore, processing all this collected data directly on a central server is inefficient and sometimes impractical due to limited computing, communication, and storage resources, overall energy and cost, and unreliable latency. To address these challenges, we introduce the concept of an IoT cloud platform, where data processing tasks are pushed to the IoT cloud. There are two major elements of the platform implemented in this research: Message Queueing Telemetry Transport (MQTT) and Apache Kafka. The MQTT broker is responsible for exchanging messages between the various sensors and actuators in the IoT. Kafka reliably sends large amounts of data generated in the IoT to consumers. In this work, we used MQTT and Kafka together to take full advantage of the different characteristics of these platforms: MQTT to transmit small data with low latency, and Kafka to transmit large data reliably.

The overall system architecture mainly consists of several elements, as shown in Figure 1. First, the sensors installed on the IoT nodes and the cloud/cluster application responsible for the sensor devices are required. Second, an MQTT broker collects data acquired from an IoT node and transmits it to the computing platform.
Based on the publish/subscribe model, an MQTT broker maintains multiple subscribers, each of which is subscribed to a particular topic, and forwards the data as they are received. Third, the cloud platform in this research served to connect the MQTT and Kafka protocols. It plays a key role in reducing the burden on IoT devices in charge of sensors by placing the device serving as the corresponding platform physically close to the IoT. The fourth is the Kafka cluster. Kafka clusters communicate using the Kafka protocol. Like MQTT, it uses a pub/sub model and serves to stream large amounts of continuous data from sensors based on events. Finally, a web frontend visualizing the Kafka data is implemented using ThingsBoard [12], which is basically an IoT dashboard platform but was customized in this research. During data processing, we used MongoDB to store data. MongoDB is a document-based database engine that can store and retrieve unstructured data without schema as it is in JSON format, so it is very conveniently used in web-related fields. Since we initially created the data in JSON format in this project, we applied it for convenience.
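The topic-based forwarding described above can be sketched as a minimal in-process publish/subscribe broker. This is an illustration of the pattern only, written in the Node.js/JavaScript environment the platform uses; the class and method names are hypothetical and are not part of Mosquitto or any real MQTT implementation.

```javascript
// Minimal sketch of the publish/subscribe pattern the MQTT broker applies:
// subscribers register per topic, and published data is forwarded to every
// subscriber of that topic as it is received. Names are illustrative.
class MiniBroker {
  constructor() {
    this.subscribers = new Map(); // topic -> array of handler functions
  }
  subscribe(topic, handler) {
    if (!this.subscribers.has(topic)) this.subscribers.set(topic, []);
    this.subscribers.get(topic).push(handler);
  }
  publish(topic, payload) {
    const handlers = this.subscribers.get(topic) || [];
    for (const h of handlers) h(payload); // forward as received
    return handlers.length;               // number of subscribers reached
  }
}
```

A real broker adds connection handling, QoS levels, and wildcard topic matching, but the forwarding contract is the same: publishers and subscribers never address each other directly, only topics.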
The whole system consists of an IoT cloud with multiple MQTT clients and multiple nodes, as shown in Figure 2. Each client connects to the IoT cloud platform to send and receive data. The IoT cloud node is mainly composed of two components: the MQTT component and the Kafka server. The MQTT component is a broker and subscriber to the MQTT protocol. This allows for immediate data transfer as well as other operations after receiving the data. In this research, specific data is sent to Kafka after being received from a component, for reliable data storage and transmission. The data generated by the sensor is processed by a NodeMCU and sent to the MQTT broker. The broker then sends it to the IoT cloud platform with multiple subscribers. The platform internally switches from the MQTT protocol to the Kafka protocol and sends the data to the Kafka cluster. After receiving data from the Kafka cluster, it forwards the received data to multiple consumers. Figure 3 depicts the interaction sequence among the components that make up this system. We achieved reliable, high-performance, machine-independent interactions using pub/sub technology and the REST API, providing services to multiple users at the same time by managing sessions for each user.
To build a highly scalable structure that can respond to numerous user requests, we applied a scalable computing architecture, including elastic computing nodes, distributed storage, and load balancing such as ALB and ELB.
In terms of the data acquisition approach, we simulated the IoT cloud platform using several IoT elements. Most IoT applications use the Raspberry Pi series because of its low cost and powerful performance, so we also used a Raspberry Pi to configure the IoT cloud node in this research. This design lowers the barrier to entry when the platform is applied in real agriculture. The MLX90614 is an infrared thermometer for non-contact temperature measurements; both the IR-sensitive thermopile detector chip and the signal-conditioning ASIC are integrated [12]. The MLX90614 integrates a low-noise amplifier, a 17-bit ADC, and a powerful DSP unit, achieving a thermometer with high accuracy and resolution. The addresses for accessing information on the device are shown in Table 1. Ta is the ambient temperature, and Tobj1 and Tobj2 are the object temperatures. The result has a resolution of 0.02 °C and is available in RAM.
The temperature information obtained from the MLX90614 is accumulated in a database and communicated to multiple users through the IoT cloud platform. Real-time temperature information is displayed to the web service user, and the actuator operates according to the temperature.
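The temperature-driven actuation above can be expressed as a simple control rule. The following is a hypothetical sketch: the 18 °C and 28 °C thresholds and the command names are illustrative assumptions, not values taken from the paper.

```javascript
// Hypothetical control rule: derive an actuator command from the latest
// temperature reading. Thresholds (18 and 28 degrees C) are illustrative
// defaults, not the paper's actual configuration.
function actuatorCommand(temperatureC, low = 18, high = 28) {
  if (temperatureC < low) return 'HEATER_ON'; // too cold: drive the heater
  if (temperatureC > high) return 'FAN_ON';   // too hot: drive the fan
  return 'IDLE';                              // within range: do nothing
}
```

In the deployed system, a rule of this shape would run on the cloud side and publish the resulting command back to the device topic over MQTT.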
In practice, several types of actuators are used. In this research, the actuator role is played by an SG90 servo motor. The SG90 is a tiny, lightweight servo motor with high output power [13]. It can rotate approximately 180 degrees (90° in each direction) and works just like the standard kinds, but is smaller. The servo motor provides feedback on whether the data obtained from the sensor is being processed properly.
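Hobby servos such as the SG90 are typically driven by a 50 Hz PWM signal whose pulse width maps roughly linearly to the target angle. The endpoint values below (about 500 µs at 0° and 2400 µs at 180°) are common defaults and an assumption here; individual units need calibration.

```javascript
// Sketch of the angle-to-pulse-width mapping commonly used to drive an
// SG90-class servo on a 50 Hz PWM signal. The 500-2400 microsecond
// endpoints are typical values assumed for illustration, not measured
// values from this paper's hardware.
function servoPulseWidthUs(angleDeg, minUs = 500, maxUs = 2400) {
  const a = Math.min(180, Math.max(0, angleDeg)); // clamp to 0-180 degrees
  return Math.round(minUs + (a / 180) * (maxUs - minUs));
}
```
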

Each piece of hardware introduced above operates on an independent IoT device and communicates with the IoT nodes using the MQTT protocol, which is relatively lightweight compared to HTTP. Each piece of hardware interacts with the IoT cloud platform and exchanges large amounts of data. The sensors are not directly interconnected; they operate through the platform, which means data can be processed efficiently without unnecessary communication.

Data Processing Based on Publish and Subscribe Architecture
The MQTT protocol provides a lightweight method of carrying out messaging using a publish/subscribe model. This makes it suitable for Internet of Things messaging, such as with low-power sensors or mobile devices such as phones, embedded computers, or microcontrollers [14]. Based on the publish/subscribe model, an MQTT broker remembers the multiple components subscribed to a particular topic and forwards the data as it is received. The IoT cloud platform makes use of the MQTT and Kafka protocols. It plays a key role in reducing the burden on IoT devices, because the devices only serve the corresponding platform, which is physically close to the IoT. Kafka clusters communicate using the Kafka protocol. Like MQTT, it uses a publish/subscribe model and serves to stream large amounts of continuous data from sensors. In this research, we provide a web-based dashboard platform for monitoring data, configured using "ThingsBoard" [15,16]. ThingsBoard is an open-source IoT dashboard [17] platform designed here to store data in MongoDB [18,19].
After we collect sensor data generated from the sensor module located in AREA 1, the data are published to subscribers through the IoT cloud platform, as shown in Figure 4. The data are classified by the MQTT component by topic (e.g., temperature, sunlight, rainfall). Figure 5 shows that topic-partitioned data is transmitted to the Kafka cluster and replicated to each broker in the cluster (each partition of the server). Every partition has one server that acts as the leader for all read/write operations within the server, and the other servers act as followers of this leader. If a leader goes down or fails, by default one of the followers on another server is chosen as the new leader. Producers can direct specific messages to selected partitions within a topic, and consumers can consume published messages based on topics. Messages are delivered to consumer instances within the subscribing consumer group.
In Figure 6, the topic-partitioned data is transmitted to the Kafka cluster and replicated to each broker in the cluster (each partition of the server). Afterward, at the request of a consumer group, each broker in the Kafka cluster distributes its data, so the system can transmit large amounts of data. Data is increasingly produced at the level of the network; therefore, it is more efficient to also process the data at the level of the network. The IoT cloud platform eliminates bottlenecks and potential points of failure and enables rapid recovery from failures. The server in the IoT cloud performs functions for analysis and visualization of the collected time-series data. In this way, the load on the server is reduced by dividing roles according to their characteristics, which makes the server operate reliably. For this reason, our platform also aims to reduce response time and latency by caching content [11]. The IoT cloud platform can be used wherever computing is used, such as location-based services, the Internet of Things (IoT), data caching, big data, sensor monitoring, mobile cloud, and others.
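Directing related messages to a chosen partition is typically done by hashing a message key modulo the partition count, so that, for example, all readings from one sensor stay in order within one partition. The sketch below is a simplified stand-in for Kafka's default partitioner, not Kafka's own code; the hash function is illustrative.

```javascript
// Simplified key-based partition selection: hashing the message key modulo
// the partition count keeps all messages with the same key (e.g., one
// sensor's readings) in the same partition, preserving their order there.
// This is a stand-in for a real Kafka partitioner, for illustration only.
function partitionFor(key, numPartitions) {
  let hash = 0;
  for (const ch of String(key)) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return hash % numPartitions;
}
```

Because the mapping is deterministic, any producer instance computes the same partition for the same sensor ID, without coordination.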

System Implementation
We created the dashboards for real-time data visualization and remote device control using a websocket-based framework [17]. Using our customized widgets, we established our IoT dashboards. These collect and store telemetry data in a scalable and fault-tolerant way and visualize data with built-in or custom widgets and flexible dashboards. They also define data processing rule chains, transforming and normalizing device data and raising alarms on incoming telemetry events, attribute updates, device inactivity, and user actions. Figure 7 represents the various sensors and actuators used in this research and shows how data are transmitted among the system components. Table 2 shows the versions of the modules used in this research. The server runtime is Node.js, based on the JavaScript language, and packages suitable for Node.js are configured so that the server works well. When a sensor publishes data on a specific topic, the MQTT component receives it and classifies it into data for direct processing and data for processing through Kafka. After an MQTT client establishes a connection to an MQTT broker, it is set up to send the sensor data for that IoT node every 100 ms. By maintaining the established connection between the client and the broker, the burden of expensive network connection setup is reduced. Figure 8 shows the actual payload data transmitted through the publish and subscribe architecture in this work.
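The classification step can be sketched as a small routing function on the cloud node. The topic names, the set of Kafka-bound topics, and the field names below are hypothetical illustrations of the direct-versus-Kafka split, not the platform's actual configuration.

```javascript
// Sketch of the routing step: when a message arrives from a sensor, the
// MQTT component either handles it directly (latency-sensitive readings)
// or routes it to Kafka (data that must be stored and delivered reliably).
// Topic names and the KAFKA_TOPICS set are illustrative assumptions.
const KAFKA_TOPICS = new Set(['temperature', 'humidity']); // reliably stored

function routeMessage(topic, payloadJson) {
  const record = JSON.parse(payloadJson); // payloads are JSON, per the paper
  record.receivedAt = Date.now();         // tag arrival time on the node
  return {
    route: KAFKA_TOPICS.has(topic) ? 'kafka' : 'direct',
    record,
  };
}
```

Keeping the decision in one place means a new sensor type can be rerouted between the fast path and the reliable path by editing a single set.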
Many sensor readings take constant values, except under special circumstances where they exhibit unusual values. In this case, it is important to reliably transfer the desired data between successive sets of data, and Kafka within our platform does this. In Kafka, the received data is shared on the IoT cloud platform with the multiple consumers who consume it, and the server then sends the processed data to different services.
When data is sent through Kafka, consumers such as databases and web servers consume the data immediately and proceed as follows: after receiving data from the Kafka cluster, the data is accumulated through the MongoDB connector, and MongoDB queries then serve the stored records. The left side of Figure 8 shows the temperature and humidity values printed for every datapoint received, and the right side of Figure 9 represents our user interface screen, which depicts the temperature and humidity values graphically.
We also store the values in a database management platform, specifically MongoDB. MongoDB is a cross-platform, document-oriented database system. Classified as a NoSQL database, MongoDB avoids traditional table-based relational database structures in favor of JSON-like documents with dynamic schemas. This makes data integration easier and faster for certain kinds of applications. Since we use the Node.js platform, communicating with a JSON-based DB is more efficient for development, and storing data as document-oriented JSON is straightforward. Our platform visualizes the data through the graph tool on the web using ThingsBoard. The dashboard shows the status of the IoT, which can be checked instantly. Figures 10 and 11 are the dashboard implementations in this research. They show a time-series graph according to the access time. Location information can also be managed as longitude and latitude values and displayed on a map. The criteria for the alarm function can be set by the user, so if a value is out of range, an alarm is automatically displayed on the dashboard. Users can also operate connected actuators via the RPC API provided by ThingsBoard.
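Because both the Kafka payload and MongoDB documents are JSON-like, the consumer can persist a record almost as-is, adding only the metadata a time-series query needs. The field names below are hypothetical; an actual insert would go through the official MongoDB Node.js driver (e.g., `collection.insertOne(doc)`), which is omitted here to keep the sketch self-contained.

```javascript
// Sketch of the document shape stored in MongoDB: the Kafka record maps
// almost directly onto a JSON document, plus identifying metadata. Field
// names (sensorId, topic, value, ts) are illustrative assumptions.
function toDocument(sensorId, topic, record) {
  return {
    sensorId,                             // which IoT node produced the reading
    topic,                                // MQTT/Kafka topic it arrived on
    value: record.value,                  // the measured value itself
    ts: record.receivedAt ?? Date.now(),  // timestamp used for time-series queries
  };
}
```
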
The upper left knob in Figure 10 is used by the administrator to adjust the temperature of the IoT facility. When this knob is turned on the dashboard, the temperature set value is transmitted to the IoT node to drive a heater or fan. The power switch is used to turn the system on and off. The upper right corner of Figure 10 shows the time series of values measured in the system as a line graph of temperature and wattage. The lower left of Figure 10 provides the user with information on major events and alarms that occur.
The upper left of Figure 11 provides the name of each sensor node registered in our system and the information collected from that node in real time. It also provides the latitude and longitude of where the node is installed. The right side of Figure 11 shows the location where the sensor node is installed on the map. Therefore, if the IoT facility that the manager wants to monitor is distributed over multiple regions, it is easy to visually check which region the data is coming from.
Figure 11. Second dashboard screen capture. (In this figure, the Korean term "장치" means "device" and the statement "아무 알람도 없습니다" means "there are no alarms".)

Performance Analysis
In this paper, we conducted a performance evaluation for our high-performance IoT cloud computing framework. In this section, we conducted an evaluation of the following four items: concurrent client connections per server, pub/sub data transmission order guarantee, pub/sub data processing performance, and temperature/humidity measurement information analysis performance. Figure 12 shows the overall system architecture for performance evaluation.

11009
11 of 23 the manager wants to monitor is distributed over multiple regions, it is easy to visually check which region the data is coming from.

Performance Analysis
In this paper, we conducted a performance evaluation of our high-performance IoT cloud computing framework. In this section, we evaluated the following four items: concurrent client connections per server, pub/sub data transmission order guarantee, pub/sub data processing performance, and temperature/humidity measurement information analysis performance. Figure 12 shows the overall system architecture for the performance evaluation. For the components in Table 3, we used Apache JMeter version 5.4.1. Apache JMeter is an open-source Java application designed to load-test functional behavior and measure performance. It has been extended from its original purpose of testing web applications to other testing capabilities. Plugins supporting various protocols were additionally configured to use the IoT cloud computing platform. In the case of the Kafka client, the consumer creation function was insufficient, so it was additionally configured using a JSR223 script. In Kafka, multiple partitions can be configured in one topic to improve performance through distributed processing. Kafka can have multiple producers on a topic, and multiple consumers can subscribe to it. Table 3. System components for evaluation.

Type: Purpose
MQTT server: Server for processing sensor data
Kafka server cluster: Cluster server for image data processing and Kafka server stability
Apache JMeter: Creates virtual clients for testing the MQTT Kafka server

Response Time
After a request is sent from the smartphone, the time until the server completes processing and returns a response was measured. To do this, we repeated the same experiment 10 times and calculated the average. The procedure used to measure the server's response time was as follows.
(1) After running Postman, send a POST request to obtain a JWT token using Table 4, as shown in Figure 13.

Table 4. Request parameters for JWT token acquisition.

Content-Type: application/json
Accept: application/json
Body: { "username":"xxxxxx@thingsboard.org", "password":"xxxxxx" }

(2) Run JMeter as an administrator after acquiring the JWT token.
(3) Select File -> Open and load the test data jmx file.
(4) Input the variables corresponding to the user-defined variables using Table 5 (e.g., the value 24b14a40-7ff0-11ec-88a7-2d9d3861528f with scope ANY).
(5) Press Ctrl + R to run the performance evaluation.
(6) Repeat step (5) ten times with the Table 6 parameters, measure the time until a response arrives each time, and take the average value to calculate the processing time in ms.

Table 6 shows a comparison of the average response times described above, according to the execution time. We can see that the response times for MQTT disconnects and publishes were around 100 ms. Since MQTT and Kafka are both TCP-based protocols, the initial response time was somewhat higher due to the initial connection setup of socket communication. After that, a difference of about 400 ms continuously occurred during data transmission and reception. Based on this, the MQTT Kafka platform implemented in this research can be considered effective for use in environments where network bandwidth is limited or a large amount of data is continuously transmitted and received.
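As a concrete illustration of the token request in step (1), the Table 4 parameters can be assembled into an HTTP request outside Postman. This is a minimal sketch: the host and the /api/auth/login path are assumptions (a ThingsBoard-style login endpoint), not values taken from the paper, and the request is built but not sent.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class JwtLoginRequest {
    // Builds (but does not send) the POST request described in Table 4.
    // The host and /api/auth/login path are assumed for illustration.
    public static HttpRequest build(String username, String password) {
        String body = "{\"username\":\"" + username + "\",\"password\":\"" + password + "\"}";
        return HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/auth/login"))
                .header("Content-Type", "application/json")
                .header("Accept", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = build("xxxxxx@thingsboard.org", "xxxxxx");
        System.out.println(req.method() + " " + req.uri());
    }
}
```

Sending this request with `HttpClient.send` would return the JWT token used in the subsequent JMeter steps.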
In Figures 14 and 15, we can see the overall information of the performance evaluation performed by JMeter. The figure provides the number of responses, average value, minimum value, maximum value, standard deviation, error rate, bandwidth, received data size, transmitted data size, and average data size.


Concurrent Client Connections per Server
We carried out a performance evaluation of how many client requests a server application can handle simultaneously. To check whether requests generated from numerous IoT devices can be processed simultaneously, we measured the number of requests that could be connected to one server at the same time. To this end, we evaluated the number of connectable clients per second using a certified benchmark simulation tool. Specifically, we checked whether 50 or more clients, each making 200 requests at the same time, could complete more than 10,000 requests in 1 min, thereby evaluating whether the server could handle more than 10,000 requests per minute.

(1) Run Apache JMeter using Table 7.
(2) Add the mqtt-xmeter-2.0.2-jar-with-dependencies.jar and jmeter-plugins-graphs-basic-2.0.jar libraries to JMeter for test evaluation of the MQTT protocol.
(3) Set the following variables in the user-defined variables of Apache JMeter.
(4) Run the test for 1 min and judge the result by the average. We obtained the result shown in Table 8.

Figure 16 shows the processing performance per second of the system built in this study. We assumed a scenario in which 50 clients, each issuing 200 messages, were run concurrently. Each client operated in the following order: MQTT connection, 200 messages issued, and MQTT connection termination.
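The load target in this scenario is simple arithmetic; a minimal sketch using the numbers stated above (50 clients, 200 requests each, within 1 min):

```java
public class LoadTarget {
    // Total requests when `clients` virtual clients each issue `perClient` requests.
    public static int totalRequests(int clients, int perClient) {
        return clients * perClient;
    }

    public static void main(String[] args) {
        int total = totalRequests(50, 200); // the scenario described above
        System.out.println(total + " requests in 1 min = " + (total / 60.0) + " req/s sustained");
    }
}
```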
Appl. Sci. 2022, 12, 11009

Server-Client Data Processing Performance
In our pub/sub messaging structure, we evaluated the data processing performance per second for one topic, to determine whether processing of more than 58,000 datapoints per second was possible. To this end, an experiment was conducted to verify the data processing performance of the MQTT Kafka messaging structure. We established pub/sub clients for performance analysis in the following environment. We conducted the evaluation as in Figure 17, assuming simultaneous connection of MQTT and Kafka clients. Assuming one IoT client, 200 iteration evaluations per thread were performed. For each thread, we performed one client connection and termination, 50 MQTT publishes, and one consumer creation and termination. Plugins and extensions were required to handle MQTT and Kafka clients in JMeter. For MQTT, connect, terminate, and publish operations are provided independently. However, Kafka comes with a producer and consumer pair with integrated connection and termination capabilities. In the case of the consumer, to use the necessary functions, a script had to be created using JSR223 for script support in Java. Therefore, considering this environment, in the case of MQTT, connect, publish, and terminate were recognized as one work process and compared with the Kafka consumer.
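Treating connect, publish, and terminate as "one work process" amounts to timing the whole sequence as a single unit. The broker-free sketch below illustrates that measurement idea only; the Runnable steps are stand-ins for real MQTT sampler calls, which this snippet does not perform:

```java
public class WorkProcessTimer {
    // Times a sequence of steps (e.g., connect, N publishes, terminate)
    // as one unit, mirroring how the MQTT samplers were grouped for comparison.
    public static long timeMillis(Runnable... steps) {
        long start = System.nanoTime();
        for (Runnable step : steps) {
            step.run();
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long elapsed = timeMillis(
                () -> {}, // stand-in for MQTT connect
                () -> {}, // stand-in for 50 publishes
                () -> {}  // stand-in for MQTT disconnect
        );
        System.out.println("work process took " + elapsed + " ms");
    }
}
```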

We conducted the test several times by varying the number of client connection patterns and topics. As the time increased, the number of transactions processed was continuously maintained at over 100,000 messages per second. This makes the system suitable for processing and transmitting measurement data that occurs continuously in a real environment. Over a total of 10 repeated experiments, a minimum of 107,369.2 and a maximum of 117,603.6 datapoints could be processed per second. Figure 18 shows part of the experimental result log.
(1) Connect to the MQTT KAFKA configuration server.
(2) Enter the command in Table 9 to create a topic.
(3) Input the test code in Table 10 and check the output result.
(4) Check the data processing performance per second through the log output from the performance evaluation program.

Table 11 and Figure 18 outline the experiment to evaluate the transmission rate according to the data size. In this experiment, the performance evaluation was performed with a total of 10 million records. The maximum performance achieved was 117,693.6 records/s. Therefore, when transmitting large data on the MQTT Kafka platform, it is desirable to divide the data into chunks of a predetermined size matched to the maximum data rate.
Figure 18. Screenshot of data processing log for performance evaluation.
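The records-per-second figures above follow directly from a record count and an elapsed time; a minimal helper, with illustrative values rather than the paper's raw logs:

```java
public class Throughput {
    // Records processed per second, given a record count and elapsed milliseconds.
    public static double recordsPerSecond(long records, long elapsedMillis) {
        return records * 1000.0 / elapsedMillis;
    }

    public static void main(String[] args) {
        // Illustrative: 10 million records in ~88.4 s lands in the ~113,000 record/s range
        // reported in this section (the elapsed time here is an assumed example).
        System.out.println(recordsPerSecond(10_000_000L, 88_400L) + " record/s");
    }
}
```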
Since two or more partitions were configured in parallel on different brokers to distribute the request load, performance naturally improves. The test was conducted by varying the number of messages and the message size options transmitted to topics with one to three partitions. Table 12 shows the results of the experiments measuring the message processing performance of this system: the number of messages processed according to the size of the transmitted message, and the message throughput per unit time.
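Load is spread across partitions because each message key maps deterministically to one partition. The sketch below is a simplification for illustration only: Kafka's actual default partitioner hashes the serialized key bytes with murmur2, not Java's hashCode.

```java
public class PartitionSketch {
    // Simplified key -> partition mapping to illustrate distribution across partitions.
    // Kafka's real default partitioner uses murmur2 on the serialized key bytes.
    public static int partitionFor(String key, int numPartitions) {
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        for (String key : new String[]{"sensor-1", "sensor-2", "sensor-3"}) {
            System.out.println(key + " -> partition " + partitionFor(key, 3));
        }
    }
}
```

Messages with the same key always land in the same partition, which is what preserves per-key ordering while still parallelizing across brokers.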

Actual Data Acquisition Performance Measurement
In order to verify that the system built through the previous experiments can operate normally with actual sensor data, an information transfer experiment was conducted, transmitting the GPIO input of the temperature and humidity sensors to the server within 1500 ms. A total of 10 repeated experiments were conducted, as shown in Table 12. Information transmission was completed in a minimum of 196 ms and a maximum of 299 ms, with an average of 227.92 ms over the 10 experiments, as shown in Table 12 and Figure 19.

val randomString = RandomStringUtils.randomAlphanumeric(2940)
val randomStringLength = randomString.length + 60
for (i in 1..100) {
    val producerRecord: ProducerRecord<String, String> = ProducerRecord("throughput-test1", "key",
        "Consume data from kafka : Order = $i, Size = $randomStringLength, Message = $randomString")
    val future: Future<RecordMetadata> = producer.send(producerRecord)!!
    val result = future.get()
    println("Produce data from kafka : Offset = " + result.offset() + ", Size = " + result.serializedValueSize() + " bytes")
}
producer.flush()
producer.close()

Figure 19. Source code of performance evaluation for producer.
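The producer in Figure 19 sizes each payload as a 2940-character random alphanumeric string plus roughly 60 bytes of message framing, giving the approximately 3000-byte messages used in the order-guarantee experiment. RandomStringUtils comes from Apache Commons Lang; the following is a plain-JDK equivalent, shown for illustration:

```java
import java.util.Random;

public class PayloadGenerator {
    private static final String ALPHANUMERIC =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";

    // Plain-JDK stand-in for Commons Lang's RandomStringUtils.randomAlphanumeric(length).
    public static String randomAlphanumeric(int length) {
        Random random = new Random();
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(ALPHANUMERIC.charAt(random.nextInt(ALPHANUMERIC.length())));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String payload = randomAlphanumeric(2940);
        System.out.println("payload = " + payload.length()
                + " chars, ~" + (payload.length() + 60) + " bytes with framing");
    }
}
```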

In-Order Data Transmission
In the data transmission order guarantee experiment, conducted to verify that stability is maintained between data transmissions, about 3000 bytes of data were transmitted using the MQTT protocol, and the loss and order guarantee of the transmitted data were observed. To evaluate the guaranteed data transmission performance, the following performance evaluation method was applied.

(1) Run IntelliJ IDEA Community Edition and add the Spring-Kafka and Apache kafka-clients libraries to evaluate whether the data transmission order is guaranteed when transmitting 3000 bytes of data for a specific topic and single partition.
(2) The Kafka producer program source code is shown in Figure 19.
(4) Run the src -> main -> kotlin -> com.example.kafkatoy -> kafkaToyApplication.kt file. The example program for this performance evaluation was developed using Spring Boot. Therefore, since this program must be executed through the Spring server, we executed the main program in Figure 21 by creating objects for the producer and consumer classes to run the server. Later, when a user request arrives at the server, the server dispatches and executes it.
(5) Run Postman, enter the address to send the GET request to, as in Figure 22, and send it.
As a result of the experiment, the data transmission order according to all data transmission times was maintained correctly for more than 13 million requests. We confirmed the following as a result of the test. Our system 100% satisfied the data transfer order of 3000 bytes for a specific topic single partition. That is, for 100% of the transmitted data, we confirmed that all data transmission orders were guaranteed without any loss. Table 13 shows part of the actual data transfer log from lines 13,671,130 to 13,671,229, confirming that the data transfer order was guaranteed.
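The order and loss check reported above reduces to verifying that received sequence numbers increase by exactly one with no gaps; a minimal sketch of that verification:

```java
public class OrderChecker {
    // Returns true if seq is strictly consecutive: no loss (gaps) and no reordering.
    public static boolean isInOrder(long[] seq) {
        for (int i = 1; i < seq.length; i++) {
            if (seq[i] != seq[i - 1] + 1) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isInOrder(new long[]{1, 2, 3, 4})); // prints true
        System.out.println(isInOrder(new long[]{1, 2, 4}));    // prints false: record 3 lost
    }
}
```

Running such a check over the `Order = $i` field logged by the producer and consumer is sufficient to confirm the 100% in-order delivery claimed for the 13 million requests.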

Conclusions
Recently, research and development on indoor smart farm facilities has become popular. To develop such IoT facilities, we designed and implemented an IoT cloud platform using a publish/subscribe architecture. As a result of the experiments, the response time for user requests was measured to be within 100 ms on average, and 64,313 requests per second were processed for requests that occurred simultaneously from multiple clients. In addition, in the data transmission order guarantee experiment, conducted to verify that safety is maintained between data transmissions, the order was guaranteed without loss of information, even for more than 13 million requests. Finally, using real sensor data, information transmission was completed stably within an average of 227 ms. These results showed superior performance of the MQTT protocol for processing large amounts of data when compared with previous studies [12,13]. Of course, an absolute comparison is difficult because the server and network environments were not identical, but the results are meaningful in that we exceeded the data processing speed and throughput limits of previous studies. Through this study, it was possible to improve the processing speed of large-capacity data and to ensure the stability of the transmission order, as well as the safety and reliability, of the MQTT protocol-based system.
√ We realized a high-performance IoT cloud platform architecture for data interworking between nodes, and this system also provides the ability to record key facts.
√ As a result of the performance evaluation, our system is effective for use in environments where network bandwidth is limited or a large amount of data is continuously transmitted and received.
As a result, the pub/sub platform implemented in this research can maintain and verify the data collected from the planting and harvesting phases in a safe and secure manner.