Algorithms for Optimizing Energy Consumption for Fermentation Processes in Biogas Production

Abstract: Problems related to reducing energy consumption constitute an important basis for scientific research worldwide. The introduction of this article emphasizes the use of various renewable energy sources, including the creation of biogas plants. However, such installations require continuous monitoring and control to maximise their effectiveness. The authors took up the challenge of developing a computer solution that reduces the costs of maintaining technological process monitoring systems. Concept diagrams are presented of a metrological system using multi-sensor techniques, combining humidity, temperature and pressure sensors with Electrical Impedance Tomography (EIT) sensors. This approach allows for effective monitoring of the anaerobic fermentation process. To reduce the energy consumed during installation operation, algorithms were developed for determining alarm states, which form the basis for controlling the frequency of technological process measurements. Implementing this idea required the preparation of a measurement infrastructure and an analytical engine based on AI techniques, including an expert system and the developed algorithms. Numerous time-consuming studies and experiments have confirmed the reduced energy consumption, so the solution can be successfully used in biogas production.


Introduction
Energy policy has been one of the most important topics of recent decades, with great emphasis placed on using biogas plants as a renewable energy source. Diverse methods are used to produce biogas. Most often, the literature reports methane extraction from wheat or municipal sewage.
The literature indicates that there are over 300 biogas plants in Poland. Some can be adapted to produce biomethane using tools enabling biogas production. In the near future, companies such as PKN Orlen and PGNiG are planning to build a typical biomethane plant. Smog has been a growing problem in Poland for many years. It is the result of heating technologies based on burning wood or coal; these energy sources are also often used for food preparation. Therefore, the aim of many scientific works is to replace harmful sources with modern solutions that reduce the emission of harmful substances into the atmosphere. One proposal is to use biomethane, which is injected into transmission systems and thus delivered to individual consumers. This technology can also be used in the public sector, for example in garbage trucks, passenger cars or buses, which will have a measurable impact on the economy and the environment [1]. Reactors based on anaerobic processes open up various industrial and municipal applications. For example, they can be used for sludge or effluent management in wastewater treatment plants. This ensures low investment and operating costs for the biodegradation of organic compounds. The advantages are the limited amount of excess anaerobic sludge produced and the small footprint of the plant and associated equipment. The strength of digestion technologies is the ability to produce and capture biogas with a high methane content [2]. It has been shown that relatively high energy can be obtained from biogas: 1 m³ of biogas allows 9.4 kWh to be obtained, which is equivalent to using 1.25 kg of coal or one liter of diesel oil [1]. The process of obtaining methane is more effective when using materials with a significant proportion of organic matter. Anaerobic fermentation processes are widely used, with sewage sludge and municipal wastewater serving as feedstock [1]. Unfortunately, there are challenges with these processes. The pretreatment of the obtained material and the removal of biogenic compounds are problematic. It is also necessary to develop systems to maintain stable conditions inside the reactors, with particular emphasis on maintaining a high process temperature [2]. Piechota's article describes the biogas production process in detail [1].
Industrial tank reactors play an important role in the technological processes of biogas production [3]. An industrial reactor is a container for chemical and physical reactions. The purpose of using industrial tank reactors is to ensure optimum economic parameters for the processes. This is made possible by the optimal design of the reactor and the skilful overlapping of the three sub-processes that take place inside the reactor: mainly mass, momentum and heat transfer. Process control can be based on the dynamic selection of many key parameters. The experiment described here is applicable to reactors with solid-liquid and gas-liquid interactions [4].
In terms of control, monitoring is a very important part of the system. There are two main reasons for monitoring the states of dynamic systems. The first is to detect impending failures, which include damage to the process infrastructure, excessive deviations in key process parameters or interruptions in process continuity. Secondly, an effective monitoring system aims to detect a problem early enough to take practical corrective action [5-7].
The second reason for using industrial process monitoring is the need to control the industrial process [8]. This is essential to ensure high quality. Effective monitoring methods must be used to control multi-phase processes, including chemicals that can dynamically change their state of aggregation. Given the aggressive conditions under which reactions occur in the reactor, this is a challenging task. Problems with the use of intrusive sensors include the inability to inspect any part of the reactor interior directly, the limited accuracy of the measurements taken, the need for multiple monitoring systems to operate simultaneously, and the high uncertainty of determining the dynamic state of the process from incomplete data (an indirect method) [5,9,10].
Current non-invasive monitoring techniques in industrial processes do not fully meet operational requirements. The resulting images of the monitored phenomena and processes are often blurred, confusing and difficult to interpret, and are full of errors regarding the number, size and location of the artefacts (crystals or gas bubbles) known to be present in the reactor. As a result, redundant systems are used to gather accurate information about the state of the monitored process, dramatically increasing operating costs [11].
The challenges and drawbacks of the monitoring methods for chemical tank reactors described above make it necessary to modify them. Implementing a better monitoring strategy will increase the reliability of the activities inside the reactors while reducing the operating costs of industrial systems.
In recent years, much work has been done to develop solutions that increase biogas production efficiency, such as the Fluidised Active Filling Reactor method [12] or computer-based approaches such as artificial intelligence algorithms that improve fermentation control processes [13]. For example, Temporal Fusion Transformers were used in Sappl's research to estimate the biogas production rate [14]. In other publications, the authors use artificial neural networks and genetic algorithms to optimise biogas production [15]. Standard parameters such as organic loading rate, temperature and pH are measured to monitor biogas production [16].
The research described in this article concerns the development of chemical process monitoring systems dealing with two-phase processes, including processes based on gas-liquid and liquid-solid reactions. The basis for biogas production is the use of tanks called reactors. Due to the presence of many phases, we are dealing with heterogeneous reservoirs. In order to increase the efficiency of the developed system, numerous types of sensors are used. Some of these allow measurements to be made in a non-invasive way; these include electrical process tomography sensors. An innovative element is the use of a decision tree to classify system states, as well as the use of other artificial intelligence algorithms. Each measurement case requires an individual selection of parameters and methods, while the developed solution allows the automatic selection of algorithms. This is reflected in the fast operation of the created tools and in the reduced impact of measurement disturbances. The solution also makes it possible to reduce operating costs. In this way, the platform automatically reacts to changes in the reactor and chooses how to schedule measurements. An original, intelligent system for effective control of chemical reactions using process electrical tomography is described in the subsequent sections.

Pressure Sensors
The MPL3115A2 atmospheric pressure sensor is a digital sensor with a built-in analogue-to-digital converter. The signal received from the sensor is handled over the TWI bus, through which the microcontroller communicates with the sensor. The received signal is processed and interpreted accordingly. The sensor has a 20-bit ADC that measures the pressure, and the results are then converted. The atmospheric pressure is measured in tenths of hPa.
Changes in the altitude at which the sensor was located did not cause any unexpected changes in the measurements. The sensor was operated for several days in an environment with varying conditions, where the accuracy of the readings was verified against the actual prevailing atmospheric pressure. The sensor did not show variations of more than 0.1 hPa in the measurements taken.
The sensor was configured to take one measurement per second, which was considered a perfectly adequate measurement frequency. In this mode, the sensor consumes 8.5 µA. It is planned to implement a standby mode during the intervals between measurements.
The data acquisition time of the sensor depends on the time of a single data conversion performed by the ADC fitted to the MPL3115 sensor and the i2c bus data transmission time. The execution time of a single measurement by the ADC can be estimated as <30 µs. The pressure measurement data occupies 20 bits. Based on an i2c bus speed of 100 kHz, the time required to send the data can be determined as ~200 µs. Therefore, the data acquisition time can be specified as <230 µs.
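The timing budget above can be reproduced with simple arithmetic. The following sketch uses only the figures quoted in the text (a <30 µs ADC conversion, a 20-bit result and a 100 kHz i2c clock) and ignores i2c framing overhead such as addressing and acknowledge bits:

```python
# Estimate the data acquisition time for the MPL3115A2 pressure readout.
# Values taken from the text: ADC conversion < 30 us, 20-bit result,
# i2c bus clocked at 100 kHz (~10 us per bit; framing overhead ignored).

ADC_CONVERSION_US = 30          # upper bound for a single ADC conversion
DATA_BITS = 20                  # width of the pressure result
I2C_CLOCK_HZ = 100_000          # standard-mode i2c

transfer_us = DATA_BITS * 1e6 / I2C_CLOCK_HZ   # time on the wire
acquisition_us = ADC_CONVERSION_US + transfer_us

print(f"transfer: {transfer_us:.0f} us, total: {acquisition_us:.0f} us")
```

Running this reproduces the ~200 µs transfer time and the <230 µs total acquisition time quoted above.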
The data rate is the frequency at which the sensor performs the measurement. This parameter is configurable. In this instrument, the sensor is set to send one measurement per second.

Temperature and Humidity Sensors
A 16-bit analogue-to-digital converter fitted to the sensor measures the relative humidity and temperature readings. The sensor sends a signal containing 16 data bits, including the relative humidity and temperature measurements, to the NRF microcontroller via the TWI bus. The received data is processed so that the temperature is transmitted with an accuracy of decimal parts of a degree, while the relative humidity is expressed as a percentage with an accuracy of 1%.
Measurements with the HTS221 sensor were taken under different conditions to test whether this would affect the readings. After analyzing the results, the measurement stability was ±1%. The sensor read the ambient air temperature with high accuracy; the sensor error can be determined to be 0.1 °C.
The HTS221 sensor was configured in a measurement mode with a frequency of one measurement per second. In future developments, it is planned to implement a sleep mode for the sensor to save energy. The sensor consumes a current of ~6 µA in its current configuration. Ultimately, after the sleep mode is implemented, the current consumption should be ~0.5 µA.
The data acquisition time of the sensor depends on the time of a single data conversion performed by the ADC fitted to the HTS221 sensor and the i2c bus data transmission time. The time taken by the ADC to perform a single measurement can be estimated as <30 µs. The sensor measures two physical quantities (humidity and temperature), so this time will be <60 µs (no exact data are available in the datasheet). The data frame sent by the sensor over the i2c bus occupies 32 bits. Based on an i2c bus speed of 100 kHz, the time required to send the data can be determined as ~320 µs. Therefore, the data acquisition time can be determined as <380 µs.
As the humidity and air temperature information does not need to be sent as frequently as data from other sensors, a variable measurement frequency is proposed. Currently, the sensors used in the laboratory setup are configured to transmit their readings, at both high and low measurement frequencies, once every five minutes. This interval has been set because quantities such as humidity, pressure and temperature do not change dynamically and are only slow-changing signals. Assuming a data transmission interval of one transmission every 5 min, and based on the capacity of the data frame transmitted at this interval, the speed of the transmitted data is determined to be 40 b/s.
The device was operated continuously for several tens of hours. Irrespective of external factors such as vibration, the device correctly adapted its measurements to variations in temperature and humidity. The measurements were also unaffected by dust, and data transmission showed no sensitivity to external stimuli. When the receiver scans the transmitting devices, the transmitting device always sends the current frame with the latest measurement data.
An energy-saving mode is planned for this device. The current power consumption of the device is shown in Figure 1. Note that the type of power supply affects the curve: here, the supply is battery-based, so it depends on the current capacity of the cell, and this value will decrease over time (Figure 1).

Expert System
The solution developed as part of the research is embedded in a complex platform belonging to the group of expert systems. Expert models allow for the simulation of the actions and decisions of organisations, people or machines using advanced statistical algorithms, often harnessed in the form of artificial intelligence algorithms. These models are typically used to augment, rather than replace, human experts [5]. They can improve their performance by accumulating data, much like humans' collective experience. Knowledge bases are the basis for working with expert systems: they allow information from real processes to be collected in the form of rules. An application using an expert system uses both an inference engine and a set of developed rules. This makes it possible to simulate real processes in a virtual environment [5].
Topics related to expert models have recently become an important part of artificial intelligence. This is because the tools previously used for data analysis have been found inadequate. Search techniques and computational logic have been used to solve various problems. Such an approach, as mentioned in [14], was definitely insufficient: it was limited to relatively simple problems or to complex problems in computer games. An important element affecting the effectiveness of algorithms is the number of parameters. These criteria force the use of more advanced knowledge-based tools. In a situation where the analysis space grows exponentially with the number of parameters, it is necessary to use solutions based on expert systems.
Rules used in expert systems to represent domain knowledge are usually obtained in a generation process. During data analysis, situation-action rules or, most often, if-then rules are created. The main elements of such databases are the entered facts. There are also rules based on heuristics, which make it possible to generate appropriate behaviors and decisions based on indirect information (facts) and the rules binding them. The main advantage of such solutions is the availability of a rich set of domain data describing in great detail the specificity or characteristics of objects or phenomena. Expert systems with the most knowledge are considered the best [2,14].
There are three main methods for extracting information from a knowledge base; one of them is used in the solution presented:
-Forward chaining: Based on the measurements, it is possible to determine the probability of plant or equipment failure. Such an analysis requires the facts to be read and processed, which allows a logical analysis and prediction of the consequences.
-Backward chaining: This approach allows an event to be explained or symptoms to be analyzed by reading and processing the facts, making it possible to analyze the causes of the events that have occurred.
-Event-driven: As data changes over time, the system should keep up. This mode is similar to the forward-chaining approach; it allows prediction of the system's subsequent activities and decisions, but the last state of the system affects the subsequent ones. This approach is used when operating in a real-time environment.
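The forward-chaining mode described above can be illustrated with a minimal rule engine: rules whose conditions are satisfied by the current fact set fire and add their conclusions, and the loop repeats until no new facts can be derived. This is an illustrative sketch, not the system's actual implementation, and the rule and fact names are hypothetical:

```python
# Minimal forward-chaining inference over if-then rules.
# Each rule is (set of condition facts, conclusion fact); all names are
# hypothetical examples, not the rules used in the described system.

RULES = [
    ({"temperature_high", "pressure_rising"}, "overheating_risk"),
    ({"overheating_risk"}, "raise_alert"),
    ({"humidity_stable", "temperature_stable"}, "process_nominal"),
]

def forward_chain(facts):
    """Repeatedly fire satisfied rules until no new facts are derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

result = forward_chain({"temperature_high", "pressure_rising"})
print(result)  # includes "overheating_risk" and, via chaining, "raise_alert"
```

Note the two-step derivation: the measurement facts first imply a risk, and the risk in turn implies an alert, which is exactly the "read facts, predict consequences" behavior described above.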
A good knowledge base is a key issue when introducing expert systems. An important feature is that the system is used by operators who do not have domain knowledge. In the system learning process, experts introduce a set of rules into the system, enriching it and thus increasing its efficiency.
The process of aggregating and maintaining knowledge is called knowledge engineering. It makes it possible to collect domain information and automate its application. Knowledge previously possessed by only a limited number of experts can thus be used by anyone who has a tool to solve the problems, while the group of specialists expands the possibilities by introducing new facts and rules.
The expert model and its position in the system are shown in Figure 2. The model consists of four modules:
-Machine learning model: This module contains a set of machine learning models responsible for forward chaining, backward chaining and event-driven inference [5]. The models aim to make appropriate quality predictions based on the knowledge accumulated in the database, tomographic data from impedance tomography and ultrasound, and sensors measuring various physical quantities on the production line.
-Database: The database is built from various data sources derived from sensor and tomographic data and from the predictions made by the machine learning models.
-Measurement sensors: Their role is to collect the data appropriately and pre-process it intelligently [2,14,16]. The raw data may contain too much redundant information, which can generate information noise that misleads machine learning models.
-EIT and UST tomography: The role of tomography is to obtain accurate in-line distributions of the physico-chemical parameters of interest. Tomography (in addition to the two measurement techniques) consists of two components: the measuring equipment and the algorithms that process the tomographic data [5].

The role of the developed expert model is to interpret the data from the different sensors and to create an intelligent feedback loop to optimise the production process in terms of energy efficiency and/or cost [2,11].
The expert model is part of the overall system; its location is shown in Figure 2. The user is able to specify key parameters, e.g., Key Performance Indicators (KPIs), a set of quantifiable measurements used to assess the overall long-term performance of a business or production.
The expert model's data, predictions and decisions can be reviewed and visualized through a user-accessible control panel.

Measurement Data Prediction
Data analysis and diagnosis are issues that require, among other things, the ability to deal with uncertainty in knowledge or data [17]. Fortunately, the technique of combining evidence allows the calculation of approximations, which are often used in numerical methods. There are indicators that determine the strength of evidence based on a binary value: confidence coefficients (in the probability domain). Other methods are also used to assess uncertainty, including those based on fuzzy set theory using a probability-based approach.
Uncertainty can also be reduced by introducing additional data; a certain amount of information redundancy significantly increases the reliability of the system. If incorrect beliefs are created in the knowledge base or incorrect lines of reasoning are established, we are dealing with incorrect or incomplete information entered into the information system. In the worst case, this leads to incorrect conclusions, which require re-entering the data and their justifications into the expert system.
Analysis allows complex problems to be broken down into smaller ones in different areas of the task. This is possible during design or diagnosis. The search space, divided into subproblems, becomes easier to explore. During analysis, it is possible to backtrack in the analysis tree; however, an exhaustive search method is often used so that backtracking is not necessary.
We can distinguish two approaches to backtracking. The first is chronological backtracking, considered a trivial approach, which simply returns to the previous selection point; intuitively, the so-called "generate and test" method is used. The second is based on recorded dependencies and is called dependency-directed backtracking.
In the developed system, a decision tree was created. Based on tomographic measurements, air humidity, temperature and object mass, it decides whether the examined material in the tank does not meet the assumptions (reject), is accepted (accept) or generates a re-examination signal (return). The decision tree is modeled based on the following characteristics of the inspection process:
-mass of the object;
-ambient temperature;
-ambient humidity;
-EIT (electrical impedance tomography) measurements.
The expected result classes of the process are:
-Accept (accepted state, e.g., with adequate humidity);
-Continue (sent for further processing, e.g., does not meet the required humidity parameters);
-Alert (the system is in an alert state, e.g., significantly exceeds the humidity parameters).
Each node of the decision tree contains an observation-splitting rule. If the splitting rule is satisfied, the branch to the left of the rule indicates the next test or terminal node (leaf); if the rule is not satisfied, the branch to the right indicates the next test or leaf.
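A tree of this left/right splitting form can be sketched as follows. The structure (rule satisfied → left branch, otherwise → right branch, leaves carrying the Accept/Continue/Alert classes) follows the description above, but the thresholds are illustrative; the actual tree in the system is built from the inspection data:

```python
# Hand-rolled decision node: if the splitting rule holds, follow the left
# branch, otherwise the right branch; leaves carry the result class.
# All humidity thresholds below are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, Union

@dataclass
class Node:
    rule: Callable[[dict], bool]   # splitting rule applied to one observation
    left: Union["Node", str]       # followed when the rule is satisfied
    right: Union["Node", str]      # followed otherwise

tree = Node(
    rule=lambda o: o["humidity"] <= 60,        # adequate humidity?
    left="Accept",
    right=Node(
        rule=lambda o: o["humidity"] <= 80,    # moderately exceeded?
        left="Continue",                       # send for further processing
        right="Alert",                         # significantly exceeded
    ),
)

def classify(node, obs):
    """Walk the tree until a leaf (a class label string) is reached."""
    while isinstance(node, Node):
        node = node.left if node.rule(obs) else node.right
    return node

print(classify(tree, {"humidity": 55}))  # Accept
```

In the full system, each observation would also carry mass, temperature and EIT features, and the rules would test those as well.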
In order to supply data to the decision tree, a signal monitoring algorithm has been built. Based on the readings from a time window, the signal readings are analysed, the behavior of the time series is modeled using statistical tests and, ultimately, "alarm" states are indicated. The algorithm is presented in the next section.

Proposed Algorithm
An algorithm for monitoring time signals was designed and implemented. The analysis of the signal readings takes the following form (Figure 3) (process state prediction). For each moment t ∈ N, we analyze the behavior of the elements of the series {x_j}, max(0, t−m) ≤ j ≤ t [17,18]:
1. If x_max(0,t−m) = x_max(0,t−m)+1 = ... = x_t, then the sensor readings are identical, so we have no alarming indications and we do not raise an alert.
2. In many cases, the series' elements fluctuate randomly around a certain level. We use the Wald-Wolfowitz runs test [19] to verify randomness. If the elements of the series are uncorrelated [17,20,21] and the condition of homogeneity of variance [22-24] is met, then we assume that the series {x_j}, max(0, t−m) ≤ j ≤ t, is a realization of a sequence of independent random variables with the same distribution, and therefore we do not raise an alert.
3. If the postulate of homogeneity of variance [17,18,22,23], based on the realization of the sequence {x_j}, max(0, t−m) ≤ j ≤ t, is not met, an alert should be raised due to the non-homogeneity of fluctuations around a certain level. In this case, we represent the series using GARCH-class models [19].
4. If the elements fluctuate randomly around a certain level but are correlated, then the series is represented using MA-class models [18,25], and we do not raise an alert.
5. If, based on the sample {x_j}, max(0, t−m) ≤ j ≤ t, the randomness postulate is not met, we first check the stationarity property, which we verify using the ADF test [26,27]. Fulfilling the stationarity postulate allows the series to be represented using ARMA-class models [19,26]. This case gives no alarming indications (we do not raise an alert).
6. If the stationarity postulate is not met, then, based on the realization of {x_j}, max(0, t−m) ≤ j ≤ t, the trend in the series should be identified [28]. To verify a trend in the time series, we use the tests [29-33] and model the trend using a polynomial [18,19] of a certain degree (if the degree is equal to one, there is a simple linear trend). We determine the degree of the polynomial using the differential method or by analyzing the linearity of the model, where the predictors are selected as successive powers of the time variable. Of course, the existence of a trend in the time series means that the oscillation level of the series (the expected value of its elements) changes over time. An alert should be raised in this case.
Based on the information about the predicted system state, it is possible to determine the measurement frequency for selected sensors connected to the control system. If the result of the testing algorithms indicates the absence of alarm states, it is possible to limit the measurement frequency. Otherwise, especially when the ADF test or the homoscedasticity test indicates an alarm condition, a new measurement frequency parameter is specified for the selected sensor. The maximum frequency value for the selected sensor and the change step are loaded in the first step. If the current value of the measurement frequency, increased by the step, is greater than the maximum value, the maximum possible value is set; otherwise, the frequency is increased by the given step (Figure 4). This solution allows for the dynamic selection of metrological parameters, significantly reducing the operating time of the devices, which is emphasized by the research results.
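The frequency-adjustment step described above can be sketched as a small function. The increase-and-clamp behavior on alarm follows the text directly; the reduction rule in the no-alarm branch (step down, clamped to a minimum) is our assumption, since the text only states that the frequency "can be limited", and all names are ours:

```python
# Adjust a sensor's measurement frequency based on the predicted alarm state.
# On alarm: increase the frequency by `step_hz`, clamped to `max_hz`
# (as described in the text). Without alarm: reduce the frequency by
# `step_hz`, clamped to `min_hz` (our assumption of how "limiting" works);
# this reduction is where the energy saving comes from.

def update_frequency(current_hz, step_hz, max_hz, min_hz, alarm):
    if alarm:
        return min(current_hz + step_hz, max_hz)
    return max(current_hz - step_hz, min_hz)

# An alarm pushes a sensor from 40 Hz toward its 50 Hz maximum (clamped):
print(update_frequency(40, 20, 50, 1, alarm=True))   # 50
# With no alarm, the same sensor backs off to save energy:
print(update_frequency(40, 20, 50, 1, alarm=False))  # 20
```

The clamping mirrors the description above: when the stepped-up frequency would exceed the maximum, the maximum possible value is set instead.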

Results
In order to verify the created computer measurement and control system, a number of time-consuming and complex tests were performed, using models and theoretical analysis as well as real devices. The key study was to determine the multi-sensor measurement system's stability and the maximum measurement frequencies of selected sensors. In the second part of the research, complex models of the measurement system were built, along with an analysis of the stability of the sensor measurement frequency control system.
To obtain a satisfactory result for testing the stability of the entire system, several tests were carried out, taking into account the individual devices in the system. The following test scenario was planned: testing the stability of the wireless sensor system over 10 h of operation.
Following the previously prepared scenarios, stability tests for the individual devices were performed. The results of the conducted research are presented below. The test of the operational stability of the sensor system consisted of a long-term examination of the correctness of the measurements obtained by the sensors involved in the test. The study lasted approximately ten hours and used three temperature sensors, each configured to operate with an identical measurement interval. The correctness of the measurements was checked every ten minutes, resulting in approximately 60 individual tests of the stability of the wireless sensor system.
If a given sensor operated correctly, a measurement value was obtained; if there was no communication with the sensor, zero was received. In order to standardize measurement conditions, all sensors were placed in one place, near each other. The results of the stability test of the wireless sensor system are presented in the chart below (Figure 5).

Considering the data presented in the above graph, it can be concluded that the wireless sensor system is characterized by high operational stability. Only one of the sensors stopped sending measurements, after about eight hours of continuous operation, and this was due to the battery running out; it does not indicate instability of the system as a whole. The other two sensors returned a correct measurement in every attempt during the entire test, i.e., they were 100% effective. This demonstrates the very high operational stability of the entire wireless sensor system.
In the second study, nine sensors of various types were used, each set to the shortest possible operation interval, i.e., 20 ms. This value results from the operation of the advertising mechanism in BLE technology. The set interval gives a theoretical operating frequency of 50 Hz per sensor. However, the actual frequency is lower because of the packet-collision-limiting mechanism in BLE, which adds a random delay of 0-10 ms to the broadcast interval. The actual broadcast interval therefore lies in the range of 20-30 ms, so the operating frequency of each sensor should range from 33 to 50 Hz. Considering the assumed test time of 60 s, each sensor should provide from 2000 to 3000 unique measurements. The results of the tests carried out on the performance of the wireless sensors are presented in the chart below (Figure 6).
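The interval arithmetic above can be reproduced in a few lines; the input values (20 ms base advertising interval, 0-10 ms random anti-collision delay, 60 s test window) come directly from the text.

```python
# Expected per-sensor measurement yield under the BLE advertising
# mechanism described in the text.
BASE_INTERVAL_MS = 20          # shortest configurable advertising interval
MAX_RANDOM_DELAY_MS = 10       # BLE adds a random 0-10 ms delay per packet
TEST_DURATION_S = 60

# Effective interval is 20-30 ms, so the frequency spans 33.3-50 Hz.
f_min = 1000 / (BASE_INTERVAL_MS + MAX_RANDOM_DELAY_MS)   # ~33.3 Hz
f_max = 1000 / BASE_INTERVAL_MS                           # 50.0 Hz

n_min = int(f_min * TEST_DURATION_S)   # 2000 measurements
n_max = int(f_max * TEST_DURATION_S)   # 3000 measurements

print(f"expected per-sensor yield: {n_min}-{n_max} measurements in 60 s")
```

This reproduces the 2000-3000 measurement band quoted in the text.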
The collected data shows that the highest unit efficiency during the tests was 2543 measurements per minute, and the lowest was 1878 measurements per minute. The differences may result from differences in the signal strength transmitted by individual sensors, caused by, e.g., insufficient battery voltage or differences in the antenna paths of the transmitting and receiving systems. The average performance during the test was approximately 2146 unique measurements per minute, i.e., approximately 36 measurements per second. The obtained value is within the adopted assumptions and confirms the correct operation of the entire system.
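As a quick cross-check of the throughput figures reported above (only the extremes 2543 and 1878 and the mean ~2146 measurements/minute come from the text), note that the worst sensor falls slightly below the 2000-3000 theoretical band, which is consistent with the signal-strength explanation.

```python
# Consistency check of the reported per-sensor throughputs against the
# theoretical 33-50 Hz band (2000-3000 measurements per minute).
observed = [2543, 1878, 2146]   # best, worst, and mean from the text
lo, hi = 2000, 3000             # theoretical band for a 60 s test

mean_per_sec = 2146 / 60        # "approximately 36 measurements per second"
print(round(mean_per_sec, 1))   # 35.8

for n in observed:
    status = "within band" if lo <= n <= hi else "below band"
    print(n, status)
```

The worst-case sensor (1878/min) sits just under the band, matching the text's remark that low battery voltage or antenna-path differences can degrade individual sensors.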
After obtaining confirmation of the stability of the measurement system, tests of the alarm detection algorithm developed as part of the research were performed. The first tests were performed on data from simulations; the results are presented in the graphics below (Figures 7-12).

Discussion
By analyzing the behavior of the time series, we examine whether the stationarity postulate is fulfilled. For stationary series, the joint distributions are invariant under a time shift, which means that the dynamics of the series remain unchanged over time. The developed algorithm allows for the detection of various types of non-stationarity (more precisely, of the dynamics of changes) occurring in time series, namely trends, integration and heterogeneity of the variances of fluctuations. If heterogeneity is detected using the algorithm, we additionally identify the behavior of the time series by selecting an appropriate model.
The Ljung-Box test [23] was used to detect relationships between time series elements. By analyzing the behavior of the elements x_j, t − m ≤ j ≤ t, which oscillate around a certain level, the homogeneity of these fluctuations is verified (i.e., the question must be answered whether the fluctuations differ over time). To verify heteroskedasticity, we use the White test [22] (1980), the Breusch-Pagan test [23] (1979) or the Goldfeld-Quandt test [24] (1965). We use the Augmented Dickey-Fuller (ADF) test to detect non-stationarity or integration in the series.
If non-stationarity is detected using the ADF test in the time series x_j, t − m ≤ j ≤ t, we fit the trend in this series with a polynomial of the appropriate degree [17,28,31-33].
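The trend-adjustment step can be sketched with a simple polynomial fit; the quadratic degree and the purely deterministic test signal below are assumptions chosen for illustration.

```python
# Polynomial detrending of a window flagged as non-stationary.
import numpy as np

def detrend(window, degree=2):
    """Fit a polynomial trend to a 1-D window and subtract it."""
    t = np.arange(len(window), dtype=float)
    coeffs = np.polyfit(t, window, degree)
    trend = np.polyval(coeffs, t)
    return window - trend, trend

t = np.arange(100, dtype=float)
signal = 0.01 * t**2 - 0.3 * t + 5.0   # exact quadratic trend, no noise
residual, trend = detrend(signal, degree=2)

# With a matching degree, the residual is numerically zero.
print(float(np.max(np.abs(residual))))
```

In practice, the degree would be selected per window (e.g., by an information criterion), and the residual series would be re-tested for stationarity.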
During the research, tests were performed on the devices, and their stability and the operation of the developed algorithms were verified.
A reduction in the energy required to perform complex measurements was achieved. Figure 12 shows the energy reduction for the temperature sensor; the algorithm was applied analogously to the other sensor types. Such optimization is particularly important for tomographic sensors, where a single study requires a series of costly unit measurements between subsequent pairs of electrodes of the device. The presented results confirm the effectiveness of the proposed method, and further research is necessary to determine better process parameters and increase the effectiveness of the developed solution.
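A minimal sketch of the frequency-control rule implied by the text and by the Figure 12 caption (the sampling frequency decreases when the state parameter is at or below v = 20). The concrete fast/slow rates and the one-hour duty-cycle scenario are assumptions; the number of samples taken is used as a proxy for energy consumed.

```python
# Threshold-based adaptive sampling: quiet process -> slow rate.
V_THRESHOLD = 20                # from the Figure 12 caption
FAST_HZ, SLOW_HZ = 36.0, 1.0    # assumed fast/slow sampling rates

def select_rate(state_parameter):
    """Pick the sampling rate from the current state parameter."""
    return SLOW_HZ if state_parameter <= V_THRESHOLD else FAST_HZ

# One-hour comparison for a process that is quiet 80% of the time
# (one state value per minute; samples taken stand in for energy).
states = [10] * 48 + [30] * 12
adaptive = sum(select_rate(s) * 60 for s in states)
always_fast = FAST_HZ * 3600

saving = 1 - adaptive / always_fast
print(f"energy saved vs. constant fast sampling: {saving:.0%}")
```

Under these assumed numbers the adaptive schedule takes 28,800 samples instead of 129,600, illustrating how alarm-driven frequency control translates into the energy reduction reported for the temperature sensor.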

Conclusions
Renewable energy sources such as biogas plants are becoming increasingly widespread, and therefore more and more scientific work is concerned with increasing the efficiency of the created installations. One aspect of proper operation is the appropriate control of the biological and chemical processes, where the amount of energy needed to ensure continuous operation of the biogas production system plays an important role. The article presents the essence of the problem, placing the research in the context of other scientific works. A new method was proposed to reduce energy consumption in the indicated installations. The approach assumes the use of a multi-sensor platform based on electrical process tomography. Algorithms have been designed and implemented to track changes in various process parameters over time, which allows for the dynamic selection of measurement parameters such as the monitoring frequency; this makes it possible to reduce energy consumption and thereby increase the efficiency of the entire process. Artificial intelligence techniques were used to complete the task, including expert systems with data analysis. The solution offers many possibilities, but further research is still necessary. The authors have prepared a solution for general sensors; in the future, they plan to prepare a system for other, more specific sensors in the fermentation process, creating a more advanced solution based on sophisticated tomographic methods. This still requires additional work to prepare new sensors and perform measurements, calculations and tests.

Energies 2023, 18

Figure 1. Energy consumption analysis for the temperature and humidity sensor (battery-based power supply).
Figure 2. Diagram showing the connection of the expert model to the biogas production line and control panel.

The role of the developed expert model is to interpret the data from the different sensors and to create an intelligent feedback loop to optimise the production process in terms of energy efficiency and/or cost [2,11]. The expert model is part of the overall system; its location is shown in Figure 2. The user is able to specify key parameters, e.g., Key Performance Indicators (KPIs), a set of quantifiable measurements used to assess the overall long-term performance of a business or production process. The expert model's data, predictions and decisions can be reviewed and visualized through a user-accessible control panel.
Figure 3. Measurement frequency adjustment based on the signal analysis and non-stationarity detection algorithm.
Figure 4. The concept of signal processing in the time domain. Use of a rectangular time window without overlapping; sub-series are used as input to the trend detection algorithm.
Figure 5. Measurement system stability during ten-hour operation of several wireless temperature sensors. Values equal to 0 mean communication errors.

Figure 6. The performance of selected sensors used in the measurement infrastructure.
Figure 7. Trend detection with a selected time window in simulated time series; m = 100; (a,b) first two sections without alert; (c,d) middle sections of the analyzed signal with the trend; (e,f) last two sections of the time series with the detected trend. In the case of non-stationarity, the task is to identify the trend, shown by the red line in the graphs.

Figure 8. Analysis of real data: time series trend analysis (humidity sensor) for m = 50. From top left: stable measurements; stationary time series; trend existence; stationary time series; no alerts, stable measurements; trend existence.

Figure 9. Analysis of real data: time series trend analysis (temperature sensor) for m = 200. From top left: stationary time series; trend existence; trend existence; stationary time series; trend existence; trend existence.

Figure 12. Energy cost reduction for temperature sensors; the sampling frequency decreases when the parameter determining the system state is less than or equal to v = 20.