Analysis of the Possibility of Making a Digital Twin for Devices Operating in Foundries

Abstract: This work aims to conduct an analysis to find opportunities for the implementation of software incorporating the concept of digital twins for foundry work. Examples of implementations and their impact on the work of enterprises are presented, as is a definition and history of the concept of a digital twin. The outcome of this work is the implementation of software that involves a digital copy of an original device built by the “Łukasiewicz” Research Network at the Krakow Institute of Technology. The research problem of this scientific work is to reduce the number of necessary physical tests on real objects in order to find a solution that saves time and energy when testing the thermal expansion of known and new metal alloys. This will be achieved by predicting the behavior of the sample in a digital environment and avoiding its actual rupture. Until now, after a sample ruptured, the device often continued to operate and collect data even though no current was flowing through the material, which could be described as inefficient testing. The expected result will be based on the information and decisions obtained by predicting values with the help of a recurrent neural network. Ultimately, it is intended to predict the condition of the sample after a set period of time. Thanks to this, a decision will be made, based on which the twin will know whether it should automatically end its work, disconnect the power or call the operator for the necessary interaction with the device. The described software will help the operator of a real machine, for example, to operate a larger number of workstations at the same time, without devoting all their attention to a process that may last even for hours. Additionally, it will be possible to start work on selecting the chemical composition of the next material sample and plan its testing in advance.
The machine learning handles model learning and value prediction with the help of artificial neural networks that were created in Python. The application uses historical test data, additionally retrieves current information, presents it to the user in a clear modern form and runs the provided scripts. Based on these, it decides on the further operation of the actual device.


Introduction
A digital twin (DT) is the idea of software or an entire system in which the main and most important feature is the mapping of a physical object in the form of its digital representation in the most comprehensive way possible, containing all possible knowledge about the real object [1]. Such an application includes not only the technical data contained in the specification but also complex behavioral models, thanks to which it is possible to simulate and predict the results of the processes performed [2]. The concept of a DT is already being applied in practice, as the following example implementations show:

• DT of a stand-alone unit for growing fish and vegetables using the aquaponics method - this running prototype of the twin uses information about the temperature, light intensity, pH and salts dissolved in the water to create a simulation of how the fish should be fed, all in order to find the optimal behavior of the entire system. This is conducted by determining the ideal changes in fish weight and aquatic plant growth to maximize the finished products while finding water savings, minimizing waste production and maintaining quality standards and secondary production goals.

• DT of bee families - a twin introduced to monitor nectar, prevent pests and maintain health among individuals. It is based on GPS data together with sensors of the humidity, the temperature inside and outside the apiary and the weight of new creatures in the hatchery. Beekeepers monitor the hives in real time, make decisions about interaction, remotely determine food doses and identify anomalies, hygiene issues and the colony life stages. At the same time, they minimize contact between bees and humans, without interfering with natural processes.

• DT of tractors - modern tractors are equipped with modern and autonomous interfaces with which they can be integrated to achieve even greater benefits. This twin was introduced to actively monitor the device's health and prevent faults. In this way, downtime is minimized, which significantly determines the size of the harvest, and therefore the farmers' earnings.
Summarizing the presented examples of current implementations of digital twins, it can be established that they work mainly using object modeling and analysis of historical and real-time data. At the same time, the analyzed articles showed that DTs are currently most often used for monitoring real parts of a system, predicting errors and failures, integrating several existing technologies and conducting process simulations in order to gain energy or time by finding the optimal parameters and predicted behaviors.
The aim of the scientific work is to reduce the number of necessary physical tests on real objects in order to find a solution that saves time and energy when testing the thermal expansion of known and new metal alloys. The new materials are tested to avoid the failure of the foundry machines using parts made from the new material, by predicting the break moment using a learning algorithm. This will be achieved by predicting the behavior of the sample in the digital environment and avoiding its actual rupture. Until now, after a sample ruptured, the device often continued to operate and collect data even though no current was flowing through the material, which could be described as inefficient testing. The expected result will be based on the information and decisions obtained by predicting values with the help of a recurrent neural network. Ultimately, it is intended to predict the condition of the sample after a set period of time. Thanks to this, a decision will be made, from which the twin will know whether it should automatically end the work, disconnect the power or call the operator for the necessary interaction with the device. The described software will help the operator of a real machine, for example, to operate a larger number of workstations at the same time without devoting all their attention to a process that may last even for hours. Additionally, it will be possible to start work on selecting the chemical composition of the next material sample and plan its testing in advance. The application we are creating will be an example of a digital twin, which means that it has several functions, such as simulating the behavior of a real device, performing two-way data exchange between the real and digital counterpart and additionally serving as a convenient interface for data management and review of historical information and the current parameters.

Materials and Methods
The research problem of this scientific work is to reduce the number of necessary physical tests on real objects in order to find a solution that saves time and energy when testing the thermal expansion of known and new metal alloys. This will be achieved by predicting the behavior of the sample in the digital environment and avoiding its actual rupture. Additionally, it will be possible to start work on selecting the chemical composition of the next material sample and plan its testing earlier. In order to ensure the greatest universality, it was decided to implement a web application, available on various devices anywhere in the production hall that have access to the Internet, but also outside it, e.g., during remote work. Thanks to this, the end device does not need much computing power because it only serves as an interface for calling calculations on the server. Additionally, the created software, after its integration with a production machine, may ultimately become a fully automatic and independent system that only needs to be supervised from the outside and not actively controlled, as is currently the case. To sum up, the created project will consist of several separate elements: a user interface available in a web browser, a server managing calculations and responsible for communication, a set of scripts solving current production problems and a database of the information collected so far, which until now existed only as files in a little-used format.

Data Analysis for Algorithms
The data used in this work come from an original device, located at the Łukasiewicz Research Network at the Krakow Institute of Technology, for testing the resistance to thermal fatigue of metallic materials. The diagram of this device included in the documentation is shown in Figure 1. The test is carried out by measuring the forces acting on the frame during the cyclic heating and cooling of metal samples due to a current passing through them. The measurements begin with the initial configuration of the device, during which the maximum and minimum temperature limits to be reached by the sample (the available range from 0 to 1000 °C allows testing the most popular iron alloys) and the current with which the equipment will operate (currently up to 330 A, with the possibility of increasing this value to 1000 A) are selected.
This process is quite labor-intensive because its time can be counted in hours, depending on the metal alloy used and the experiment parameters. The test begins and ends as a result of the operator's intervention, and the sensors placed in the machine record eight values at second intervals with high accuracy, even up to the ninth decimal place, which are sent to the computer via an NI cDAQ data acquisition system and saved as files with the extension .tdms. They can be read without specialized software, using a free extension to the popular Microsoft Excel spreadsheet, and the section of data of interest can then be saved as a classic .csv file, thanks to which the user can read the following information: The
columns contain data reflecting the mechanical properties of the sample in subsequent test cycles (Figure 2), and also show a certain tendency just before the material breaks. Additionally, taking into account that these data constitute time series, several features were identified that allow the appropriate selection of the type of neural network to be used to analyze variability and perform regression.
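Reading such an exported .csv for analysis can be sketched with pandas; the column names, separator characters and values below are hypothetical stand-ins (the source does not list the exact column headers), and the decimal-comma handling reflects the later remark that separators and full precision must be preserved during pre-processing.

```python
import io
import pandas as pd

# Hypothetical excerpt of an exported .csv: the device writes eight channels
# per second; a European export may use ',' as the decimal separator and ';'
# as the column separator.
raw = io.StringIO(
    "time_s;force_N;elongation_mm\n"
    "1;12,345678901;0,000123456\n"
    "2;12,345678955;0,000123501\n"
)

# decimal=',' keeps the full nine-decimal precision as float64 without any
# manual string replacement; errors='coerce' turns stray non-numeric cells
# into NaN so they can be dropped before training.
df = pd.read_csv(raw, sep=";", decimal=",")
df = df.apply(pd.to_numeric, errors="coerce").dropna()
print(df.shape)  # (2, 3)
```

The key point is delegating separator and precision handling to the parser rather than editing the exported text by hand.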

Prediction Algorithms
When looking for a way to predict values, the process must start by finding correlations between the data. Analyzing the data and the goal to be achieved, the following techniques were considered:

• Classic methods of time-series analysis with autocorrelation (e.g., ARIMA) [9],
• Dynamic Time Warping (DTW) [10],
• Artificial neural networks (e.g., RNN, LSTM) [11,12].
Due to the fact that the project should focus on the programming side, and the fact that a way is sought to modernize the use of machines that have been in use for many years, the last of the mentioned methods was chosen; it is modern and has been growing in popularity in recent years due to its implementation in various areas of artificial intelligence. Sequentially arranged data that show, second by second, the changes occurring in the tested metal sample are the correct input for a recurrent network, which will look for the answer to the question "what will be the subsequent numerical values achieved by the material?" An important feature of recurrent networks is a kind of "memory", which is actually another input source to the neuron, containing information on the previous element in the examined sequence. This structure of the node means that recurrent neural networks (RNNs) are used for tasks related to, among other things, analyzing numbers in series, finding points creating a trajectory in two- and three-dimensional spaces, defining subsequent image pixels that smoothly change color, interpreting the words of a natural language in sentences, as well as numbers showing certain trends over the analyzed period, e.g., company quotations on the stock exchange. The type of recurrent network selected for the project is the Long Short-Term Memory (LSTM) network. It is a modified approach to a recurrent network due to its expanded memory cell. It can store more information than the single value before the one currently being analyzed in the series. In LSTM, these are data strings of a user-selected length, as well as data that were analyzed even several network nodes earlier. The forget gate is a characteristic element of the LSTM node; it decides whether and how much previous information will be taken into account in the next stage of network training, and which information should be hidden from the emerging model.

API and UI
The system requires a machine on which the server part will be run and on whose disk some information will be stored. The rest is located in the database, which was launched in the cloud version during the creation process. However, there is nothing stopping it from being located in a local database instance. The same device, if it has a graphical interface, can also run the website with the front-end application in a browser. The implementation details, depicted graphically, can be found in Figure 3.

Results
The aim of the developed solution was to assess whether the created digital twin of the machine testing resistance to thermal expansion would contribute to solving the problem of predicting sample breakage.
The first step in achieving the results is selecting the type of material that will be analyzed from the selection list saved in the database, and then placing these data on the server by sending a file in .csv format. Thanks to this, when you go to the script launch screen for creating a new model, the latest resources will be available. The completed form is sent to the server, where the Python script is run. The training phase of the artificial neural network itself may last up to several hours depending on the selected parameters and the number of lines in the provided file, which is why the models have been divided according to the type of material; this ensures their repeated use for samples behaving similarly during heating and cooling, most often belonging to one type of material, in this case a type of cast iron (e.g., ordinary gray, gray vermicular, gray ductile). Running the prediction script consists of selecting the sample type, narrowing down the available lists of values, then indicating the parameters of the artificial neural network that will be delivered to the script and finally selecting one of the previously created models. When the prediction is completed, information on its results is sent to the user and is also saved in the table with the execution history. On the history screen, you can also check the results in a graphical version. The data prediction code itself takes about one or two seconds, but there are also delays resulting from the time of running the code in the Node environment, as well as data transfer, so in a real case, using the created interface, the results can be read after about five or six seconds. After the Python code completes, the server reads its results from the logs saved to a text file, and then issues a decision according to the formula, suggesting either stopping the operation of the device or continuing it.
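The stopping decision mentioned above can be sketched as a plain function; the function name is hypothetical, and the ten percent threshold follows the rule described in the implementation section (the means of the input window and of the predicted values are compared).

```python
def suggest_stop(input_values, predicted_values, threshold=0.10):
    """Return True when the mean of the predicted values differs from the
    mean of the supplied measurement window by at least `threshold` (10%),
    i.e. the sample is expected to break and the test should be stopped."""
    mean_in = sum(input_values) / len(input_values)
    mean_pred = sum(predicted_values) / len(predicted_values)
    return abs(mean_pred - mean_in) >= threshold * abs(mean_in)

# A stable sample: predictions stay close to the observed mean.
print(suggest_stop([1.00, 1.01, 0.99], [1.00, 1.02, 1.01]))  # False
# A drifting sample: predicted elongation jumps well beyond 10%.
print(suggest_stop([1.00, 1.01, 0.99], [1.25, 1.30, 1.40]))  # True
```

Comparing averages rather than individual points is what makes the rule tolerant of the offset between the measured and generated curves discussed in the results.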

Implementation of Scripts Using Artificial Intelligence
The implementation of LSTM artificial neural network layers in the Keras framework requires appropriate data preparation before training can start. The data must take the form of a three-dimensional array described as [Samples × Timesteps × Features], where:

• Samples - time frames with the data currently taken into account, e.g., one hundred seconds;
• Timesteps - data at specific seconds, i.e., subsequent lines of the .csv file;
• Features - the individual data of each row that we want to use to fit the model.
At the same time, in addition to the strict layout of the input data, the array cannot have components of arbitrary lengths: taking too much or too little data when adjusting the weights on its nodes, the network will not be able to correctly calculate the error gradient, and the phenomenon of "exploding" or "vanishing" gradients may occur. This becomes noticeable only during the training phase, often already during the first epoch, when the current error value is reported as NaN (Not a Number). If the data are cyclical, it is recommended to create time frames of at most a few such cycles; in the case of the code created in this project, the results were obtained with a frame length of approximately five cycles. Nor can one expect correct results when predicting too far ahead, but predictions up to one cycle length can safely be considered very likely to be obtained and to present satisfactory results.
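The fixed-length framing described above can be sketched as a simple sliding window; the frame length and channel count below are illustrative values, not the configuration of the original scripts.

```python
import numpy as np

def make_windows(series: np.ndarray, timesteps: int) -> np.ndarray:
    """Slice a (rows, features) measurement table into the fixed-length
    [samples, timesteps, features] array that a Keras LSTM expects.
    Every window has exactly `timesteps` rows; ragged windows would break
    gradient computation (NaN loss, exploding/vanishing gradients)."""
    windows = [
        series[i : i + timesteps]
        for i in range(len(series) - timesteps + 1)
    ]
    return np.stack(windows)

# 10 seconds of fake 3-channel data, framed into 4-step windows.
data = np.arange(30, dtype=float).reshape(10, 3)
x = make_windows(data, timesteps=4)
print(x.shape)  # (7, 4, 3)
```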
However, before such a structure is created, there is a phase of data pre-processing, in which the data are read from the .csv file, making sure that the characters separating the decimal part and the subsequent columns are correctly distinguished, that the numbers are not rounded or shortened (in the case in question, the smallest changes in values are important) and that non-numerical values that we do not want to analyze are recognized. Subsequently, the values are standardized using an imported library function according to Formula (1):

z = (x - u) / s (1)

where z is the new value, x is the value currently read from the data table, u is the average value and s is the standard deviation.
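Formula (1) is the ordinary z-score (the operation performed by, e.g., scikit-learn's StandardScaler); a minimal NumPy sketch makes the mapping to z = (x - u)/s explicit.

```python
import numpy as np

def standardize(column: np.ndarray) -> np.ndarray:
    """Formula (1): z = (x - u) / s, applied element-wise to one column."""
    u = column.mean()   # average value
    s = column.std()    # standard deviation
    return (column - u) / s

x = np.array([2.0, 4.0, 6.0, 8.0])
z = standardize(x)
print(z.mean(), z.std())  # approximately 0.0 and 1.0 after standardization
```

After this transformation every feature contributes on the same scale, which stabilizes LSTM weight updates.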
Then, we determine which data will be treated as input and which column as output. The created code takes the elongation as the target and the remaining columns as input. Next comes the definition of the model. Both a newly created model and a loaded one have the same set of layers. First, there is an LSTM network layer with eight hidden states, passing the entire sequence of hidden states to the next layer (thanks to the return_sequences parameter). The next layer has only two hidden states and returns only the last hidden state. The network is closed with a Dense layer, thanks to which we receive a single value at the network output. The activation function used in the network is "relu" (Rectified Linear Unit), which is the most frequently used function in artificial intelligence models. Completing the network configuration, it was decided that the most frequently chosen "Adam" optimizer would be used to improve the learning process, and the error determining the quality of the model would be of the Mean Square Error (MSE) type. Depending on the script, a previously selected model is loaded or a new one is trained for the provided number of epochs, with the input data set divided into training and testing parts in the proportion 80:20. In consultation with the representative of the Łukasiewicz Research Network at the Krakow Institute of Technology, who provided the data used in this work as well as the documentation on and diagram of the device, a condition was also established that determines whether the application decides that the actual test should be interrupted or can be continued because sample rupture will not occur within the prescribed time. For this purpose, the difference between the average value of the data from the file taken into account when predicting the values and the average value among the prediction results is calculated. If they differ by at least ten percent, a suggestion is made to discontinue the further operation of the device; otherwise, the inquirer is informed that the sample will probably not be damaged within the expected period.
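The layer stack and training setup described above can be sketched in Keras as follows; the window length, feature count and placeholder training data are assumptions for illustration, not the original configuration.

```python
import numpy as np
from tensorflow import keras

TIMESTEPS, FEATURES = 100, 7   # illustrative frame length / input columns

model = keras.Sequential([
    keras.Input(shape=(TIMESTEPS, FEATURES)),
    # 8 hidden states; the full hidden-state sequence is passed on
    keras.layers.LSTM(8, activation="relu", return_sequences=True),
    # 2 hidden states; only the last hidden state is returned
    keras.layers.LSTM(2, activation="relu"),
    # Dense closes the network with a single predicted value
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Training on random placeholder data with an 80:20 train/validation split.
x = np.random.rand(50, TIMESTEPS, FEATURES)
y = np.random.rand(50, 1)
model.fit(x, y, epochs=1, validation_split=0.2, verbose=0)
print(model.output_shape)  # (None, 1)
```

The single Dense output corresponds to the one predicted value (the elongation) per input window.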

Results Analysis
In order to check the effectiveness and repeatability of the obtained results, three models were made based on three sets of data, selecting different parameters of the network itself. Their characteristics are presented in Table 1 in order to clearly compare their properties. Then, two processes were run for each of them to predict the results of the sample continuity test, to check whether they could predict both possible results, regardless of the data set used during their training. Initially, the number of epochs for training the C model was 25, and the process of its creation took over two hours. However, another training run was carried out for the same parameters and data, changing only the number of epochs to eight. This was due to the fact that in the graph presenting the changes in the learning error during the creation of the neural network model, subsequent epochs from about the eighth epoch did not bring a significant improvement in the quality of the model (Figure 4). An additional advantage of this test is confirmation of the repeatable operation of the code thanks to the comparable values at subsequent stages of the process.
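Cutting training from 25 to 8 epochs was a manual decision based on the loss curve; Keras's EarlyStopping callback (not used in the original work) can automate the same observation.

```python
from tensorflow import keras

# Stop training as soon as the validation loss has not improved for
# `patience` consecutive epochs, instead of fixing the epoch count by hand.
early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=3,
    restore_best_weights=True,  # keep the weights from the best epoch seen
)

# Usage with any compiled model (sketch):
# model.fit(x, y, epochs=25, validation_split=0.2, callbacks=[early_stop])
print(early_stop.patience)  # 3
```

With such a callback, the 25-epoch run would have terminated on its own once the error curve flattened, saving most of the two-hour training time.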
For the main application test, two data files were prepared to check the results. The first one is file 1, used to create model A, the data of which have been slightly modified. The entire test lasted less than two hours, but the sample broke 20 minutes before the measurement was stopped. Therefore, the unnecessary data were removed, as well as an additional hundred seconds, to check whether the generated neural networks could predict the actual effect. Ultimately, we obtained a file containing data collected over approximately 1.5 h. Further in the work, it will be referred to as Test File 1 (PT1). The next data set is the original version of the information used as input to the C model, i.e., file 3, which will be referred to in the text as Test File 2 (PT2). It includes a six-hour test that did not end with the rupture of the
sample, which was subjected to subsequent tests at a later time. This and extended information about the data used in the test can be found in Table 2. The model marked with the letter A, in combination with the PT1 file, i.e., the one on which it was based before its modifications, predicted correctly, with the result being sample rupture. The second test showed that rupture did not occur and would not occur in the next hundred seconds. The course of the elongation of the sample during the last five hundred seconds is marked in blue in the graph presented in Figure 5. Orange denotes the data generated by artificial intelligence. Even though the graphs do not connect and are shifted on the axis, the adopted condition for stopping the device takes into account only the difference in average values, not specific values, so we consider the results correct.
The effects of the prediction script using model B, which did not see the data from the test data package during the learning phase of the artificial neural network, are shown in Figure 6, and again the values obtained are correct. Similarly to model A, it predicted that the sample from the PT1 file would break in the future, and that the sample whose breakage had not been observed would not break further (PT2 file). The last model tested also achieved only correct results for both of the provided data sets. It was prepared using input data from the PT2 file, but as the presented examples show, this did not prevent it from making predictions based on other data. Its results, marked with two colors as in the previous figures, are presented in the charts in Figure 7. Even though the generated data combined with the provided data did not always create a continuous graph, they presented satisfactory results. One cannot expect perfect results because predictions are always subject to error; in this case, the difference was in the scale, not the shape (tendency), of the data in the newly calculated cycles.

Discussion and Conclusions
Currently, many enterprises have decided to improve their market standing by modernizing their existing equipment or searching for new operating parameters. The machines used in foundries also need such changes, so various types of ready-made and specially created software are being tested. The described case of a device for testing the thermal expansion of metal samples, and the proposed implementation of its digital twin, confirms that savings in time and energy are possible. Instead of many minutes being wasted while the device idles or the next cycle runs after a rupture, the operator can learn the probable situation in a few seconds and reach a decision that makes the process more automated. This is due to the latest technologies, whose development has accelerated significantly in recent years. Using complex algorithms, precise sensors and high computing power, it is possible to improve the efficiency of even simple, classic industrial tools. The requirements of the application were fully met: the created system uses several programming languages and many libraries of useful code, and provides a universal interface, creating a multifunctional program that is easy to extend with new purposes in the future, which will be discussed in more detail in a subsequent work. The artificial neural network made predictions that have thus far been made using classic time-series analysis methods, and its results turned out to be sufficient.
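The sequence-to-sequence prediction task handled by the recurrent network reduces to sliding-window supervision over the recorded elongation series. The following windowing sketch shows the general idea; the function name, the window sizes and the plain-list representation are assumptions for illustration, not the project's implementation.

```python
def make_windows(series, lookback, horizon):
    """Build (input, target) pairs for training a sequence model:
    each sample uses `lookback` past readings to predict the next
    `horizon` readings (e.g. a 100-second holdout used to check a
    model's forecast against the real outcome)."""
    xs, ys = [], []
    for i in range(len(series) - lookback - horizon + 1):
        xs.append(series[i:i + lookback])
        ys.append(series[i + lookback:i + lookback + horizon])
    return xs, ys

series = list(range(10))
xs, ys = make_windows(series, lookback=4, horizon=2)
print(xs[0], ys[0])  # [0, 1, 2, 3] [4, 5]
print(len(xs))       # 5
```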

Figure 1. Diagram of the measurement structure (with elements of the power supply subsystem).

section of the data we are interested in, in the form of a classic .csv file, thanks to which the user can read the following information:
• Temperature [°C] - the current sample temperature,
• Resistance [Ω] - the sample resistance value in a given second,
• Voltage [V] - the voltage measured at both ends of the sample,
• Current [A] - information about the current stage of the cycle (330 A means heating is in progress; 10 A means cooling is in progress),
• Force [N] - the force that the sample exerts on the machine arms,
• Elongation [mm] - the length by which the sample shrank or expanded in the last second,
• Cycle number - the iteration of the full cycle (heating and cooling) at a given second,
• Test time - the number of seconds since the start of the test at which the remaining data were recorded.
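Parsing such a file and deriving the cycle stage from the current column might look as follows. The column headers shown here are hypothetical stand-ins for the measurements listed above; the actual file produced by the device may use different names.

```python
import csv
import io

# Hypothetical headers mirroring the measurements listed above.
SAMPLE_CSV = """temperature_C,resistance_ohm,voltage_V,current_A,force_N,elongation_mm,cycle,test_time_s
412.5,0.0031,1.02,330,15.2,0.041,3,1250
405.1,0.0030,0.04,10,14.8,-0.012,3,1251
"""

def load_rows(text):
    """Parse the measurement CSV and tag each row with its cycle stage,
    derived from the current column (330 A = heating, 10 A = cooling)."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        current = float(row["current_A"])
        row["stage"] = "heating" if current > 100 else "cooling"
        rows.append(row)
    return rows

rows = load_rows(SAMPLE_CSV)
print([r["stage"] for r in rows])  # ['heating', 'cooling']
```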

Figure 2. Columns containing information on the tested sample with a visible moment of rupture and change in the appearance of the graph just before it.

Figure 3. UML deployment diagram of the created application.

Figure 4. Comparison of the change in learning errors during subsequent training of a model with the same parameters.

Figure 5. Predicted values produced by model A on test files PT1 and PT2.

Figure 6. Predicted values produced by model B on test files PT1 and PT2.

Figure 7. Predicted values produced by model C on test files PT1 and PT2.

Table 1. A summary of the models used when testing the prediction capabilities implemented in the digital twin.

Table 2. A summary of the data files used when testing the prediction capability implemented in the digital twin.