Neural Network in the Analysis of the MR Signal as an Image Segmentation Tool for the Determination of T1 and T2 Relaxation Times with Application to Cancer Cell Culture

Artificial intelligence has been steadily entering medical research, and manufacturers of diagnostic instruments now include algorithms based on neural networks in their products. Neural networks are quickly entering all branches of medical research and beyond: a search of the PubMed database over the last 5 years (2017 to 2021) returns more than 10,500 papers for the query "neural network in medicine". Deep learning algorithms are of particular importance in oncology. This paper presents the use of neural networks to analyze the magnetic resonance imaging (MRI) images used for MRI relaxometry of the samples. Relaxometry is becoming an increasingly common tool in diagnostics. The aim of this work was to optimize the processing time of DICOM images by using a neural network implemented in the MATLAB package by The MathWorks with the patternnet function. The neural network helps to eliminate regions that contain no objects whose characteristics match the phenomenon of longitudinal or transverse MRI relaxation; as a result, aerated spaces are eliminated from the MRI images. The whole algorithm was implemented as an application in the MATLAB package.


Introduction
Artificial neural networks have a number of capabilities that are worth exploiting in medical research. From a practical point of view, neural networks used in magnetic resonance imaging (MRI) research are based on operations on vectors and matrices. In particular, herein we used the MATLAB package by The MathWorks for matrix calculations. Mechanisms focused on matrices are among the most efficient in neural networks.
It should be noted that MRI hardware solutions involve the implementation of neural networks in integrated circuits. These solutions, although very efficient, are not suitable for research studies such as the present one; however, they work well in targeted and optimized applications.
MRI is a leading imaging modality in daily clinical practice. The main bottleneck is the description of the obtained images: there is a shortage of radiologists, so the waiting time for a report can be long. Since an experienced radiologist is required for each MRI diagnosis, assessment supported by a neural network can relieve specialists and become part of routine examinations [1][2][3].
Artificial intelligence (AI) has brought tremendous progress to the field of MRI. Promising approaches include deep learning methods for reconstructing MRI data and generating high-resolution data from low-resolution data. Preliminary studies show that state-of-the-art techniques can generalize across many different anatomical areas and achieve diagnostic accuracy comparable with conventional methods. This article discusses state-of-the-art methods, clinical application considerations, and future prospects in the MRI field.
MRI systems are devices containing analog-digital signal paths that process signals with low or very low values. They include a number of blocks responsible for signal generation, amplification, and analog-to-digital (A/D) conversion. The input stages of the coils reject and transmit signals at very low levels. Because of the high level of automation of the processing systems, the amplifying stages can switch the signal gain in a completely autonomous manner. As a consequence, the use of standard research protocols may result in different signal intensity values in individual sequences. In T1 and T2 measurements, the signal intensity is the basic quantity used to create graphs and approximate measurement points. Eliminating such equipment errors is difficult and requires excellent knowledge of the operation of the systems; support provided by the device manufacturer may also be helpful.
A sample placed in a magnetic field has a magnetization value defined as M0. The rotation of a proton can be compared to the rotation of a gyroscope. The frequency of these rotations depends on the value of the magnetic field and the previously mentioned gyromagnetic coefficient. For an induction of 1.5 Tesla, the resonant frequency is 63.86621838 MHz, while for 3 Tesla it is 127.73243676 MHz. The equation describing the dependence of the resonant frequency on the value of the magnetic field is as follows:

f0 = γ·B0/(2π)    (1)

where f0 is the resonant (Larmor) frequency, γ is the gyromagnetic ratio, and B0 is the magnetic field induction. As a result of the interaction with an electromagnetic wave at the resonant frequency, the magnetization vector deviates from the position parallel to the field lines of B0, and the value of this deviation depends on the intensity of the wave. After the wave ceases, equilibrium is restored; the time needed to return to the steady state before excitation is called the relaxation time. Relaxation is characterized by the longitudinal time T1 and the transverse time T2 (Figure 1).
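The resonant frequencies quoted above can be checked numerically from the Larmor relationship, assuming the standard value of the 1H gyromagnetic ratio divided by 2π of about 42.577 MHz/T (a constant not stated in the text; a minimal sketch, not part of the paper's MATLAB application):

```python
# Numerical check of the Larmor relationship f0 = (gamma / 2*pi) * B0 for 1H.
GAMMA_OVER_2PI_MHZ_PER_T = 42.577478518  # 1H gyromagnetic ratio / 2*pi, in MHz/T

def larmor_frequency_mhz(b0_tesla: float) -> float:
    """Resonant (Larmor) frequency in MHz for a given field strength in tesla."""
    return GAMMA_OVER_2PI_MHZ_PER_T * b0_tesla

print(round(larmor_frequency_mhz(1.5), 2))  # ~63.87 MHz
print(round(larmor_frequency_mhz(3.0), 2))  # ~127.73 MHz
```

These values agree with the 63.866... MHz and 127.732... MHz figures given in the text to within the precision of the constant used.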
The relaxation time T1 is the time needed for the longitudinal magnetization along the Z axis to return to 63% of its original value. This relaxation is referred to as spin-lattice relaxation, because in the process of returning to equilibrium energy is transferred to the environment. The speed of the process depends on how strongly the protons interact with the environment, and it is faster the more macromolecules there are in the examined tissue. The longest T1 times occur for tissues with the highest water content and the lowest content of macromolecules [15]. The longitudinal relaxation time also depends on, e.g., the temperature [16] and the viscosity of the environment [17].
The time T2 is related to the phase of the spins. The decay of phase coherence depends on the tested object itself, but also on the parameters of the magnetic resonance system, in particular the homogeneity of the B0 field. Imperfect field homogeneity causes individual nuclei to experience slightly different magnetic fields; these differences are very small, but sufficient for the protons to have different resonant frequencies. This leads to dephasing of the system and decay of the transverse component of the magnetization. Another reason for the disappearance of the transverse component lies in the properties of the sample itself. For these reasons, transverse relaxation is referred to as spin-spin relaxation.
Figure 1. The graphs show the shape of the signal intensity for (a) the longitudinal relaxation time T1 and (b) the transverse relaxation time T2. For T1, the graph shows SI as a function of the repetition time TR, while for T2 the graph presents SI as a function of the echo time TE. The presented graphs are digitally generated waveforms intended to illustrate the phenomena themselves and the relationships between them. The points at which T1 and T2 were determined are marked with a triangle, i.e., these are the times for which SI is 63% and 37% of the maximum signal value, respectively.
The signal that is received in the MRI system can be described by the relationship:

SI = PD · (1 − exp(−TR/T1)) · exp(−TE/T2)    (2)

where SI is the signal intensity, PD is the tissue proton density, T1 is the longitudinal tissue relaxation time, T2 is the transverse tissue relaxation time, TE is the echo time, and TR is the repetition time.
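The relationship above can be sketched as a small function; a minimal illustration assuming the saturation-recovery spin-echo form SI = PD·(1 − exp(−TR/T1))·exp(−TE/T2), with illustrative parameter values:

```python
import math

def signal_intensity(pd: float, t1: float, t2: float, tr: float, te: float) -> float:
    """Signal model of Equation (2):
    SI = PD * (1 - exp(-TR/T1)) * exp(-TE/T2).
    All times in the same unit (e.g., ms); PD in arbitrary units."""
    return pd * (1.0 - math.exp(-tr / t1)) * math.exp(-te / t2)

# For TR = T1 and a negligibly short TE, SI reaches 1 - 1/e (about 63%) of PD:
si = signal_intensity(pd=1.0, t1=500.0, t2=80.0, tr=500.0, te=0.0)
print(round(si, 3))  # ~0.632
```

This directly reproduces the 63% point marked with a triangle in Figure 1a.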
Equation (2) describes in a relatively simple way the intensity of the signal received by the receiving coils. Two of its parameters depend on the operator and the system capabilities (TE, TR), while the remaining three are purely material quantities that are properties of the tested object itself. Medical imaging with T1-weighted and T2-weighted images consists in recording the signal at fixed echo and repetition times; choosing their values appropriately allows the assumed cross-section of the object to be shown better or worse.
Equation (2) also makes it possible to determine the relaxation times of the tested objects. By selecting appropriate values of the TE and TR parameters, we can eliminate the dependence on T1 or T2 from the equation, and thus from the picture. This leads, in a simple way, to determining the relaxation times.
In order to determine the T1 time, the echo time should be set as low as possible. Then, the term exp(−TE/T2) will be close to "1", and Equation (2) can be written as

SI = PD · (1 − exp(−TR/T1))    (3)

Strictly speaking, the term containing the T2 time is then a constant slightly less than 1 that merely scales the SI value. In order to determine the T1 time, the tested layer should be acquired at different TR times, starting from small values up to the maximum that can be set on the scanner. It should be added here that clinical scanners allow a TR of up to 15,000 ms. In practice, data should be collected up to a maximum TR equal to or greater than 5 times the expected T1 time. Reducing the maximum TR time will increase the error associated with the determination of T1.
Then, on the SI(TR) graph, one determines the time at which SI reaches 63% of its maximum value. This corresponds to the point where TR = T1, since

SI(TR = T1) = PD · (1 − e^(−1)) ≈ 0.63 · PD    (4)

In a similar way, we can determine the transverse relaxation time T2.
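In practice, T1 is obtained per pixel from the SI(TR) series rather than read directly off a graph. The sketch below illustrates the inversion of the saturation-recovery model on synthetic, noiseless data (the paper's implementation is in MATLAB and uses curve fitting; the PD value, T1 value, and TR grid here are illustrative assumptions):

```python
import math

def sr_signal(tr: float, pd: float, t1: float) -> float:
    """Saturation-recovery model of Equation (3): SI(TR) = PD * (1 - exp(-TR/T1))."""
    return pd * (1.0 - math.exp(-tr / t1))

# Synthetic SI(TR) series for one pixel (PD = 100, T1 = 800 ms; illustrative).
tr_values = [50, 100, 200, 400, 800, 1600, 3200, 6400, 12800]
si_values = [sr_signal(tr, 100.0, 800.0) for tr in tr_values]

# With PD estimated from the long-TR plateau, T1 follows from inverting
# the model at an intermediate point: T1 = -TR / ln(1 - SI/PD).
pd_est = si_values[-1]  # TR = 12800 ms >> T1, so SI has saturated
tr, si = tr_values[4], si_values[4]
t1_est = -tr / math.log(1.0 - si / pd_est)
print(round(t1_est))  # close to 800
```

With noisy measured data, a least-squares fit of Equation (3) over all TR points is used instead of a single-point inversion, which is why a large number of measurement points is recommended.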
Assuming then that TR is as large as possible, the term exp(−TR/T1) is eliminated: for long TR times it assumes a value close to "0", and Equation (2) reduces to

SI = PD · exp(−TE/T2)    (5)
Also in this case, the determination of T2 consists in finding the point on the TE axis at which SI decreases to 37% of its maximum value. Equation (5) takes this value only when TE = T2, i.e., when the exponent is "−1".
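The 37% criterion can be verified directly from Equation (5); a minimal sketch with illustrative S0 and T2 values:

```python
import math

def t2_decay(te: float, s0: float, t2: float) -> float:
    """Transverse decay for long TR, Equation (5): SI(TE) = S0 * exp(-TE/T2)."""
    return s0 * math.exp(-te / t2)

# At TE = T2 the signal has decayed to exp(-1), i.e. about 37% of S0:
s0, t2 = 100.0, 80.0
print(round(t2_decay(t2, s0, t2), 1))  # ~36.8
```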
For accuracy, it should be added that it is also possible to record images related to the proton density, PD. Then, TR should be set to a long time, many times longer than T1, and TE to a short time, many times shorter than T2. The exponential in the T1-dependent term then vanishes, while the T2-dependent term only slightly attenuates the PD intensity. A large number of measurement points is recommended to minimize approximation errors; however, this extends the examination time.
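The residual weighting factors in Equation (2) for such a PD-weighted acquisition can be checked numerically; the tissue times and the TR/TE choice below are illustrative assumptions:

```python
import math

# Residual weighting factors in Equation (2) for a PD-weighted acquisition,
# assuming illustrative tissue times T1 = 800 ms, T2 = 80 ms.
t1, t2 = 800.0, 80.0
tr, te = 8000.0, 10.0   # TR >> T1, TE << T2

t1_factor = 1.0 - math.exp(-tr / t1)  # approaches 1 for long TR
t2_factor = math.exp(-te / t2)        # slightly below 1 for short TE

print(round(t1_factor, 4))  # ~1.0
print(round(t2_factor, 4))  # ~0.88
```

The T2-dependent factor is close to, but not exactly, 1, which is why the measured image only approximates the true proton density.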
Artificial intelligence is increasingly involved in creating images based on numerical data. This requires powerful applications as well as powerful computer systems. Neural networks can be found in the vast majority of medical specialties, with imaging diagnostics arguably among the first.
This work presents the use of a neural network for MRI data analysis on the basis of the distributions of the relaxation times T1 and T2. It does not describe the classic approach to image segmentation based on an algorithm driven by the pixel intensities of a single image.
The aim of the work was to present the use of neural networks for the analysis of data obtained during an MRI examination. The application presented here does not perform segmentation in the traditional sense; instead, it analyzes data from measurements of the T1 and T2 relaxation times obtained by the saturation recovery (SR) method. Sequences of images and their relationships to each other are analyzed, rather than, as in traditional image recognition, the pixels adjacent within one plane.
In determining the relaxation times, from a few to a dozen or so DICOM 3.0 images are taken into account. These images differ as a result of changing the repetition time (TR) and echo time (TE) parameters.
Artificial neural networks can learn and thus offer a number of capabilities worth using. From a practical point of view, neural networks are for the most part built on operations on vectors and matrices, and the implemented mechanisms focused on working with matrices are among the most efficient.
It should be noted that there are also hardware solutions involving the implementation of neural networks in integrated circuits. These solutions, although very efficient, are not suitable for research work such as the present one; they work well in targeted, optimized production applications.
The basic element of any neural network is a single neuron; Figure 2 shows its schematic diagram. Symbols: x1 ... xn are the network inputs, w1 ... wn are the weights of the individual inputs, and y is the output.
For clarification, each input has its own "amplification factor", called a weight, by which it is multiplied. In general, the output signal can be described by the relationship y = f(φ), where f is the activation function. Figure 2 also shows: (c) the graph of the unipolar sigmoidal response of the neuron, where the values taken by the activation function are in the range <0, 1>; the function f(x) is known as the "logistic growth function" and β represents the "logistic growth rate"; (d) the graph of the neuron's bipolar response, where the values of the activation function are in the range <−1, 1>. In (b,c), the slope of the curve depends on the parameter β (beta); increasing the value of this parameter brings the graph closer to a step function.
Both of these functions are continuous. The expression that is the argument of the function f is the sum signal:

φ = w1·x1 + w2·x2 + ... + wn·xn
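The single-neuron model described above can be sketched as follows; the inputs and weights are arbitrary illustrative values, and the unipolar sigmoid is used as the activation function:

```python
import math

def neuron_output(x: list, w: list, beta: float = 1.0) -> float:
    """Single neuron: weighted sum phi = sum(w_i * x_i), passed through
    a unipolar sigmoid activation f(phi) = 1 / (1 + exp(-beta * phi))."""
    phi = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-beta * phi))

# Two inputs with illustrative weights; phi = 0.8*1.0 + (-0.4)*0.5 = 0.6.
print(round(neuron_output([1.0, 0.5], [0.8, -0.4]), 3))  # sigmoid of 0.6
```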

Results
The simplest activation function is a step function, which takes only two values, 0 and 1:

f(φ) = 1 for φ ≥ 0, f(φ) = 0 for φ < 0

Other activation functions are continuous: the unipolar sigmoid

f(x) = 1 / (1 + exp(−βx))

and the bipolar sigmoid

f(x) = (1 − exp(−βx)) / (1 + exp(−βx))

The value of the parameter β is selected by the user and affects the shape of the activation function; increasing it increases the steepness of the function. For β→∞, these functions become step functions.
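The effect of β on the steepness of these activation functions can be illustrated numerically (a minimal sketch; the β values are arbitrary):

```python
import math

def unipolar(x: float, beta: float) -> float:
    """Unipolar sigmoid, values in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-beta * x))

def bipolar(x: float, beta: float) -> float:
    """Bipolar sigmoid, values in (-1, 1); equal to tanh(beta*x/2)."""
    return (1.0 - math.exp(-beta * x)) / (1.0 + math.exp(-beta * x))

# Increasing beta steepens the curve toward a step function at x = 0:
for beta in (1.0, 10.0, 100.0):
    print(beta, round(unipolar(0.1, beta), 3))  # approaches 1 as beta grows
```

For large β, points just above zero map to values near 1 and points just below zero to values near 0, reproducing the step function as the limiting case.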
A multilayer network was used in this work. It is a variant of neural networks containing, in addition to the input and output layers, hidden layers of neurons.
This work describes the application of a unidirectional, multilayer neural network with a sigmoidal activation function. The flow of information in such a network is one-way, from input to output. The mathematical description of such a network is relatively simple, and the learning methods are likewise uncomplicated. This type of neural network is most often trained in supervised ("with a teacher") mode: in the learning mode, the inputs of the network are presented with a training set together with the required responses. This is also the case in this work. The training set is a sequence of slices of medical images saved in the DICOM 3.0 standard, taken from two regions. The first is the sample region, while the second is the aerated noise region. Each pixel of the first region has the response "1", while the response for noise pixels is "0". The sequence for training the neural network comes from the same image that will be segmented. This allows the training to take into account the noise inherent in the MRI image and originating from the currently used system and sample. This approach to learning is not often used; however, the segmentation process gives very good results, which, combined with the short time necessary for learning, makes it optimal. The network learning operation for a sample containing 200 training vectors takes 0.5 s. This time was obtained on a PC with an i5 processor (3.4 GHz), 16 GB RAM, and a 500 GB SSD. Figure 3 presents the results of processing the examined phantom with two methods, namely without and with the use of neural networks. The image shows differences in the aerated area: in this part, the neural analysis detected noise, so the pixels forming this area were automatically assigned the value "0". The neural network "qualified" only the areas of the phantom for calculations.
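The training scheme described above, where each input vector is the SI(TR) series of one pixel and the target is "1" for sample pixels and "0" for aerated noise, can be sketched as follows. The paper uses MATLAB's patternnet; here a minimal logistic unit trained by gradient descent stands in for it, and the training data are synthetic (the T1 value, noise levels, and TR grid are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each input vector is the SI(TR) series of one pixel. "Sample" pixels follow
# a saturation-recovery curve; "noise" pixels are low-amplitude random values.
tr = np.linspace(50, 10000, 10)
sample = 1.0 - np.exp(-tr / 800.0) + 0.05 * rng.normal(size=(100, tr.size))
noise = 0.1 * rng.random((100, tr.size))
X = np.vstack([sample, noise])
y = np.hstack([np.ones(100), np.zeros(100)])  # 1 = sample, 0 = aerated noise

# Minimal logistic unit trained with gradient descent (stand-in for patternnet).
w, b = np.zeros(tr.size), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of "sample"
    grad = p - y
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print((pred == y).mean())  # training accuracy, should be near 1.0
```

The classes are well separated because sample pixels show the characteristic recovery shape while noise pixels do not, which mirrors why training on regions of the image to be segmented works well here.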
As Figure 3a,c shows, the images of the phantom differ mainly in the aerated area, the place where there is no phantom. This is also shown in the two histograms, Figure 3e,f. Comparing the values in the histograms, the lack of noise is clearly visible. More important, however, is the reduction in the height of the individual peaks, because their height is influenced by pixels from the space outside the phantom. This, in turn, has a positive effect on the credibility of the results when quantifying on the basis of histogram analysis. In addition, the areas bordering the phantom and the test tube, or the test tube itself, were cut out of the analysis area.

Figure 3. Distribution of T1 relaxation time values for the tested phantom; the color of each pixel corresponds to the T1 value measured for that pixel in a series of DICOM images: (a) the distribution of T1 times before the analysis with the neural network; (b) the distribution of fit coefficient R2 values before the neural network analysis; (c) the distribution of T1 times after the analysis with the neural network; (d) the distribution of fit coefficient R2 values after the neural network analysis; (e) histograms of T1 times from images before the analysis with the neural network; (f) histograms of T1 times from images after the analysis with the neural network.

Analyzing the time needed to perform the computational task, it was significantly shortened compared to the time needed for the calculations without neural network analysis. Although the processing time results can be considered biased by the size of the analyzed images, the computer system used, and the imperfections of the algorithm, they show the potential of advanced analysis using neural networks. The analysis of images without the neural network took about 35 min, while omitting the pixels classified as noise by the neural network reduced the time to less than 6 min. These times are relatively long; today's sophisticated algorithms on specialized workstations, combined with appropriate data acquisition methods, allow such calculations in much shorter times. Nevertheless, the use of a neural network improves the situation in this respect. It should be added that this speedup fully depends on the ratio of airspace pixels to phantom space pixels.
In this case, the preparation of the data and training of the neural network takes less than 5 s, so it can be omitted in the estimation of the time reduction. Figure 4 shows an image analysis of the transverse relaxation time T 2 mapping. As in the previous case, there was a significant improvement in the quality of the image obtained.
Repeated attempts to choose the place from which the samples for training the neural network are taken have shown that it is advisable for these to be areas with short T2 times. R2 is the coefficient of determination, used in statistics to assess the quality of the analysis of image parameters.
Each of the T1 or T2 mapping images consists of two parts. The first is a map of the distribution of relaxation times, and the second is an image of the distribution of the R2 coefficient. The R2 coefficient is a measure of the quality of the model fit, in this case of the fit of the approximating function to the dataset. Each dataset in question consists of the signal intensity values for a given pixel read from all DICOM images. When analyzing the R2 image, a selected pixel represents the quality of the fit for the corresponding pixel in the T1 or T2 map. Thus, these images have the advantage over a single numerical value that they show exactly where the approximation curve fits the data best and where it fits worse. The data analysis proposed in this article consists in presenting statistics for individual pixels of the image. This is essential when analyzing images containing different tissues that do not have the same T1 or T2 times: determining any statistic, e.g., the average or standard deviation over a specific image area, would then be a substantive error, because the value of the statistic would be influenced by differences resulting from the properties of the tissues and not from the acquisition process, noise, or other interfering factors.
The value of the coefficient of determination is obtained as a result of the approximation functions implemented in MATLAB. It is defined as:

R² = SSR/SST = 1 − SSE/SST

where SSE = Σ(y_i − ŷ_i)² is the sum of the squares of the errors, SSR = Σ(ŷ_i − ȳ)² is the sum of the squares of the regression, SST = Σ(y_i − ȳ)² is the total sum of squares, y_i is the i-th observation of the variable y, ŷ_i is the theoretical value of the explained variable on the basis of the model, and ȳ is the arithmetic mean of the empirical values of the dependent variable.
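The definition above translates directly into code; a minimal sketch (not the MATLAB implementation used in the paper), with toy data for illustration:

```python
import numpy as np

def r_squared(y, y_hat) -> float:
    """Coefficient of determination: R^2 = 1 - SSE/SST, where
    SSE = sum((y - y_hat)^2) and SST = sum((y - mean(y))^2)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    sse = np.sum((y - y_hat) ** 2)
    sst = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - sse / sst

# A perfect fit gives R^2 = 1; deviations lower it.
print(r_squared([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0
```

In the R2 maps discussed above, this quantity is computed once per pixel, with y the measured SI values across the DICOM series and ŷ the values of the fitted relaxation model.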
For accuracy, it should be added that the coefficient of determination itself may take values close to "1" even with a poorly selected model. In the case of this article, there is no such danger, because the mathematical description of the phenomenon is known and has been implemented as the approximation function. Figure 5 is a practical example of the use of neural networks: it shows the result of T1 mapping of MCF-7 breast cancer cell cultures. These cells are located in the lower part of the Eppendorf tube and are clearly demarcated from the culture fluid (for clarification, the images have been "turned" to a vertical position). Details related to the culture of these cells are included in the paper [18]. Noise reduction is clearly visible, and the signal from the test tubes, which is an artifact, was also largely eliminated. Figure 5a shows the cell mapping results. In the aerated space, the places where the approximation algorithm calculated the longitudinal relaxation time T1 are marked in red. The computed value is well above 4000 ms. Such a state of affairs is impossible; the series of signal intensity vs. TR data together with noise resulted in an incorrect determination of the airspace relaxation time. The neural network trained on the image successfully rejected these data as incorrect.
The data showing the edge of the test tubes (red color on the edge of the test object) were also rejected. While segmenting the data, the neural network also rejected the upper part of the culture fluid in which the cells were immersed. The fluid-air interface has a distorted relaxation time estimate and is an artifact that was successfully removed. The phenomenon of partial volume and its impact on the obtained images has been discussed in [19]. There, on the example of a deliberately prepared phantom, changes in the T1 time values were shown when the 7 mm wide layer covered by the measurement contained both water and gel. The water-gel ratio as a function of the layer length was variable, which allowed the partial volume effect to be visualized. Figure 6 presents the results of the analysis of T2 with and without neural networks for the MCF-7 cell culture. As with T1, there was a reduction in artifacts and noise. The change in the image scale is caused by a different choice of ROI for the analysis; changing the image scale does not adversely affect the visualization by the neural network. Figure 7 shows the response of the neural network. In the images, black shows "0", i.e., places that the artificial intelligence algorithms considered noise, while white represents places classified as "1", i.e., objects that are phantoms or cell cultures.
Figure 7. (a) The "response" of the neural network obtained as a result of the analysis of the DICOM image sets for the T1 longitudinal relaxation time in the phantom consisting of 5 test tubes; (b) the "response" of the neural network obtained as a result of the analysis of the DICOM image sets for the T1 longitudinal relaxation time in the MCF-7 cell culture. The task of the neural network was to classify the data into two categories: the first is the area whose characteristics correspond to magnetic resonance phenomena (white area), while the second is the area whose behavior does not correspond to MR phenomena (black area). On the basis of this response, the aerated areas were not analyzed, which reduced the time needed for data processing and thus optimized it.
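The way such a binary response shortens the computation can be sketched as a masking step: the per-pixel curve fitting is run only where the network answered "1". A minimal illustration with a hypothetical 4x4 mask and a placeholder standing in for the fit:

```python
import numpy as np

# Binary "response" of the network: 1 = phantom/cells, 0 = aerated noise.
mask = np.array([[0, 0, 0, 0],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])

fitted = 0
t1_map = np.zeros(mask.shape)
for i, j in zip(*np.nonzero(mask)):
    t1_map[i, j] = 800.0  # placeholder for the per-pixel T1 fit (illustrative)
    fitted += 1

print(fitted, "of", mask.size, "pixels fitted")  # 4 of 16 pixels fitted
```

The speedup is exactly the fraction of masked-out pixels, which matches the observation above that the time reduction depends on the ratio of airspace pixels to phantom pixels.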


Discussion
A neural network is a computing system with interconnected nodes that work much like neurons in the human brain. Using algorithms, such a network can recognize hidden patterns and correlations in raw data, cluster and classify them, and continuously learn and improve [18]. Recently, neural networks have become a common tool in the hands of engineers supporting the medical fields. Neural networks excel at quantitative and qualitative assessment of imaging, and the processing of images with neural networks has become useful in medical diagnostics [19]. Some examples include colon examination in patients with a potential risk of colorectal cancer [20]. The risk of osteoporosis was predicted with great accuracy using AI [21]. The detection of breast asymmetry and the classification of calcifications in the breast were investigated with neural networks [22]. Other studies [23][24][25][26][27] have found neural networks useful in mammography screening. MRI together with neural networks was used in the analysis of the liver [28], myocardium [29][30][31][32], and breast [33]. The possibility of using neural networks on MRI data is a helpful tool in medical research. Thus, in clinical settings, it can be useful to recognize the margin, texture, spiculation, and lobulation of the MRI image. Quantitative and qualitative evaluation of MRI images with the use of neural networks can help in discovering new imaging biomarkers. Neural network algorithms rely on training over multiple trials. The proposed scheme of operation allows for variability in the characteristics of imaging systems. Interpretability of neural network approaches for MRI is important for clinical trust and for troubleshooting systems [34]. In the literature, work can be found that uses the concept of artificially generated negative data to form decisions using a multilayer perceptron [35].
A few models of neural networks are known, e.g., Convolutional Neural Networks (CNNs) [36,37], Recurrent Neural Networks (RNNs) [38], Generative Adversarial Networks (GANs) [39], and networks for Quantitative Susceptibility Mapping (QSM) [40,41]. CNNs in particular have been the most widely used for MRI and other image processing applications. MRI data are typically input as two-dimensional (2D) or three-dimensional (3D) arrays of pixels processed by the neural network [42]. Moreover, deep learning methods have been coupled with digital mammographic imaging to classify breast density categories. RNNs, in turn, contain data feedback loops; this feedback serves as a type of "memory," allowing them to use recent outputs as updated inputs for subsequent calculations. Loss of spatial resolution can be overcome by inserting skip connections between the sides to pass important details through to the output image [29]. GANs consist of two competing components: (1) the Generator, a deconvolutional network that uses random noise and interpolation to generate "fake" but realistic-looking images, and (2) the Discriminator, a conventional CNN previously trained with supervised learning to identify real images at a certain level of accuracy. Moreover, QSM is a growing field of research in MRI, aiming to noninvasively estimate the magnetic susceptibility of biological tissue. Mapping the signals back to known tissue parameters (T 1 , T 2 , and proton density) is then a rather difficult inverse problem [41,43]. Estimation of noise and image denoising in MRI has been an important field of research for many years, employing a plethora of methods.
The latest studies have shown that deep learning and radiomics based on hepatic CT and MR imaging have potential application value in the diagnosis, treatment evaluation, and prognosis prediction of common liver diseases [44,45].

Methods
The algorithm (Figure 8) describes the sequence of operations necessary to apply neural networks in the analysis of the MRI signal as an image segmentation tool for the determination of T 1 and T 2 relaxation times with application to cancer cell culture. The first step is the preparation of the cell culture samples; in this case, these were MCF-7 breast cancer cells.
The next step was to scan them in the MR OPTIMA 360 1.5 Tesla system. The scanning included the acquisition of data necessary to develop images that are maps of the distribution of T 1 and T 2 relaxation times. The maps made it possible to proceed to the next step, namely the calculation of the appropriate relaxation times for each pixel of the image, both transverse and longitudinal.
In order to develop the optimization method, a neural network was created in MATLAB, whose task was to analyze the data collected earlier. Every such network also requires a training process, which was the next step. Training was based on data from the images that were to be analyzed; at this point, human participation was necessary to determine which part of the image is noise and which is not, and on this basis the algorithms trained the network. The network was matched to the data acquired during the acquisition: the number of inputs was equal to the number of images, i.e., the number of measurement points taken into account in the approximation process, and the output was one neuron with a unipolar response function. After training was completed, the raw data were processed using this artificial intelligence method. The pattern recognition method classified the raw DICOM images pixel by pixel into two categories: noise, assigned a value close to "0," and non-noise, assigned a value close to "1".
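The training step described above can be sketched as follows. The paper used MATLAB's patternnet; the snippet below is a hedged Python stand-in using scikit-learn's MLPClassifier with the same shape (one input per acquired image, 10 hidden neurons with a bipolar tanh activation, a logistic output close to "0" for noise and "1" for object pixels). The signal model, noise level, and number of labelled pixels are illustrative assumptions, not values from the study.

```python
# Hedged sketch: scikit-learn stand-in for MATLAB's patternnet.
# X has one row per labelled pixel and one column per acquired image (23 TRs);
# y is 1 for object pixels and 0 for noise/aerated pixels (human-labelled).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_images = 23                       # one network input per acquired image

# Synthetic stand-in for the labelled training data described in the text.
TR = np.linspace(40, 15000, n_images)
object_px = 1000 * (1 - np.exp(-TR / 800)) + rng.normal(0, 20, (200, n_images))
noise_px = rng.normal(0, 20, (200, n_images))
X = np.vstack([object_px, noise_px]) / 1000.0   # normalise intensities
y = np.array([1] * 200 + [0] * 200)

# 23 inputs -> 10 tanh ("bipolar") hidden neurons -> logistic ("unipolar") output.
net = MLPClassifier(hidden_layer_sizes=(10,), activation="tanh",
                    solver="lbfgs", max_iter=2000, random_state=0).fit(X, y)
mask = net.predict(X)               # ~0 for noise, ~1 for object pixels
```

In this sketch the human labelling step is replaced by synthetic labels; in the actual workflow the labels come from manually selected image regions.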
Based on the classification, the data were re-analyzed to determine the time gain. A series of tests in the analyzed space showed that the processing time could be reduced from approximately 35 min for a full analysis to approximately 6 min using the artificial intelligence method. It should be added that this result depends on the ratio of the aerated area (and areas depicting artifacts) to the area corresponding to the phantom or cell culture; nevertheless, the neural network clearly made it possible to significantly reduce the time necessary for the calculations. The work was carried out on a PC-class system equipped with an i5 processor, 16 GB of RAM, and a 250 GB SSD. The authors deliberately chose a middle-class system for the tests to show that neural network methods allow for significant optimization of the work. Faster systems, in particular those equipped with graphics cards supporting CUDA (Compute Unified Device Architecture), would allow further improvement in this area; however, the idea of this work was to show the possibilities of neural networks under average conditions. The final stage of the algorithm was the presentation of the data, both as maps of the T 1 and T 2 distributions and as histograms. It is the frequency analysis shown in the histogram charts that says the most about the qualitative gain of this work. Note that the height of the histogram is affected by pixels regardless of their position in the image, so the height of the graphs is also modified by pixels in the noisy, aerated area. The network analysis removed this noise and thus limited its influence on the histogram.
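The source of the time gain is that the expensive per-pixel approximation is run only where the network's mask equals "1". A minimal sketch, with an assumed image size, object region, and a placeholder fitting function standing in for the actual T 1 /T 2 approximation:

```python
# Hedged sketch of the time gain: fit relaxation curves only where the
# network classified the pixel as object (mask == 1), skipping aerated areas.
import numpy as np

def fit_pixel(series):
    # Placeholder for the per-pixel T1/T2 approximation (the slow step).
    return series.mean()

mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True           # assumed object region (~10% of pixels)
data = np.random.rand(64, 64, 23)   # [x, y, z] image stack as in the text

t1_map = np.full(mask.shape, np.nan)
for ix, iy in zip(*np.nonzero(mask)):   # only object pixels are fitted
    t1_map[ix, iy] = fit_pixel(data[ix, iy, :])
```

With this structure the runtime scales with the object fraction of the image, which is consistent with the observation that the measured gain depends on the ratio of aerated to object area.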
For the experiment, four test tubes (No. 1-4) containing an aqueous CuSO 4 solution of various concentrations and one sample of distilled water were prepared.
Test tube No. 1 contained 10 mL of distilled water to which 0.1079 g of copper sulphate (molar mass 159.609 g/mol) was added. Each subsequent test tube was prepared in the same way: 1 mL of the solution was taken from the previous test tube and supplemented with 9 mL of distilled water. These four solutions differed significantly in concentration in order to vary the T 1 and T 2 times. T 1 was measured with an FSE sequence. The TE time was 20 ms, while the TR times were, respectively: 40, 50, 60, 78, 80, 100, 120, 140, 200, 240, 300, 400, 500, 600, 700, 800, 1000, 1500, 2000, 3000, 5000, 10,000, and 15,000 ms [18]. The increased number of measurement points is necessary to cover the significant range of T 1 times that occurred in the study, and a large number of points has a positive effect on the approximation of the obtained results. In this case, there is no need to minimize the examination time, as is required when examining patients.
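The per-pixel T 1 determination from such a TR series can be sketched as a nonlinear fit of a saturation-recovery model, SI(TR) = M0 · (1 − exp(−TR/T 1 )). The TR values below are those listed above; the pixel signal is a noiseless synthetic stand-in (assumed M0 and T 1 ), since the paper's actual fitting routine is not reproduced here.

```python
# Hedged sketch: per-pixel T1 estimate from the TR series via a
# saturation-recovery model SI(TR) = M0 * (1 - exp(-TR / T1)).
import numpy as np
from scipy.optimize import curve_fit

# TR values (ms) as listed in the text.
TR = np.array([40, 50, 60, 78, 80, 100, 120, 140, 200, 240, 300, 400, 500,
               600, 700, 800, 1000, 1500, 2000, 3000, 5000, 10000, 15000.0])

def sat_recovery(tr, m0, t1):
    return m0 * (1.0 - np.exp(-tr / t1))

si = sat_recovery(TR, 1000.0, 600.0)    # synthetic pixel: M0=1000, T1=600 ms
(m0_fit, t1_fit), _ = curve_fit(sat_recovery, TR, si, p0=(si.max(), 500.0))
```

The dense sampling at short TRs and the very long final TRs give the fit leverage on both short and long T 1 values, which matches the motivation given above for the large number of measurement points.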
The materials for this article were images saved in the medical DICOM 3.0 standard; they came from the phantom study described above. Every neural network requires a learning process, which takes place with the use of data whose nature is known. In this case, data were collected from two areas of the image: the first is the object area, while the second is the aerated area. For each series, data were collected from the same scan and from the same location. This is shown in Figure 9a, where sample acquisition sites are marked with colors, while in Figure 9c, the acquisition areas are marked with large numbers. The areas numbered from 1 to 50 in Figure 9c are the places from which the data for the graph in Figure 9b came. Analyzing these data, it can be seen that the curves coming from the individual places imaging the phantom differ in shape. Basically, we are still dealing with an exponential function; however, it differs depending on the value of the time T 1 . Moreover, the maximum SI values from the individual tubes are characterized by high variability, while the waveforms originating from the aerated area lie close to the horizontal axis. For the sake of accuracy, it should be added that the graphs of the noise values are located close to the horizontal axis, which makes them only slightly emphasized on the scale of the graph.
The task of the neural network was to segment the image in such a way that only the area of the tested object remained for the calculations. The downloaded data are slices of images with a side of 10 pixels. As a result of this operation, two sets of data were obtained, which, from the point of view of the MATLAB program, were matrices with dimensions [x,y,z], where x and y are the dimensions of the sample, while z is the number of analyzed series made during the study to determine the individual times T 1 and T 2 . Each data vector, as described earlier, was assigned a response value of "0" for the aerated area or "1" for pixels depicting the test object. Figure 10 presents the structure of the neural network that was created in MATLAB, taking into account the data about the study and the number of acquisition points. In this case, it has 23 inputs, 10 neurons in the hidden layer, and one neuron in the output layer. The activation functions for the hidden neurons and the output neuron are bipolar and unipolar functions, respectively.
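The mapping between the [x,y,z] matrices and the 23-input network can be sketched as a simple reshape: each pixel's 23-point time series becomes one feature row, and the network's 0/1 responses are reshaped back into an image-sized mask. The image size and the thresholding stand-in for the trained network below are illustrative assumptions.

```python
# Hedged sketch: flatten the [x, y, z] DICOM stack into a (pixels x 23)
# feature matrix so each pixel's 23-point time series feeds the 23 network
# inputs, then reshape the 0/1 response back into an image mask.
import numpy as np

x, y, z = 256, 256, 23                 # assumed image size, 23 acquisitions
stack = np.random.rand(x, y, z)        # stand-in for the registered series
features = stack.reshape(-1, z)        # one row per pixel, 23 columns

# Stand-in for the trained network's unipolar output (0 = aerated, 1 = object).
responses = (features.mean(axis=1) > 0.5).astype(int)
mask = responses.reshape(x, y)
```

This row-per-pixel layout is what makes the matrix-oriented MATLAB implementation efficient: the whole image can be classified in a single batched forward pass rather than pixel by pixel.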