Abstract
Real-time and accurate identification of the blast furnace (BF) condition is essential for maintaining stability and improving energy efficiency in steelmaking. However, the harsh environment inside the BF makes direct acquisition of the BF condition extremely difficult. To address this challenge, this study proposes an online BF condition recognition method based on spatiotemporal texture feature coupling and diffusion networks (STFC-DN). The method employs a multi-domain Swin-Transformer module (MDSTM) combined with wavelet decomposition and channel attention to extract the gas flow region. A temporal feature pyramid network module (T-FPNM) is then used to capture both the global and local spatiotemporal characteristics of this region. Heuristic clustering and an idempotent generative network (IGN) are introduced to obtain standardized BF condition features, enabling intelligent classification through multi-metric similarity analysis. Experimental results show that the proposed STFC-DN achieves an average accuracy exceeding 98% when identifying four BF conditions: normal, hanging, oblique stockline, and collapsing, with an inference speed of approximately 28 FPS. This approach demonstrates both high accuracy and real-time capability, showing strong potential for advancing the intelligent and sustainable development of the steel industry.
1. Introduction
The blast furnace (BF) is the core facility in the steelmaking process; its operation consumes substantial energy and directly determines steel quality [1,2]. BF conditions serve as key indicators of BF operational stability and smelting efficiency [3,4]. Therefore, precise and real-time monitoring of BF conditions is critically important [5,6]. However, the high-temperature, high-pressure, sealed, and dusty environment inside the BF renders direct acquisition of BF condition information highly challenging [7]. Although the high-temperature industrial endoscopes developed by Chen, Yi, and colleagues [8,9,10] enable real-time acquisition of BF videos, the absence of auxiliary light sources often causes BF burden surface images to exhibit blurred textures and uneven illumination, severely limiting subsequent BF condition analysis [11]. Therefore, there is an urgent need to develop efficient BF condition recognition methods to achieve intelligent control of the BF ironmaking process [12].
Existing studies on BF condition recognition can be broadly categorized into two approaches: mechanism-based modeling combined with data-driven learning, and multimodal fusion with intelligent prediction [13].
(1) Mechanism-based modeling combined with data-driven learning: BF condition recognition is commonly performed under the constraints of mechanistic models, with operational data integrated and deep learning employed for modeling. Xu et al. [14] combined industrial endoscopy with a virtual multi-camera array to achieve real-time 3D measurement of the BF burden surface under extreme environments, demonstrating reliable performance for BF condition monitoring and control. Zhou et al. [15] established quantitative criteria for abnormal condition recognition by detecting coke particle size and temperature distribution in the tuyere region. Duan et al. [16] developed a novel deep neural network, ES-SFRNet, which achieves intelligent recognition of BF tuyere conditions by fusing tuyere images with temporal data. Zhao et al. [17] introduced the GT-MSPC model, which leverages multi-objective optimization for abnormal condition detection and early warning. These methods can achieve recognition accuracy above 95% under specific operating conditions. However, their performance relies heavily on sensor precision and the accurate measurement of internal furnace parameters. When the operating conditions become complex, the prediction error of the models exceeds 10%, which limits their applicability in industrial environments.
(2) Multimodal fusion with intelligent prediction: To overcome the limitations of single data sources, some studies have attempted to integrate radar point clouds, visible/infrared images, and temperature data to construct multi-source information-driven discriminative models. Guo et al. [18] proposed a BF condition classification model based on multi-source fusion and Dynamic Process Adaptive Kernel Fisher Discrimination (DPAKF), which achieved high accuracy. Tian et al. [13] employed RGB-D cameras and digital elevation models to measure burden surface roughness and perform state discrimination, enabling comprehensive BF condition recognition. Lee et al. [19] combined hybrid quantum machine learning with pulverized coal injection control to improve the stability of BF temperature prediction. This type of method achieves an average classification accuracy of 98.2% for BF conditions, representing a 25% improvement in prediction accuracy compared with traditional models. However, issues such as high computational complexity, difficulty in cross-modal data alignment, and limited interpretability still remain.
To address these issues, this paper proposes an online BF condition recognition method based on spatiotemporal texture feature coupling and diffusion networks (STFC-DN). The main contributions are as follows:
(1) To overcome the problem that blurred BF burden surface images obscure critical gas flow regions, a multi-domain Swin-Transformer module (MDSTM) is proposed to enable precise localization and segmentation of these regions.
(2) To handle the challenge that gas flow regions change dynamically over time and are difficult to comprehensively represent using conventional methods, a temporal feature pyramid network module (T-FPNM) is proposed, which integrates spatial features with temporal dynamic features to achieve multi-scale representation of BF conditions.
(3) To reduce the strong dependence on BF condition labels, alleviate the difficulty of manual annotation, and improve classification accuracy, an intelligent recognition method based on feature generation is introduced. This approach effectively enhances recognition accuracy and robustness under complex BF conditions.
The remainder of this paper is organized as follows: Section 2 introduces BF condition classification and BF burden surface image preprocessing. Section 3 elaborates on the proposed recognition method. Section 4 presents the experimental design and analysis of results. Section 5 concludes the paper and outlines future research directions.
2. Classification of BF Conditions and Color Rendering of BF Burden Surface Images
2.1. Classification and Feature Description of BF Conditions
BF conditions are generally classified into two categories: normal conditions and abnormal conditions. Abnormal conditions primarily include typical types such as hanging, oblique stockline, and collapsing. Table 1 summarizes the characteristic features associated with different BF conditions, and Figure 1 illustrates representative image examples.
Table 1.
Classification and Characteristics of BF Conditions.
Figure 1.
Illustrations of BF conditions: (a) normal condition, (b) hanging, (c) oblique stockline, and (d) collapsing.
2.2. Image Enhancement and Color Rendering of BF Burden Surface
The BF burden surface images captured by a high-temperature industrial endoscope, as illustrated in Figure 2, are of poor quality. To address this limitation, STFC-DN incorporates image clarification and color rendering into the preprocessing stage.
Figure 2.
BF burden surface image acquired by high-temperature industrial endoscope: (a–c) BF burden surface images obtained from different batches.
During image clarification, the BF burden surface image is first decomposed into an illumination map and a reflectance map using an encoder–decoder network. Subsequently, gamma correction is applied to the illumination map to improve brightness uniformity, while a bilateral texture filtering-based detail enhancement method [20] is applied to the reflectance map to emphasize the edges and texture features of the gas flow region, as shown in Equation (1).
where BS denotes the BF burden surface image; Illumination and Reflectance represent the decomposed illumination and reflectance components; a gamma value γ controls the brightness correction that maps the input illumination map to its enhanced counterpart; the bilateral texture filtering operation, with a layer-dependent filter size at the i-th layer, separates a texture detail image to which an enhancement function is applied; and the result constitutes the enhanced reflectance information.
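The two enhancement branches can be sketched in numpy as follows. This is a minimal illustration under stated assumptions: `gamma_correct` and `enhance_reflectance` are hypothetical names, and a box-blur base/detail split stands in for the bilateral texture filtering of [20].

```python
import numpy as np

def gamma_correct(illumination, gamma=0.6):
    """Brighten the illumination map; gamma < 1 lifts dark regions."""
    return np.clip(illumination, 0.0, 1.0) ** gamma

def enhance_reflectance(reflectance, blur_size=5, boost=1.5):
    """Stand-in for bilateral texture filtering: a box blur separates the
    base layer, and the residual detail layer is amplified."""
    k = blur_size
    pad = k // 2
    padded = np.pad(reflectance, pad, mode="edge")
    base = np.zeros_like(reflectance)
    for dy in range(k):
        for dx in range(k):
            base += padded[dy:dy + reflectance.shape[0],
                           dx:dx + reflectance.shape[1]]
    base /= k * k
    detail = reflectance - base          # high-frequency texture residual
    return np.clip(base + boost * detail, 0.0, 1.0)

# Recompose an enhanced burden surface image from the two branches.
illum = np.linspace(0.1, 0.9, 16).reshape(4, 4)
refl = np.random.default_rng(0).uniform(0.2, 0.8, (4, 4))
enhanced = gamma_correct(illum) * enhance_reflectance(refl)
print(enhanced.shape)  # (4, 4)
```

In practice the decomposition into illumination and reflectance is produced by the encoder–decoder network, which is omitted here.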
During color rendering, regions with varying temperature and brightness distributions are mapped into a pseudo-color space to enhance the distinction of gas flow patterns. The experimental results are presented in Figure 3.
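A minimal pseudo-color mapping can be sketched as below; the blue-to-red ramp is a hypothetical simplification of the rendering used in the paper, chosen so that hot (bright gas flow) regions appear red and cool regions blue.

```python
import numpy as np

def pseudo_color(gray):
    """Map a normalized grayscale image to a simple blue -> green -> red
    heat ramp, returning an (H, W, 3) RGB array."""
    g = np.clip(gray, 0.0, 1.0)
    r = np.clip(2.0 * g - 1.0, 0.0, 1.0)   # ramps up in the upper half
    b = np.clip(1.0 - 2.0 * g, 0.0, 1.0)   # ramps down in the lower half
    gch = 1.0 - r - b                      # peaks at mid intensities
    return np.stack([r, gch, b], axis=-1)

img = np.linspace(0.0, 1.0, 5).reshape(1, 5)
rgb = pseudo_color(img)
print(rgb[0, 0], rgb[0, -1])  # darkest pixel -> blue, brightest -> red
```

A library colormap (e.g. OpenCV's `cv2.applyColorMap`) would serve the same purpose in a production pipeline.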
Figure 3.
BF burden surface image after enhancement and pseudo-color rendering: (a,c) clarified BF burden surface images and (b,d) color-rendered BF burden surface images.
3. Methods
STFC-DN first utilizes the MDSTM to precisely extract the gas flow region. Subsequently, the multi-scale texture features and spatial structural features of this region are computed separately and then fused to construct a BF condition representation vector. Finally, an intelligent recognition method based on feature generation is introduced to enable precise identification of the BF condition. The overall framework of the proposed method is illustrated in Figure 4.
Figure 4.
Overall framework of the proposed STFC-DN method.
3.1. Gas Flow Region Extraction Based on Multi-Domain Swin-Transformer Module (MDSTM)
The MDSTM incorporates multi-scale wavelet decomposition and a channel attention mechanism into the Swin-Transformer [21] to enhance detail capture and global structural perception in gas flow regions. The structure of the MDSTM is illustrated in Figure 5.
Figure 5.
Structure of the proposed MDSTM.
First, during the model training stage, a total of 2069 representative BF burden surface images were selected, with each frame corresponding to one second of the internal furnace state. The dataset included 758 frames of normal BF condition, 547 frames of hanging, 435 frames of oblique stockline, and 329 frames of collapsing. The high-temperature gas flow region in each frame was manually annotated based on three criteria: the regional brightness exceeded the background by more than 25%, the region exhibited plume-like or radially diffused bright textures, and its edge morphology remained stable while changing continuously across adjacent frames. These annotations were used to generate binary supervision labels. In addition, multi-scale features obtained through two-dimensional wavelet transform were introduced as guiding information. Subsequently, the channel attention weights were calculated according to the correlation within the gas flow region, and the extracted features were weighted as follows:
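The channel reweighting step can be sketched in numpy. The squeeze-and-softmax scheme below is a hypothetical simplification: the paper's weights are learned from correlation with the gas flow region, and the wavelet-derived guidance features are omitted.

```python
import numpy as np

def channel_attention(features):
    """Squeeze-and-excite style reweighting for a (C, H, W) feature tensor.
    Each channel's global average response is softmax-normalized into an
    attention weight, and the channels are rescaled accordingly."""
    squeeze = features.mean(axis=(1, 2))     # (C,) per-channel response
    e = np.exp(squeeze - squeeze.max())      # numerically stable softmax
    weights = e / e.sum()
    return features * weights[:, None, None], weights

feats = np.random.default_rng(0).normal(size=(4, 8, 8))
weighted, w = channel_attention(feats)
print(weighted.shape, round(float(w.sum()), 6))  # (4, 8, 8) 1.0
```

In the MDSTM these weights would be applied to the concatenation of spatial features and wavelet sub-band features before the transformer layers.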
The feature set was then fed into multiple layers of the multi-domain Swin-Transformer module (MDSTM). The window-based multi-head self-attention mechanism (W-MSA) was employed to capture local feature dependencies, while the shifted window strategy (SW-MSA) was used to achieve information interaction across different windows:
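The window and shifted-window partitioning that W-MSA and SW-MSA operate on can be illustrated as follows; the attention computation itself is omitted, and `window_partition`/`shift_windows` are illustrative names rather than the paper's implementation.

```python
import numpy as np

def window_partition(x, win):
    """Split an (H, W) feature map into non-overlapping win x win windows,
    as W-MSA does before computing self-attention inside each window."""
    H, W = x.shape
    return (x.reshape(H // win, win, W // win, win)
             .transpose(0, 2, 1, 3)
             .reshape(-1, win, win))

def shift_windows(x, win):
    """SW-MSA cyclically shifts the map by win // 2 before partitioning so
    that tokens near window borders can interact across windows."""
    s = win // 2
    return window_partition(np.roll(x, shift=(-s, -s), axis=(0, 1)), win)

x = np.arange(64).reshape(8, 8)
print(window_partition(x, 4).shape, shift_windows(x, 4).shape)  # (4, 4, 4) (4, 4, 4)
```

Alternating these two partitions across successive layers is what lets the Swin-Transformer propagate information beyond a single window while keeping attention cost linear in image size.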
Finally, the output features are mapped to the probability space of the gas flow regions through a fully connected layer (FC), yielding the segmentation results, calculated as follows:
The model was trained in a supervised manner, where the input images and their corresponding labels were used for feature learning. The loss function was defined as a weighted combination of binary cross-entropy loss and dice loss, written as follows:
where the two adjustable weighting coefficients balance the binary cross-entropy and Dice terms.
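The combined loss can be sketched as below; `bce_dice_loss` and its default weights are assumptions for illustration, not the paper's tuned coefficients.

```python
import numpy as np

def bce_dice_loss(pred, target, w_bce=0.5, w_dice=0.5, eps=1e-7):
    """Weighted sum of binary cross-entropy and Dice loss for a binary
    segmentation map; pred holds probabilities and target holds {0, 1}."""
    p = np.clip(pred, eps, 1.0 - eps)
    bce = -(target * np.log(p) + (1.0 - target) * np.log(1.0 - p)).mean()
    inter = (p * target).sum()
    dice = 1.0 - (2.0 * inter + eps) / (p.sum() + target.sum() + eps)
    return w_bce * bce + w_dice * dice

t = np.array([[1.0, 0.0], [1.0, 1.0]])
good = bce_dice_loss(np.array([[0.9, 0.1], [0.8, 0.9]]), t)
bad = bce_dice_loss(np.array([[0.2, 0.8], [0.3, 0.1]]), t)
print(good < bad)  # True
```

Pairing the pixel-wise BCE term with the region-overlap Dice term is a common remedy when the foreground (here, the gas flow region) occupies only a small fraction of the frame.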
3.2. Feature Computation of Gas Flow Regions
After the MDSTM precisely extracts the gas flow region from the BF burden surface image, the texture features and spatiotemporal structural characteristics of this region need to be further quantified.
3.2.1. Multi-Scale Texture Features
The multi-scale texture features comprise the perimeter P, area A, brightness B, and sharpness S of the gas flow region, which are combined through weighted fusion, as formulated in Equation (6):
where each fusion weight is learnable and automatically updated through a Multi-Layer Perceptron (MLP) followed by Softmax normalization:
The fused feature preserves geometric shape information while integrating edge characteristics, thereby providing input that combines both global and local information for BF condition recognition.
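Equation (6) can be sketched as follows. The fixed logits standing in for the MLP output are a hypothetical placeholder; only the Softmax normalization and the weighted sum follow the paper.

```python
import numpy as np

def fuse_texture_features(p, a, b, s, logits=None):
    """Weighted fusion of perimeter, area, brightness, and sharpness.
    In the paper the logits come from a small MLP; here they default to
    zeros (uniform weights) and are Softmax-normalized as in Equation (6)."""
    x = np.array([p, a, b, s], dtype=float)
    z = np.zeros(4) if logits is None else np.asarray(logits, dtype=float)
    e = np.exp(z - z.max())
    w = e / e.sum()                  # softmax-normalized fusion weights
    return float(w @ x), w

fused, w = fuse_texture_features(120.0, 850.0, 0.7, 0.4)
print(round(float(w.sum()), 6))  # 1.0
```

In a full implementation the four raw features would also be normalized to comparable scales before fusion, since area and perimeter differ from brightness and sharpness by orders of magnitude.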
3.2.2. Spatiotemporal Structural Feature Extraction Based on Temporal FPN Module (T-FPNM)
The structure of the T-FPNM is illustrated in Figure 6. First, a multi-scale convolutional network is employed to extract the spatial features of each gas flow region frame and to generate multi-level feature maps. Subsequently, feature maps at the same level are stacked into a sequence and fed into a convolutional long short-term memory (ConvLSTM) network to produce feature maps that incorporate temporal information [22]:
where l denotes the layer index. Finally, a top-down strategy is employed to iteratively fuse features across layers: each coarser feature map is upsampled to match the resolution of the next finer level and then fused with it via element-wise addition:
Figure 6.
Structure of the proposed T-FPNM.
By propagating down to the bottom layer, the complete multi-scale spatiotemporal feature set is obtained. This set is then combined with the multi-scale texture features of the region extracted by the MDSTM to construct the BF condition representation vector.
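The top-down fusion pass can be sketched as below. Nearest-neighbor upsampling stands in for whatever interpolation the paper uses, and the ConvLSTM stage is assumed to have already produced the per-level maps.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbor 2x upsampling of an (H, W) feature map."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def top_down_fuse(pyramid):
    """FPN-style top-down pass: start at the coarsest level, upsample, and
    add element-wise into the next finer level. `pyramid` is ordered fine ->
    coarse, each level half the resolution of the previous one."""
    fused = [pyramid[-1]]
    for level in reversed(pyramid[:-1]):
        fused.append(level + upsample2x(fused[-1]))
    return fused[::-1]               # restore fine -> coarse order

rng = np.random.default_rng(0)
pyr = [rng.normal(size=(8, 8)), rng.normal(size=(4, 4)), rng.normal(size=(2, 2))]
out = top_down_fuse(pyr)
print([m.shape for m in out])  # [(8, 8), (4, 4), (2, 2)]
```

Real FPN implementations insert a 1x1 convolution before the addition to align channel counts [22]; that step is omitted here since the sketch uses single-channel maps.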
3.3. Intelligent BF Condition Recognition Method Based on Feature Generation
The method first obtains BF condition labels through heuristic fuzzy clustering (HFC), then employs an idempotent generative network (IGN) [23] to generate standardized feature representations of BF conditions, and finally combines multi-metric similarity measurements to achieve BF condition classification. The workflow is shown in Figure 7.
Figure 7.
Workflow of the proposed method.
First, the HFC uses the multi-scale texture features of the gas flow region as clustering centers to derive labels for four categories of BF conditions, which are subsequently fed into the IGN network:
where the network input is noise and the outputs are the generated BF condition features. Subsequently, the BF condition representation vector is compared with the generated features:
where the three terms denote the modified cosine similarity, the normalized Mahalanobis distance, and the normalized Euclidean distance, respectively, and m represents the feature dimension. The results of the three metrics are integrated using an adaptive weight learning network. Finally, the BF condition category is determined by the maximum discrimination score:
where the maximized quantity is the integrated discrimination score for each BF condition category.
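The multi-metric comparison and maximum-score decision can be sketched as follows. Two simplifications are assumed: a diagonal covariance proxy replaces the full Mahalanobis covariance, and fixed metric weights replace the adaptive weight learning network; `classify_condition` is an illustrative name.

```python
import numpy as np

def classify_condition(v, prototypes, weights=(1/3, 1/3, 1/3)):
    """Compare a BF condition vector v against generated prototype features
    using cosine similarity plus Euclidean and (diagonal) Mahalanobis
    distances converted into similarities; return the argmax category."""
    protos = np.asarray(prototypes, dtype=float)
    var = protos.var(axis=0) + 1e-8          # diagonal covariance proxy
    scores = []
    for p in protos:
        cos = v @ p / (np.linalg.norm(v) * np.linalg.norm(p) + 1e-12)
        euc = np.linalg.norm(v - p)
        mah = np.sqrt(((v - p) ** 2 / var).sum())
        s = (weights[0] * cos
             + weights[1] / (1.0 + euc)      # distances -> similarities
             + weights[2] / (1.0 + mah))
        scores.append(s)
    return int(np.argmax(scores)), scores

protos = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
label, _ = classify_condition(np.array([0.9, 0.1]), protos)
print(label)  # 0
```

In the full method each prototype would be an IGN-generated standardized feature for one of the four BF conditions, and the three metric weights would be produced by the adaptive weight learning network.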
4. Experiments
The experimental data were obtained from independent BF condition videos captured by a high-temperature industrial endoscope in a steel plant. The dataset includes four typical BF conditions: normal, hanging, oblique stockline, and collapsing. A total of 162 video clips were selected, each with a duration of approximately 10 s. Sampling at 1 fps resulted in 2956 image frames, including 1083 frames of normal, 782 of hanging, 621 of oblique stockline, and 470 of collapsing. All data were divided into training, validation, and test sets in a ratio of 7:2:1. Representative samples are shown in Figure 8. All experiments were implemented in Python 3.10.8 on the PyCharm Professional 2023.2 platform (JetBrains s.r.o., Prague, Czech Republic), using an NVIDIA GeForce RTX 3090 GPU (NVIDIA Corporation, Santa Clara, CA, USA) with 24 GB of memory.
Figure 8.
BF Burden surface images captured by a high-temperature industrial endoscope: (a–d) images from different batches.
4.1. Generation of BF Condition Clustering Labels
To reduce reliance on manual annotation, the HFC is employed to automatically generate BF condition labels. The clustering results are illustrated in Figure 9, showing that different BF conditions can be effectively distinguished: normal conditions correspond to three clusters, hanging conditions to four clusters, oblique stockline conditions to five clusters, and collapsing conditions to six clusters. These results validate that the proposed method can automatically uncover potential BF condition structures from image features.
Figure 9.
Clustering results of BF condition labels obtained by HFC. (a) normal condition, (b) hanging, (c) oblique stockline, and (d) collapsing.
4.2. Accuracy and Efficiency in BF Condition Recognition
To evaluate the overall performance of the STFC-DN, a comparative analysis was conducted against several representative classification networks, including ResNet-50 [24], EfficientNet-B3 [25], HRNet-W32 [26], and the Vision Transformer [27], as shown in Table 2. The evaluation metrics included Precision, Recall, F1-score, and average inference speed, as defined in Equation (13). Precision, Recall, and F1-score were computed from the confusion matrix, which was generated by frame-by-frame comparison between model predictions and manual annotations.
Table 2.
Recognition accuracy and inference speed of different models under various BF conditions.
The results indicate that STFC-DN achieved average Precision, Recall, and F1-score values of 98.29%, 97.83%, and 98.16%, respectively, across the four BF conditions, with an average inference speed of 27.79 FPS. These results outperform those of the comparison models and demonstrate that STFC-DN can fully meet the real-time recognition requirements of blast furnace operations.
where TP, FP, and FN represent the numbers of true positive, false positive, and false negative samples, respectively, and N denotes the number of BF condition categories.
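The metric computation from a confusion matrix can be sketched as below; the example matrix values are hypothetical and do not reproduce the reported results.

```python
import numpy as np

def macro_prf(conf):
    """Macro-averaged Precision, Recall, and F1 from an N x N confusion
    matrix whose rows are true classes and columns are predictions."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)
    fp = conf.sum(axis=0) - tp           # predicted as class but wrong
    fn = conf.sum(axis=1) - tp           # true class but missed
    prec = tp / np.maximum(tp + fp, 1e-12)
    rec = tp / np.maximum(tp + fn, 1e-12)
    f1 = 2 * prec * rec / np.maximum(prec + rec, 1e-12)
    return prec.mean(), rec.mean(), f1.mean()

# Hypothetical 4-class matrix (normal, hanging, oblique stockline, collapsing).
cm = [[97, 1, 1, 1], [2, 95, 2, 1], [1, 1, 96, 2], [0, 1, 1, 98]]
p, r, f = macro_prf(cm)
print(p > 0.9 and r > 0.9 and f > 0.9)  # True
```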
4.3. Feature Analysis and Ablation Experiments of High-Temperature Gas Flow Regions
4.3.1. Multi-Scale Feature Computation of High-Temperature Gas Flow
The multi-scale texture features and spatiotemporal structural features of the gas flow region were computed to quantify the dynamic differences among various BF conditions. Table 3 presents the feature distributions under four representative BF conditions. Overall, in the abnormal conditions, the gas flow region exhibits significant deviations in area, brightness, and spatiotemporal characteristics compared with the normal condition. These deviations indicate stronger local instability and more pronounced dynamic fluctuations within the gas flow region.
Table 3.
Comparison of Gas Flow Region Features under Different BF Conditions.
4.3.2. Ablation Experiments on the Extraction of High-Temperature Gas Flow Regions
To assess the effectiveness of multi-domain guidance information in the MDSTM, the wavelet transform module was removed, and the model was subsequently retrained. Figure 10 compares the extraction results of the high-temperature gas flow region with and without the wavelet transform module, while Table 4 reports the corresponding BF condition recognition accuracy.
Figure 10.
Extraction results of the high-temperature gas flow region: (a) original image; (b) extraction with multi-domain guidance information; (c) extraction without multi-domain guidance information.
Table 4.
Comparison of BF condition recognition with and without multi-domain guidance.
It is evident that the recognition accuracy decreases substantially after removing the multi-domain guidance information, with the decline being particularly pronounced under normal conditions. This finding indicates that frequency-domain details are essential for the precise extraction of gas flow regions and further validates the necessity of the multi-domain feature fusion design.
5. Conclusions
This study addresses the challenges of the harsh internal environment of the BF and the difficulty of obtaining accurate BF condition information by proposing the STFC-DN method. The method employs the MDSTM to achieve precise extraction of the gas flow region, integrates the T-FPNM to characterize its dynamic spatiotemporal evolution, and utilizes feature generation together with a multi-metric discrimination mechanism to accomplish intelligent classification and online recognition of BF conditions. The proposed model can be integrated with the BF process control system to dynamically adjust key operational parameters such as burden distribution eccentricity, tuyere air volume, and pulverized coal injection rate based on recognition results. This enables early warning and prevention of abnormal BF conditions, thereby avoiding interruptions in furnace operation.
However, the performance of STFC-DN remains sensitive to the quality of BF burden surface images and illumination conditions during data acquisition. Its inference speed is limited on devices with low computational capability, and its generalization performance across different furnace types and operating conditions still requires further validation. Future work will focus on model lightweighting, robustness enhancement, and multimodal data fusion to improve the system’s long-term stability and adaptability in complex industrial environments.
Author Contributions
X.J., conceptualization, methodology, and writing—original draft preparation; J.H. (Jie Han), software, data curation, and validation; J.H. (Jianjun He), formal analysis and writing—review and editing; W.G., supervision, project administration, and funding acquisition. All authors have read and agreed to the published version of the manuscript.
Funding
Supported by the National Natural Science Foundation of China (grant No. 62373377) and the Projects of State Key Laboratory of Precision Manufacturing for Extreme Service Performance under grants ZZYJKT2025-08 and ZZYJKT2025-10.
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Scolari, S.; Dall’Osto, G.; Tuveri, A.; Mombelli, D.; Mapelli, C. Optimization of Red Mud and Blast Furnace Sludge Self-Reducing Briquettes Propaedeutic for Subsequent Magnetic Separation. Metals 2025, 15, 1108. [Google Scholar] [CrossRef]
- Xu, D.; Li, Z.; Chen, X.; Wang, Z.; Wu, J. A dielectric-filled waveguide antenna element for 3D imaging radar in high temperature and excessive dust conditions. Sensors 2016, 16, 1339. [Google Scholar] [CrossRef]
- Wu, H.; Yu, L.; Chang, S.; Zhang, Y.; Yang, J. Microstructure evolution behavior of blast-furnace coke under different gasification reaction conditions. Coatings 2022, 12, 1116. [Google Scholar] [CrossRef]
- Mio, H.; Narita, Y.; Nakano, K.; Nomura, S. Validation of the burden distribution of the 1/3-scale of a blast furnace simulated by the discrete element method. Processes 2019, 8, 6. [Google Scholar] [CrossRef]
- La, G.H.; Choi, J.S.; Min, D.J. Investigation on the reaction behaviour of partially reduced iron under blast furnace conditions. Metals 2021, 11, 839. [Google Scholar] [CrossRef]
- Li, Y.; Zhang, S.; Zhang, J.; Yin, Y.; Xiao, W.; Zhang, Z. Data-driven multiobjective optimization for burden surface in blast furnace with feedback compensation. IEEE Trans. Ind. Inform. 2019, 16, 2233–2244. [Google Scholar] [CrossRef]
- Liu, R.; Gao, Z.Y.; Li, H.Y.; Liu, X.J.; Lv, Q. Research on Blast Furnace Ingredient Optimization Based on Improved Grey Wolf Optimization Algorithm. Metals 2024, 14, 798. [Google Scholar] [CrossRef]
- Chen, Z.; Jiang, Z.; Gui, W.; Yang, C. A novel device for optical imaging of blast furnace burden surface: Parallel low-light-loss backlight high-temperature industrial endoscope. IEEE Sens. J. 2016, 16, 6703–6717. [Google Scholar] [CrossRef]
- Yi, Z.; Chen, Z.; Jiang, Z.; Gui, W. A novel 3-D high-temperature industrial endoscope with large field depth and wide field. IEEE Trans. Instrum. Meas. 2020, 69, 6530–6543. [Google Scholar] [CrossRef]
- Chen, Z.; Wang, X.; Gui, W.; Zhu, J.; Yang, C.; Jiang, Z. A novel sensing imaging equipment under extremely dim light for blast furnace burden surface: Starlight high-temperature industrial endoscope. IEEE/CAA J. Autom. Sin. 2024, 11, 893–906. [Google Scholar] [CrossRef]
- Sun, S.; Yu, Z.; Zhang, S.; Xiao, W. Future definition and extraction of the blast furnace 3D burden surface based on intelligent algorithms. Appl. Sci. 2022, 12, 12860. [Google Scholar] [CrossRef]
- Shao, S.; Huang, Y.; Shi, H.; Sun, M. Review on simulation and practice of blast furnace burden distribution process. Metall. Res. Technol. 2025, 122, 610. [Google Scholar] [CrossRef]
- Tian, J.; Tanaka, A.; Gao, D.; Liu, Z.; Hou, Q.; Chen, X. Characterization of the Blast Furnace Burden Surface: Experimental Measurement and Roughness Statistics. ISIJ Int. 2023, 63, 1217–1225. [Google Scholar] [CrossRef]
- Xu, T.; Chen, Z.; Jiang, Z.; Huang, J.; Gui, W. A real-time 3D measurement system for the blast furnace burden surface using high-temperature industrial endoscope. Sensors 2020, 20, 869. [Google Scholar] [CrossRef]
- Zhou, D.; Xu, K.; Bai, J.; He, D. On-line detecting the tuyere coke size and temperature distribution of raceway zone in a working blast furnace. Fuel 2022, 316, 123349. [Google Scholar] [CrossRef]
- Duan, Y.; Liu, X.; Liu, R.; Li, X.; Li, H.; Li, H.; Sun, Y.; Zhang, Y.; Lv, Q. A novel anomaly detection and classification algorithm for application in tuyere images of blast furnace. Eng. Appl. Artif. Intell. 2025, 139, 109558. [Google Scholar] [CrossRef]
- Zhao, L.T.; Yang, T.; Yan, R.; Zhao, H.B. Anomaly detection of the blast furnace smelting process using an improved multivariate statistical process control model. Process Saf. Environ. Prot. Trans. Inst. Chem. Eng. Part B 2022, 166, 617–627. [Google Scholar] [CrossRef]
- Guo, K.; Zhang, Y.X.; Zhang, S.; Xiao, W.D. Classification model for blast furnace status based on multi-source information. Eng. Appl. Artif. Intell. 2025, 141, 109728. [Google Scholar] [CrossRef]
- Lee, N.; Shin, M.; Sagingalieva, A.; Tripathi, A.J.; Pinto, K.; Melnikov, A. Predictive control of blast furnace temperature in steelmaking with hybrid depth-infused quantum neural networks. arXiv 2025, arXiv:2504.12389. [Google Scholar] [CrossRef]
- Zhi-cheng, H.; Chuan, W.; Hang, Y.; Ming, Z. Image detail enhancement method based on multi-scale bilateral texture filter. Chin. Opt. 2016, 9, 423–431. [Google Scholar] [CrossRef]
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 10012–10022. [Google Scholar]
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
- Shocher, A.; Dravid, A.; Gandelsman, Y.; Mosseri, I.; Rubinstein, M.; Efros, A.A. Idempotent generative network. arXiv 2023, arXiv:2311.01462. [Google Scholar] [CrossRef]
- Wen, L.; Li, X.; Gao, L. A transfer convolutional neural network for fault diagnosis based on ResNet-50. Neural Comput. Appl. 2020, 32, 6111–6124. [Google Scholar] [CrossRef]
- Alhichri, H.; Alswayed, A.S.; Bazi, Y.; Ammour, N.; Alajlan, N.A. Classification of remote sensing images using EfficientNet-B3 CNN model with attention. IEEE Access 2021, 9, 14078–14094. [Google Scholar] [CrossRef]
- Sun, K.; Xiao, B.; Liu, D.; Wang, J. Deep high-resolution representation learning for human pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–19 June 2019; pp. 5693–5703. [Google Scholar]
- Han, K.; Wang, Y.; Chen, H.; Chen, X.; Guo, J.; Liu, Z.; Tang, Y.; Xiao, A.; Xu, C.; Xu, Y.; et al. A survey on vision transformer. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 87–110. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).