Search Results (34)

Search Parameters:
Keywords = high dimensional stream analysis

29 pages, 5553 KiB  
Article
Data-Driven Multi-Scale Channel-Aligned Transformer for Low-Carbon Autonomous Vessel Operations: Enhancing CO₂ Emission Prediction and Green Autonomous Shipping Efficiency
by Jiahao Ni, Hongjun Tian, Kaijie Zhang, Yihong Xue and Yang Xiong
J. Mar. Sci. Eng. 2025, 13(6), 1143; https://doi.org/10.3390/jmse13061143 - 9 Jun 2025
Viewed by 453
Abstract
The accurate prediction of autonomous vessel CO₂ emissions is critical for achieving IMO 2050 carbon neutrality and optimizing low-carbon maritime operations. Traditional models face limitations in real-time multi-source data analysis and dynamic cross-variable dependency modeling, hindering data-driven decision-making for sustainable autonomous shipping. This study proposes a Multi-scale Channel-aligned Transformer (MCAT) model, integrated with a 5G–satellite–IoT communication architecture, to address these challenges. The MCAT model employs multi-scale token reconstruction and a dual-level attention mechanism, effectively capturing spatiotemporal dependencies in heterogeneous data streams (AIS, sensors, weather) while suppressing high-frequency noise. To enable seamless data collaboration, a hybrid transmission framework combining satellite (Inmarsat/Iridium), 5G URLLC slicing, and industrial Ethernet is designed, achieving ultra-low latency (10 ms) and nanosecond-level synchronization via IEEE 1588v2. Validated on a 22-dimensional real autonomous vessel dataset, MCAT reduces prediction errors by 12.5% in MAE and 24% in MSE compared to state-of-the-art methods, demonstrating superior robustness under noisy scenarios. Furthermore, the proposed architecture supports smart autonomous shipping solutions by providing interpretable emission insights through its dual-level attention mechanism (visualized via attention maps) for route optimization, fuel efficiency enhancement, and compliance with CII regulations. This research bridges AI-driven predictive analytics with green autonomous shipping technologies, offering a scalable framework for digitalized and sustainable maritime operations.
(This article belongs to the Special Issue Sustainable Maritime Transport and Port Intelligence)
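For readers wanting intuition for the multi-scale attention idea, the sketch below is a generic multi-scale self-attention block in PyTorch. It is not the authors' MCAT: the pooling scales, concatenation-based fusion, and all names are illustrative assumptions.

```python
# Hedged sketch: generic multi-scale self-attention over a time series,
# illustrating the idea behind multi-scale Transformer forecasters.
# NOT the authors' MCAT; scales, fusion, and dimensions are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleAttention(nn.Module):
    def __init__(self, d_model=64, n_heads=4, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.attn = nn.ModuleList([
            nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            for _ in scales
        ])
        self.proj = nn.Linear(d_model * len(scales), d_model)

    def forward(self, x):                      # x: (batch, time, d_model)
        outs = []
        for pool, attn in zip(self.scales, self.attn):
            # Coarsen the token sequence by average pooling over `pool` steps.
            z = x if pool == 1 else F.avg_pool1d(x.transpose(1, 2), pool).transpose(1, 2)
            y, _ = attn(z, z, z)               # self-attention at this temporal scale
            # Stretch back to the original length so the scales can be fused.
            y = F.interpolate(y.transpose(1, 2), size=x.size(1)).transpose(1, 2)
            outs.append(y)
        return self.proj(torch.cat(outs, dim=-1))

x = torch.randn(8, 96, 64)   # e.g., 96 time steps of 22 variables embedded into 64 dims
print(MultiScaleAttention()(x).shape)          # torch.Size([8, 96, 64])
```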

28 pages, 10436 KiB  
Article
ParDP: A Parallel Density Peaks-Based Clustering Algorithm
by Libero Nigro and Franco Cicirelli
Mathematics 2025, 13(8), 1285; https://doi.org/10.3390/math13081285 - 14 Apr 2025
Viewed by 329
Abstract
This paper proposes ParDP, an algorithm and concrete tool for unsupervised clustering, which belongs to the class of density peaks-based clustering methods. Such methods rely on the observation that cluster representative points (centroids) are points of higher local density surrounded by points of lesser density; candidate centroids, moreover, must be far from each other. A key factor of ParDP is adopting a k-Nearest Neighbors (kNN) technique for estimating the density of points. Complete clustering depends on densities and distances among points. ParDP uses principal component analysis to cope with high-dimensional data points. The current implementation relies on Java parallel streams and the built-in lock-free fork/join mechanism, enabling the exploitation of the computing power of commodity multi/many-core machines. This paper demonstrates ParDP's clustering capabilities by applying it to several benchmark and real-world datasets. ParDP's operation can be directed either to estimate the number of clusters in a dataset or to finalize clustering with an assigned number of clusters. Different internal and external measures can be used to assess the accuracy of a resultant clustering solution.
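As a rough illustration of the density peaks idea with kNN density estimation (Rodriguez & Laio, 2014) that ParDP builds on, here is a serial NumPy/scikit-learn sketch; ParDP itself is a parallel Java implementation with further refinements, so treat every detail below as an assumption.

```python
# Hedged sketch of density peaks clustering with a kNN density estimate.
# Serial and O(n^2); ParDP's parallel Java implementation differs.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def density_peaks(X, k=10, n_clusters=3):
    n = len(X)
    knn_d, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    rho = 1.0 / (knn_d[:, 1:].mean(axis=1) + 1e-12)    # kNN density (column 0 is self)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    delta = np.zeros(n)                  # distance to nearest higher-density point
    parent = np.full(n, -1)
    for i in range(n):
        higher = np.where(rho > rho[i])[0]
        if len(higher) == 0:
            delta[i] = D[i].max()        # global density peak
        else:
            j = higher[np.argmin(D[i, higher])]
            delta[i], parent[i] = D[i, j], j
    labels = np.full(n, -1)
    centroids = np.argsort(rho * delta)[-n_clusters:]  # high rho AND high delta
    labels[centroids] = np.arange(n_clusters)
    for i in np.argsort(-rho):             # assign in decreasing-density order, so
        if labels[i] == -1:                # each parent is already labeled (assumes
            labels[i] = labels[parent[i]]  # the global peak was picked as centroid)
    return labels
```

For high-dimensional streams one would, as the abstract notes, project with PCA before computing distances.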

19 pages, 23056 KiB  
Article
From Hazard Maps to Action Plans: Comprehensive Flood Risk Mitigation in the Susurluk Basin
by Ibrahim Ucar, Masun Kapcak, Osman Sonmez, Emrah Dogan, Burak Turan, Mustafa Dal, Satuk Bugra Findik, Mesut Yilmaz and Afire Sever
Water 2025, 17(6), 860; https://doi.org/10.3390/w17060860 - 17 Mar 2025
Cited by 2 | Viewed by 596
Abstract
Floods pose significant risks worldwide, impacting lives, infrastructure, and economies. The Susurluk basin, covering 24,319 km² in Türkiye, is highly vulnerable to flooding. This study updates the flood management plan for the basin, integrating hydrological modeling, GIS-based flood mapping, and early warning system evaluations in alignment with the EU Flood Directive. A total of 503 hydrodynamic models (226 one-dimensional and 277 two-dimensional) were developed, analyzing 2116 km of stream length. The evaluation found the capacities of only 33 streams to be sufficient. Flood hazard and risk maps for the Q₅₀, Q₁₀₀, Q₅₀₀, and Q₁₀₀₀ return periods identified the remaining 470 high-risk locations as requiring urgent intervention. Economic risk assessments revealed significant exposure of critical infrastructure, especially in urban areas with populations over 100,000. Furthermore, the study introduces a prioritization framework for intervention that balances socioeconomic costs and environmental impacts. Economic damage assessments estimate potential losses in critical infrastructure, including residential areas, industrial zones, and transportation networks. The findings highlight the importance of proactive flood risk mitigation strategies, particularly in high-risk urban centers. Overall, this study provides a data-driven, replicable model for flood risk management, emphasizing early warning systems, spatial analysis, and structural/non-structural mitigation measures. The insights gained from this research can guide policymakers and urban planners in developing adaptive, long-term flood management strategies for flood-prone regions.

22 pages, 7837 KiB  
Article
Online Monitoring and Fault Diagnosis for High-Dimensional Stream with Application in Electron Probe X-Ray Microanalysis
by Tao Wang, Yunfei Guo, Fubo Zhu and Zhonghua Li
Entropy 2025, 27(3), 297; https://doi.org/10.3390/e27030297 - 13 Mar 2025
Viewed by 674
Abstract
This study introduces an innovative two-stage framework for monitoring and diagnosing high-dimensional data streams with sparse changes. The first stage utilizes an exponentially weighted moving average (EWMA) statistic for online monitoring, identifying change points through extreme value theory and multiple hypothesis testing. The second stage involves a fault diagnosis mechanism that accurately pinpoints abnormal components upon detecting anomalies. Through extensive numerical simulations and electron probe X-ray microanalysis applications, the method demonstrates exceptional performance. It rapidly detects anomalies, often within one or two sampling intervals post-change, achieves near 100% detection power, and maintains type-I error rates around the nominal 5%. The fault diagnosis mechanism shows a 99.1% accuracy in identifying components in 200-dimensional anomaly streams, surpassing principal component analysis (PCA)-based methods by 28.0% in precision and controlling the false discovery rate within 3%. Case analyses confirm the method's effectiveness in monitoring and identifying abnormal data, aligning with previous studies. These findings represent significant progress in managing high-dimensional sparse-change data streams over existing methods.
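A minimal sketch of the first (monitoring) and second (diagnosis) stages, assuming standardized in-control streams; the control limit and the diagnosis rule below are illustrative, not the paper's exact statistics.

```python
# Hedged sketch: per-stream EWMA monitoring with a max-type alarm and a
# simple diagnosis step that flags the components exceeding the limit.
import numpy as np

def ewma_monitor(stream, lam=0.1, limit=4.5):
    """stream: (T, p) observations, assumed N(0, 1) per coordinate in control."""
    T, p = stream.shape
    z = np.zeros(p)
    for t in range(T):
        z = (1 - lam) * z + lam * stream[t]
        # Exact EWMA standard deviation at time t for unit-variance input.
        sd = np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (t + 1))))
        stat = np.abs(z) / sd
        if stat.max() > limit:                       # stage 1: change detected
            return t, np.where(stat > limit)[0]      # stage 2: suspect components
    return None, np.array([], dtype=int)

rng = np.random.default_rng(0)
data = rng.standard_normal((200, 50))
data[100:, [3, 17]] += 2.0                 # sparse mean shift in 2 of 50 streams
print(ewma_monitor(data))                  # alarms within a few samples of t = 100
```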

26 pages, 5763 KiB  
Article
Incremental Pyraformer–Deep Canonical Correlation Analysis: A Novel Framework for Effective Fault Detection in Dynamic Nonlinear Processes
by Yucheng Ding, Yingfeng Zhang, Jianfeng Huang and Shitong Peng
Algorithms 2025, 18(3), 130; https://doi.org/10.3390/a18030130 - 25 Feb 2025
Viewed by 740
Abstract
Smart manufacturing systems aim to enhance the efficiency, adaptability, and reliability of industrial operations through advanced data-driven approaches. Achieving these objectives hinges on accurate fault detection and timely maintenance, especially in highly dynamic industrial environments. However, capturing nonlinear and temporal dependencies in dynamic nonlinear industrial processes poses significant challenges for traditional data-driven fault detection methods. To address these limitations, this study presents an Incremental Pyraformer–Deep Canonical Correlation Analysis (DCCA) framework that integrates the Pyramidal Attention Mechanism of the Pyraformer with the Broad Learning System (BLS) for incremental learning on a DCCA basis. The Pyraformer model effectively captures multi-scale temporal features, while the BLS-based incremental learning mechanism adapts to evolving data without full retraining. The proposed framework enhances both spatial and temporal representation, enabling robust fault detection in high-dimensional and continuously changing industrial environments. Experimental validation on the Tennessee Eastman (TE) process, Continuous Stirred-Tank Reactor (CSTR) system, and injection molding process demonstrated superior detection performance. In the TE scenario, our framework achieved a 100% Fault Detection Rate with a 4.35% False Alarm Rate, surpassing DCCA variants. Similarly, in the CSTR case, the approach reached a perfect 100% Fault Detection Rate (FDR) and 3.48% False Alarm Rate (FAR), while in the injection molding process, it delivered a 97.02% FDR with 0% FAR. The findings underline the framework's effectiveness in handling complex and dynamic data streams, thereby providing a powerful approach for real-time monitoring and proactive maintenance.
(This article belongs to the Special Issue Optimization Methods for Advanced Manufacturing)
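For context, the classical linear scheme that DCCA variants deepen can be sketched in a few lines: fit CCA between two in-control sensor groups and alarm when the canonical-score residual grows. The grouping, injected fault, and threshold below are illustrative assumptions.

```python
# Hedged sketch: linear CCA-based fault detection (the idea DCCA deepens).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
s = rng.standard_normal((600, 2))                       # shared latent process state
X = s @ rng.standard_normal((2, 4)) + 0.1 * rng.standard_normal((600, 4))  # process sensors
Y = s @ rng.standard_normal((2, 3)) + 0.1 * rng.standard_normal((600, 3))  # quality sensors

cca = CCA(n_components=2).fit(X[:300], Y[:300])         # train on in-control data
Xc, Yc = cca.transform(X[:300], Y[:300])
limit = np.quantile(np.linalg.norm(Xc - Yc, axis=1), 0.99)  # empirical control limit

X[300:, 0] += 3.0                                       # inject a sensor bias fault
Xf, Yf = cca.transform(X, Y)
alarm = np.linalg.norm(Xf - Yf, axis=1) > limit         # correlation structure breaks
print("post-fault alarm rate:", alarm[300:].mean())
```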

15 pages, 1635 KiB  
Article
Optimizing IoT Video Data: Dimensionality Reduction for Efficient Deep Learning on Edge Computing
by David Ortiz-Perez, Pablo Ruiz-Ponce, David Mulero-Pérez, Manuel Benavent-Lledo, Javier Rodriguez-Juan, Hugo Hernandez-Lopez, Anatoli Iarovikov, Srdjan Krco, Daliborka Nedic, Dejan Vukobratovic and Jose Garcia-Rodriguez
Future Internet 2025, 17(2), 53; https://doi.org/10.3390/fi17020053 - 21 Jan 2025
Viewed by 997
Abstract
The rapid loss of biodiversity significantly impacts birds' environments and behaviors, highlighting the importance of analyzing bird behavior for ecological insights. With the growing adoption of Machine Learning (ML) algorithms in the Internet of Things (IoT) domain, edge computing has become essential to ensure data privacy and enable real-time predictions by efficiently processing high-dimensional data such as video streams. This paper introduces a set of dimensionality reduction techniques tailored for video sequences, based on cutting-edge methods for this data representation. These methods drastically compress video data, reducing bandwidth and storage requirements while enabling the creation of compact ML models with faster inference speeds. Comprehensive experiments on bird behavior classification in rural environments demonstrate the effectiveness of the proposed techniques. The experiments incorporate state-of-the-art deep learning techniques, including pre-trained video vision models, Autoencoders, and single-frame feature extraction. These methods outperformed the baseline, achieving up to a 6000-fold reduction in data size while reaching a state-of-the-art classification accuracy of 60.7% on the Visual WetlandBirds Dataset. These findings underline the potential of using dimensionality reduction to enhance the scalability and efficiency of bird behavior analysis.
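As a toy illustration of the pipeline's core trade-off, the sketch below compresses frames with PCA. PCA here stands in for the paper's Autoencoder and feature-extraction methods, and the shapes and compression factor are illustrative, not the reported 6000-fold figure.

```python
# Hedged sketch: frame-level dimensionality reduction for an edge pipeline.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
video = rng.random((120, 64, 64))            # stand-in for 120 grayscale frames
flat = video.reshape(len(video), -1)         # (120, 4096) raw pixels per frame

pca = PCA(n_components=32).fit(flat)         # learn a compact frame basis offline
codes = pca.transform(flat)                  # (120, 32): what the device stores/ships
print("per-frame compression:", flat.shape[1] // codes.shape[1])   # 128x

recon = pca.inverse_transform(codes)         # a classifier can consume `codes` directly
print("reconstruction MSE:", float(((recon - flat) ** 2).mean()))
```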

18 pages, 6210 KiB  
Article
Research on Small Sample Rolling Bearing Fault Diagnosis Method Based on Mixed Signal Processing Technology
by Peibo Yu, Jianjie Zhang, Baobao Zhang, Jianhui Cao and Yihang Peng
Symmetry 2024, 16(9), 1178; https://doi.org/10.3390/sym16091178 - 9 Sep 2024
Cited by 6 | Viewed by 1439
Abstract
The diagnosis of bearing faults is a crucial aspect of ensuring the optimal functioning of mechanical equipment. In practice, however, small samples and variable operating conditions can lead to poor generalization, reduced accuracy, and overfitting. To address this challenge, this study proposes a bearing fault diagnosis method based on a symmetric two-stream convolutional neural network (CNN), employing hybrid signal processing techniques to address the issue of limited data. Initially, the data are transformed into time–frequency maps through the short-time Fourier transform (STFT) and the simultaneous compressed wavelet transform (SCWT). Subsequently, two sets of one-dimensional vectors are generated by reconstructing the high-resolution features of the faulty samples using the symmetric parallel CNN. Feature splicing and fusion are then performed to generate bearing fault diagnosis information and assist fault classification. The experimental results demonstrate that the proposed mixed-signal processing method is effective on small-sample datasets, and verify the feasibility and generality of the symmetric parallel CNN-support vector machine (SVM) model for bearing fault diagnosis under small-sample conditions.
(This article belongs to the Section Engineering and Materials)
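The STFT branch of the two-stream input can be illustrated with a short SciPy sketch; the signal, sampling rate, and window length are assumptions, and the SCWT branch and the CNN itself are omitted.

```python
# Hedged sketch: vibration signal -> STFT time-frequency map (one CNN stream).
import numpy as np
from scipy.signal import stft

fs = 12_000                                  # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
# Synthetic faulty-bearing-like signal: shaft tone plus periodic impact bursts.
bursts = np.sin(2 * np.pi * 3000 * t) * (np.sin(2 * np.pi * 107 * t) > 0.99)
x = np.sin(2 * np.pi * 60 * t) + bursts + 0.1 * np.random.default_rng(0).standard_normal(t.size)

f, tt, Z = stft(x, fs=fs, nperseg=256)       # complex short-time spectrum
tf_map = np.abs(Z)                           # magnitude map, the CNN's input image
print(tf_map.shape)                          # (129, ...): freq bins x time frames
```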

20 pages, 7336 KiB  
Article
Spectral Features Analysis for Print Quality Prediction in Additive Manufacturing: An Acoustics-Based Approach
by Michael Olowe, Michael Ogunsanya, Brian Best, Yousef Hanif, Saurabh Bajaj, Varalakshmi Vakkalagadda, Olukayode Fatoki and Salil Desai
Sensors 2024, 24(15), 4864; https://doi.org/10.3390/s24154864 - 26 Jul 2024
Cited by 9 | Viewed by 1663
Abstract
Quality prediction in additive manufacturing (AM) processes is crucial, particularly in high-risk manufacturing sectors like aerospace, biomedicals, and automotive. Acoustic sensors have emerged as valuable tools for detecting variations in print patterns by analyzing signatures and extracting distinctive features. This study focuses on the collection, preprocessing, and analysis of acoustic data streams from a Fused Deposition Modeling (FDM) 3D-printed sample cube (10 mm × 10 mm × 5 mm). Time- and frequency-domain features were extracted at 10-s intervals at varying layer thicknesses. The audio samples were preprocessed using the Harmonic–Percussive Source Separation (HPSS) method, and the analysis of time and frequency features was performed using the Librosa module. Feature importance analysis was conducted, and machine learning (ML) prediction was implemented using eight different classifier algorithms (K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Gaussian Naive Bayes (GNB), Decision Trees (DT), Logistic Regression (LR), Random Forest (RF), Extreme Gradient Boosting (XGB), and Light Gradient Boosting Machine (LightGBM)) for the classification of print quality based on the labeled datasets. 3D-printed samples with varying layer thicknesses, representing two print quality levels, were used to generate audio samples. The extracted spectral features from these audio samples served as input variables for the supervised ML algorithms to predict print quality. The investigation revealed that the mean of the spectral flatness, spectral centroid, power spectral density, and RMS energy were the most critical acoustic features. Prediction metrics, including accuracy scores, F1 scores, recall, precision, and ROC/AUC, were utilized to evaluate the models. The extreme gradient boosting algorithm stood out as the top model, attaining a prediction accuracy of 91.3%, precision of 88.8%, recall of 92.9%, F1 score of 90.8%, and AUC of 96.3%. This research lays the foundation for acoustic-based quality prediction and control of 3D-printed parts using Fused Deposition Modeling and can be extended to other additive manufacturing techniques.
(This article belongs to the Collection Sensors and Sensing Technology for Industry 4.0)
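The HPSS-plus-spectral-features step maps directly onto Librosa calls; a minimal sketch follows, with a placeholder file path and default parameters rather than the authors' exact settings.

```python
# Hedged sketch: HPSS preprocessing and a few of the study's key spectral
# features via Librosa. "print_audio_clip.wav" is a hypothetical 10-s clip.
import librosa

y, sr = librosa.load("print_audio_clip.wav", sr=None)
y_harm, y_perc = librosa.effects.hpss(y)             # Harmonic-Percussive separation

row = {
    "spectral_flatness": float(librosa.feature.spectral_flatness(y=y_harm).mean()),
    "spectral_centroid": float(librosa.feature.spectral_centroid(y=y_harm, sr=sr).mean()),
    "rms_energy": float(librosa.feature.rms(y=y_harm).mean()),
}
print(row)   # one labeled row of the tabular dataset fed to KNN/SVM/XGBoost/...
```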

35 pages, 17176 KiB  
Article
Simulation of Retrospective Morphological Channel Adjustments Using High-Resolution Differential Digital Elevation Models versus Predicted Sediment Delivery and Stream Power Variations
by Carmelo Conesa-García, Alberto Martínez-Salvador, Carlos Puig-Mengual, Francisco Martínez-Capel and Pedro Pérez-Cutillas
Water 2023, 15(15), 2697; https://doi.org/10.3390/w15152697 - 26 Jul 2023
Viewed by 1689
Abstract
This work proposes a methodological approach applied to ephemeral gravel-bed streams to verify the change in the magnitude and frequency of hydrological events affecting the morphological dynamics and sediment budget in this type of channel. For the case study, the Azohía Rambla, located in southeastern Spain, was chosen, emphasizing the research on two reference riverbed sections (RCRs): an upper one, with a predominance of erosion, and a middle one, where processes of incision, transport, and deposition were involved. First, this approach focuses on relationships between peak discharges and sediment budgets during the period 2018–2022. For this purpose, water level measurements from pressure sensors, a one-dimensional hydrodynamic model, and findings from comparative analyses of high-resolution differential digital elevation models (HRDEMs of Difference, HRDoD) based on SfM-MVS and LiDAR datasets were used. In a second phase, the GeoWEPP model was applied to the period 1996–2022 in order to simulate runoff and sediment yield at the event scale for the watersheds draining into both RCRs. During the calibration phase, a sensitivity analysis was carried out to detect the most influential parameters in the model and confirm its capacity to simulate peak flow and sediment delivery in the area described above. Values of NS (Nash–Sutcliffe efficiency) and PBIAS (percent bias) equal to 0.86 and 7.81%, respectively, were found in the calibration period, while these indices were 0.81 and −4.1% in the validation period. Finally, different event class patterns (ECPs) were established for the monitoring period (2018–2022), according to flow stage and morphological channel adjustments (overtopping, bankfull and sub-bankfull, and half-sub-bankfull), and then retrospectively extrapolated to stages of the prior simulated period (1996–2018) from their typical sequences (PECPs). The results revealed a significant increase in the number of events and PECPs leading to lower bed incision rates and higher vertical accretion, which denotes a progressive increase in bed armoring and bank erosion processes.
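The two calibration metrics quoted (NS and PBIAS) are standard; for reference, a minimal implementation under the usual definitions:

```python
# Nash-Sutcliffe efficiency and percent bias (standard definitions).
import numpy as np

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)   # positive = underestimation

obs = np.array([12.0, 30.0, 55.0, 18.0, 9.0])        # illustrative peak flows (m3/s)
sim = np.array([10.5, 33.0, 50.0, 20.0, 8.0])
print(round(nash_sutcliffe(obs, sim), 3), round(pbias(obs, sim), 2))
```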

19 pages, 3296 KiB  
Article
Deep Reinforcement Learning-Based Video Offloading and Resource Allocation in NOMA-Enabled Networks
by Siyu Gao, Yuchen Wang, Nan Feng, Zhongcheng Wei and Jijun Zhao
Future Internet 2023, 15(5), 184; https://doi.org/10.3390/fi15050184 - 18 May 2023
Cited by 10 | Viewed by 2269
Abstract
With the proliferation of video surveillance system deployment and related applications, real-time video analysis is critical to achieving intelligent monitoring, autonomous driving, etc. Analyzing video streams with high accuracy and low latency through traditional cloud computing is a non-trivial problem. In this paper, we propose a non-orthogonal multiple access (NOMA)-based edge real-time video analysis framework with one edge server (ES) and multiple user equipments (UEs). A cost minimization problem composed of delay, energy, and accuracy is formulated to improve the quality of experience (QoE) of the UEs. To solve this problem efficiently, we propose a joint video frame resolution scaling, task offloading, and resource allocation algorithm based on the Deep Q-Learning Network (JVFRS-TO-RA-DQN), which effectively overcomes the sparsity of the single-layer reward function and accelerates training convergence. JVFRS-TO-RA-DQN consists of two DQN networks that reduce the curse of dimensionality: one selects the offloading and resource allocation action, the other the resolution scaling action. The experimental results show that JVFRS-TO-RA-DQN effectively reduces the cost of edge computing and converges better than other baseline schemes.
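The two-network structure that tames the action-space dimensionality can be sketched briefly; the network sizes, state layout, and training loop are omitted or assumed, so this is an outline of the idea, not the paper's JVFRS-TO-RA-DQN.

```python
# Hedged sketch: two small Q-networks, one for offloading/resource allocation
# and one for resolution scaling, with epsilon-greedy action selection.
import torch
import torch.nn as nn

def q_net(state_dim, n_actions):
    return nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

STATE_DIM = 10                                  # e.g., channel gains, queues, budgets
offload_q = q_net(STATE_DIM, n_actions=8)       # joint offloading + resource action
resolution_q = q_net(STATE_DIM, n_actions=4)    # frame-resolution scaling action

def act(state, eps=0.1):
    """Epsilon-greedy pair of actions from the two Q-networks."""
    if torch.rand(1).item() < eps:
        return torch.randint(8, (1,)).item(), torch.randint(4, (1,)).item()
    with torch.no_grad():
        return offload_q(state).argmax().item(), resolution_q(state).argmax().item()

print(act(torch.randn(STATE_DIM)))              # (offload_action, resolution_action)
```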

29 pages, 4534 KiB  
Article
Geospatial Modeling Based-Multi-Criteria Decision-Making for Flash Flood Susceptibility Zonation in an Arid Area
by Mohamed Shawky and Quazi K. Hassan
Remote Sens. 2023, 15(10), 2561; https://doi.org/10.3390/rs15102561 - 14 May 2023
Cited by 18 | Viewed by 3985
Abstract
Identifying areas susceptible to flash flood hazards is essential to mitigating their negative impacts, particularly in arid regions. For example, in southeastern Sinai, the Egyptian government seeks to develop its coastal areas along the Gulf of Aqaba to maximize its national economy while preserving sustainable development standards. The current study aims to map and predict flash flood prone areas utilizing a spatial analytic hierarchy process (AHP) that integrates GIS capabilities, remote sensing datasets, the NASA Giovanni web tool application, and principal component analysis (PCA). Nineteen flash flood triggering parameters were initially considered for developing the susceptibility model, identified through a detailed literature review and our experience in flash flood studies. Next, the PCA algorithm was utilized to reduce the subjective nature of the researchers' judgments in selecting flash flood triggering factors. By reducing the dimensionality of the data, we eliminated ten explanatory variables, and only nine relatively less correlated factors were retained, which prevented the creation of an ill-structured model. Finally, the AHP method was utilized to determine the relative weights of the nine spatial factors based on their significance in triggering flash floods. The resulting weights were as follows: rainfall (RF = 0.310), slope (S = 0.221), drainage density (DD = 0.158), geology (G = 0.107), height above nearest drainage network (HAND = 0.074), landforms (LF = 0.051), Melton ruggedness number (MRN = 0.035), plan curvature (PnC = 0.022), and stream power index (SPI = 0.022). The current research proved that AHP, among the most dependable methods for multi-criteria decision-making (MCDM), can effectively classify the degree of flash flood risk in ungauged arid areas. The study found that 59.2% of the area assessed was at very low and low risk of a flash flood, 21% was at very high and high risk, and 19.8% was at moderate risk. Using the area under the receiver operating characteristic curve (AUC ROC) as a statistical evaluation metric, the GIS-based AHP model developed demonstrated excellent predictive accuracy, achieving a score of 91.6%.
(This article belongs to the Special Issue Remote Sensing of Floods: Progress, Challenges and Opportunities)
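The AHP weighting step is standard enough to sketch: weights come from the principal eigenvector of a pairwise-comparison matrix, checked with Saaty's consistency ratio. The 3×3 matrix below is a toy example, not the study's 9-factor judgments.

```python
# Hedged sketch: AHP priority weights + consistency ratio (Saaty's method).
import numpy as np

A = np.array([[1.0, 2.0, 4.0],      # toy pairwise judgments:
              [0.5, 1.0, 3.0],      # rainfall vs slope vs drainage density
              [0.25, 1 / 3, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                        # normalized priority weights

n = len(A)
ci = (vals.real[k] - n) / (n - 1)   # consistency index
ri = {3: 0.58, 4: 0.90, 9: 1.45}[n] # Saaty's random index for matrix size n
print("weights:", w.round(3), "CR:", round(ci / ri, 3))   # CR < 0.1 is acceptable
```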

49 pages, 4168 KiB  
Review
Heat Transfer Analysis Using Thermofluid Network Models for Industrial Biomass and Utility Scale Coal-Fired Boilers
by Pieter Rousseau, Ryno Laubscher and Brad Travis Rawlins
Energies 2023, 16(4), 1741; https://doi.org/10.3390/en16041741 - 9 Feb 2023
Cited by 6 | Viewed by 3440
Abstract
Integrated whole-boiler process models are useful in the design of biomass and coal-fired boilers, and they can also be used to analyse different scenarios such as low-load operation and alternate fuel firing. Whereas CFD models are typically applied to analyse the detailed heat transfer phenomena in furnaces, analysis of the integrated whole-boiler performance requires one-dimensional thermofluid network models. These incorporate zero-dimensional furnace models combined with the solution of the fundamental mass, energy, and momentum balance equations for the different heat exchangers and fluid streams. This approach is not new, and there is a large amount of information available in textbooks and technical papers. However, the information is fragmented and incomplete and therefore difficult to follow and apply. The aim of this review paper is therefore to: (i) provide a review of recent literature to show how the different approaches to boiler modelling have been applied; (ii) provide a review and clear description of the thermofluid network modelling methodology, including the simplifying assumptions and their implications; and (iii) demonstrate the methodology by applying it to two case study boilers with different geometries, firing systems, and fuels at various loads, and comparing the results to site measurements, which highlight important aspects of the methodology. The model results compare well with values obtained from site measurements and detailed CFD models for full-load and part-load operation. The results show the importance of utilising the high particle load model for the effective emissivity and absorptivity of the flue gas and particle suspension rather than the standard model, especially in the case of a high-ash fuel. They also show that the projected method provides better results than the direct method for the furnace water wall heat transfer.
(This article belongs to the Special Issue Heat Transfer Analysis and Modeling in Furnaces and Boilers)
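To give a flavor of the per-component balances such a network solves, here is an effectiveness-NTU energy balance for one counter-flow gas-to-water surface; all property values are illustrative, and the paper's methodology includes much more (zonal furnace models, radiative properties, momentum balances).

```python
# Hedged sketch: effectiveness-NTU energy balance for one heat exchanger
# element of a thermofluid network (counter-flow arrangement).
import math

def counterflow_ntu(mdot_h, cp_h, Th_in, mdot_c, cp_c, Tc_in, UA):
    Ch, Cc = mdot_h * cp_h, mdot_c * cp_c         # capacity rates (W/K)
    Cmin, Cmax = min(Ch, Cc), max(Ch, Cc)
    ntu, cr = UA / Cmin, Cmin / Cmax
    if abs(cr - 1.0) < 1e-9:
        eff = ntu / (1.0 + ntu)
    else:
        eff = (1 - math.exp(-ntu * (1 - cr))) / (1 - cr * math.exp(-ntu * (1 - cr)))
    Q = eff * Cmin * (Th_in - Tc_in)              # heat duty (W)
    return Q, Th_in - Q / Ch, Tc_in + Q / Cc      # duty and outlet temperatures

# Toy numbers: flue gas stream heating a pressurized water stream.
Q, Th_out, Tc_out = counterflow_ntu(50, 1100, 900, 40, 4200, 200, UA=60_000)
print(f"{Q / 1e6:.1f} MW, gas out {Th_out:.0f} C, water out {Tc_out:.0f} C")
```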

25 pages, 8317 KiB  
Article
Computational Design Analysis of a Hydrokinetic Horizontal Parallel Stream Direct Drive Counter-Rotating Darrieus Turbine System: A Phase One Design Analysis Study
by John M. Crooks, Rodward L. Hewlin and Wesley B. Williams
Energies 2022, 15(23), 8942; https://doi.org/10.3390/en15238942 - 26 Nov 2022
Cited by 10 | Viewed by 2802
Abstract
This paper introduces a phase one computational design analysis study of a hydrokinetic horizontal parallel stream direct-drive (no gearbox) counter-rotating Darrieus turbine system. This system consists of two Darrieus rotors that are arranged in parallel and horizontal to the water stream and operate in counter-rotation due to the incoming flow. One of the rotors directly drives an armature coil rotor and the other a permanent magnet generator. Two-dimensional (2-D) and three-dimensional (3-D) computational fluid dynamics (CFD) simulations were conducted to assess the hydrokinetic performance of the design. Owing to the high computational cost and time of the 3-D analysis, the simulation setup was reduced to a 2-D analysis. Although useful information was obtained from the 3-D simulations, the output performance could be assessed with the 2-D simulations without compromising the integrity of the turbine output results. A scaled experimental design prototype was developed for static (non-movement of the rotors with dynamic fluid flow) particle image velocimetry (PIV) studies. The PIV studies were used as a benchmark for validating and verifying the CFD simulations. This paper outlines the prototype development, the PIV experimental setup and results, the computational simulation setup and results, as well as recommendations for future work that could potentially improve the overall performance of the proposed design.
(This article belongs to the Section A3: Wind, Wave and Tidal Energy)

23 pages, 9463 KiB  
Article
Flame Anchoring of an H₂/O₂ Non-Premixed Flame with O₂ Transcritical Injection
by Eugenio Giacomazzi, Donato Cecere and Nunzio Arcidiacono
Aerospace 2022, 9(11), 707; https://doi.org/10.3390/aerospace9110707 - 11 Nov 2022
Cited by 2 | Viewed by 2329
Abstract
The article is devoted to the analysis of the flame anchoring mechanism in the test case MASCOTTE C-60 RCM2 on supercritical hydrogen/oxygen combustion at 60 bar, with transcritical (liquid) injection of oxygen. The case is simulated by means of the in-house parallel code HeaRT in the three-dimensional LES framework. The cubic Peng–Robinson equation of state in its improved translated volume formulation is assumed. Diffusive mechanisms and transport properties are accurately modeled. A finite-rate detailed scheme involving the main radicals, already validated for high-pressure H₂/O₂ combustion, is adopted. The flow is analysed in terms of temperature, hydrogen and oxygen instantaneous spatial distributions, evidencing the effects of the vortex shedding from the edges of the hydrogen injector and of the separation of the oxygen stream in the divergent section of its tapered injector on the flame anchoring and topology. Combustion conditions are characterised by looking at the equivalence ratio and compressibility factor distributions.
(This article belongs to the Special Issue Large-Eddy Simulation Applications of Combustion Systems)
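The compressibility factor mentioned at the end comes from the cubic equation of state; a sketch of the untranslated Peng–Robinson form follows (the paper uses an improved volume-translated variant), with oxygen properties near the 60 bar operating point.

```python
# Hedged sketch: compressibility factor Z from the (untranslated) Peng-Robinson
# cubic equation of state. The paper uses an improved volume-translated form.
import numpy as np

R = 8.314462618                       # J/(mol K)

def pr_z(T, P, Tc, Pc, omega):
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    alpha = (1 + kappa * (1 - np.sqrt(T / Tc))) ** 2
    a = 0.45724 * R ** 2 * Tc ** 2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A, B = a * P / (R * T) ** 2, b * P / (R * T)
    # Z^3 - (1 - B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    roots = np.roots([1, -(1 - B), A - 3 * B ** 2 - 2 * B, -(A * B - B ** 2 - B ** 3)])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real.max(), real.min()     # vapor-like and liquid-like roots

# Oxygen (Tc = 154.6 K, Pc = 50.4 bar, omega = 0.022) at 120 K and 60 bar:
print(pr_z(T=120.0, P=60e5, Tc=154.6, Pc=50.4e5, omega=0.022))
```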

22 pages, 5452 KiB  
Article
Remote Sensing Methodology for Roughness Estimation in Ungauged Streams for Different Hydraulic/Hydrodynamic Modeling Approaches
by George Papaioannou, Vassiliki Markogianni, Athanasios Loukas and Elias Dimitriou
Water 2022, 14(7), 1076; https://doi.org/10.3390/w14071076 - 29 Mar 2022
Cited by 9 | Viewed by 4347
Abstract
This study investigates the generation of spatially distributed roughness coefficient maps based on image analysis and the extent to which those roughness coefficient values affect flood inundation modeling using different hydraulic/hydrodynamic modeling approaches in ungauged streams. Unmanned Aerial Vehicle (UAV) images were used for the generation of a high-resolution orthophoto mosaic (1.34 cm/px) and a Digital Elevation Model (DEM). Among various pixel-based and object-based image analyses (OBIA), the Grey-Level Co-occurrence Matrix (GLCM) was eventually selected to examine several texture parameters. The combination of local entropy values (OBIA method) with the Maximum Likelihood Classifier (MLC; pixel-based analysis) was highlighted as a satisfactory approach (65% accuracy) to determine dominant grain classes along a stream with inhomogeneous bed composition. Spatially distributed roughness coefficient maps were generated based on the riverbed image analysis (grain size classification), the size-frequency distributions of river bed materials derived from field work (grid sampling), detailed land use data, and several empirical formulas used for the estimation of Manning's n values. One-dimensional (1D), two-dimensional (2D), and coupled (1D/2D) hydraulic modeling approaches were used for flood inundation modeling under specific Manning's n roughness coefficient map scenarios. The validation of the simulated flooded area was accomplished using historical flood extent data, the Critical Success Index (CSI), and CSI penalization. The methodology was applied and demonstrated at the ungauged Xerias stream reach, Greece, and indicated that it might be applied to other Mediterranean streams with similar characteristics and flow conditions.
(This article belongs to the Special Issue Application of Smart Technologies in Water Resources Management)
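The GLCM texture step translates directly into a few scikit-image calls; below, entropy is computed from the normalized co-occurrence matrix of a stand-in patch, with all parameters illustrative.

```python
# Hedged sketch: GLCM texture entropy on an image patch (OBIA texture measure).
import numpy as np
from skimage.feature import graycomatrix

rng = np.random.default_rng(0)
patch = rng.integers(0, 64, size=(128, 128), dtype=np.uint8)   # stand-in riverbed patch

glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=64, symmetric=True, normed=True)
p = glcm[..., 0, :].mean(axis=-1)          # average the two angular matrices
p_nz = p[p > 0]
entropy = -np.sum(p_nz * np.log2(p_nz))    # co-occurrence (texture) entropy
print(round(float(entropy), 2))            # coarser texture -> higher entropy
```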
