Search Results (38)

Search Parameters:
Keywords = self-cleaning filter

11 pages, 3678 KiB  
Article
Plug-and-Play Self-Supervised Denoising for Pulmonary Perfusion MRI
by Changyu Sun, Yu Wang, Cody Thornburgh, Ai-Ling Lin, Kun Qing, John P. Mugler and Talissa A. Altes
Bioengineering 2025, 12(7), 724; https://doi.org/10.3390/bioengineering12070724 - 1 Jul 2025
Viewed by 444
Abstract
Pulmonary dynamic contrast-enhanced (DCE) MRI is clinically useful for assessing pulmonary perfusion, but its signal-to-noise ratio (SNR) is limited. A self-supervised learning network-based plug-and-play (PnP) denoising model was developed to improve the image quality of pulmonary perfusion MRI. A dataset of patients with suspected pulmonary diseases was used. An asymmetric pixel-shuffle downsampling blind-spot network (AP-BSN) was trained on two-dimensional background-subtracted perfusion images without clean ground truth. The AP-BSN was incorporated into a PnP model (PnP-BSN) to balance noise control and image fidelity. Model performance was evaluated by SNR, sharpness, and overall image quality rated by two radiologists. The fractal dimension and k-means segmentation of the pulmonary perfusion images were calculated for comparing denoising performance. The model was trained on 29 patients and tested on 8 patients. The performance of PnP-BSN was compared to that of a denoising convolutional neural network (DnCNN) and a Gaussian filter. PnP-BSN showed the highest reader scores for SNR, sharpness, and overall image quality. The expert scoring results for DnCNN, Gaussian, and PnP-BSN were 2.25 ± 0.65, 2.44 ± 0.73, and 3.56 ± 0.73 for SNR; 2.62 ± 0.52, 2.62 ± 0.52, and 3.38 ± 0.64 for sharpness; and 2.16 ± 0.33, 2.34 ± 0.42, and 3.53 ± 0.51 for overall image quality (p < 0.05 for all). PnP-BSN outperformed DnCNN and a Gaussian filter for denoising pulmonary perfusion MRI, which led to improved quantitative fractal analysis. Full article
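
A plug-and-play scheme of this kind alternates a data-fidelity update with a call to a learned denoiser. The sketch below illustrates that generic iteration only; the `denoiser` callable, the simple least-squares fidelity term, and all parameter values are assumptions for illustration, not the paper's AP-BSN or PnP-BSN implementation.

```python
import numpy as np

def pnp_denoise(y, denoiser, step=0.5, weight=0.8, n_iters=20):
    """Generic plug-and-play iteration: gradient step on a least-squares
    data-fidelity term, followed by a relaxed call to a denoiser prior.
    `denoiser` stands in for a trained blind-spot network (hypothetical)."""
    x = y.copy()
    for _ in range(n_iters):
        x = x - step * (x - y)                        # data-fidelity gradient step
        x = weight * denoiser(x) + (1 - weight) * x   # relaxed denoiser (prior) step
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
    noisy = clean + 0.3 * rng.standard_normal(clean.shape)

    # A simple box-blur stands in for the learned denoiser in this toy example.
    def box_denoiser(img, k=3):
        pad = np.pad(img, k // 2, mode="edge")
        out = np.zeros_like(img)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                out[i, j] = pad[i:i + k, j:j + k].mean()
        return out

    restored = pnp_denoise(noisy, box_denoiser)
    print("noisy MSE:", np.mean((noisy - clean) ** 2),
          "restored MSE:", np.mean((restored - clean) ** 2))
```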

28 pages, 567 KiB  
Article
Symmetry-Aware Sequential Recommendation with Dual-Domain Filtering Networks
by Li Li, Yueheng Du and Yingdong Wang
Symmetry 2025, 17(6), 813; https://doi.org/10.3390/sym17060813 - 23 May 2025
Viewed by 371
Abstract
The aim of sequential recommendation (SR) is to predict a user’s next interaction by analyzing their historical behavioral sequences. The proposed framework leverages the inherent symmetry in user behavior patterns, where temporal and spectral representations exhibit complementary structures that can be harmonized for robust recommendation. Conventional SR methods predominantly utilize implicit feedback (e.g., clicks and views) as model inputs, whereby observed interactions are treated as positive instances, while unobserved ones are considered negative samples. However, the inherent randomness and diversity in user behaviors inevitably introduce noise into such implicit feedback, potentially compromising the accuracy of recommendations. Recent studies have explored noise mitigation through two primary approaches: temporal-domain methods that reweight interactions to distill clean samples for comprehensive user preference modeling, and frequency-domain techniques that purify item embeddings to reduce the propagation of noise. While temporal approaches excel in sample refinement, frequency-based methods demonstrate superior capability in learning noise-resistant representations through spectral analysis. Motivated by the desire to synergize these complementary advantages, we propose SR-DFN, a novel framework that systematically addresses noise interference through coordinated time–frequency processing. Self-guided sample purification is implemented in the temporal domain of our architecture via adaptive interaction weighting, effectively distilling behaviorally significant patterns. The refined sequence is then transformed into the frequency domain, where learnable spectral filters operate to further attenuate residual noise components while preserving essential preference signals. Drawing on the convolution theorem’s revelation regarding frequency-domain operations capturing behavioral periodicity, we critically examine conventional position encoding schemes and propose an efficient parameterization strategy that eliminates redundant positional embeddings without compromising temporal awareness. Comprehensive experiments conducted on four real-world benchmark datasets demonstrate SR-DFN’s superior performance over state-of-the-art baselines, with ablation studies validating the effectiveness of our dual-domain denoising mechanism. Our findings suggest that coordinated time–frequency processing offers a principled solution for noise-resilient sequential recommendation while challenging conventional assumptions about positional encoding requirements in spectral-based approaches. The symmetry principles underlying our dual-domain approach demonstrate how the balanced processing of temporal and frequency domains can achieve superior noise resilience. Full article
(This article belongs to the Special Issue Symmetry in Intelligent Algorithms)
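
The frequency-domain half of such a dual-domain design can be pictured as a learnable spectral filter applied to the embedded interaction sequence via an FFT. The PyTorch sketch below is a minimal illustration under that assumption; the class name, shapes, and initialization are hypothetical and it is not the SR-DFN code.

```python
import torch
import torch.nn as nn

class SpectralFilterLayer(nn.Module):
    """Learnable frequency-domain filter over a sequence of item embeddings
    (hypothetical sketch of the frequency-domain branch of a dual-domain model)."""
    def __init__(self, max_len: int, dim: int):
        super().__init__()
        # One complex weight per rFFT frequency bin and embedding channel.
        n_freq = max_len // 2 + 1
        self.filter = nn.Parameter(torch.randn(n_freq, dim, 2) * 0.02)

    def forward(self, x):                     # x: (batch, seq_len, dim)
        spec = torch.fft.rfft(x, dim=1)       # to the frequency domain
        weight = torch.view_as_complex(self.filter)
        spec = spec * weight                  # attenuate/boost each frequency bin
        return torch.fft.irfft(spec, n=x.size(1), dim=1)  # back to the time domain

seq = torch.randn(4, 50, 64)                  # toy batch of embedded interactions
layer = SpectralFilterLayer(max_len=50, dim=64)
print(layer(seq).shape)                       # torch.Size([4, 50, 64])
```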

23 pages, 11814 KiB  
Article
A New Method for Optimizing the Jet-Cleaning Performance of Self-Cleaning Screen Filters: The 3D CFD-ANN-GA Framework
by Zhouyang Qin, Zhaotong Chen, Rui Chen, Jinzhu Zhang, Ningning Liu and Miao Li
Processes 2025, 13(4), 1194; https://doi.org/10.3390/pr13041194 - 15 Apr 2025
Viewed by 426
Abstract
The jet-type self-cleaning screen filter integrates industrial jet-cleaning technology into the self-cleaning process of screen filters in the drip irrigation system, which has the advantages of low water consumption, high cleaning capacity, and wide applicability compared to traditional filters. However, its commercialization faces challenges as the optimal jet cleaning mode and optimization method have not been determined. This study proposes a framework that combines computational fluid dynamics (CFD), artificial neural networks (ANN), and genetic algorithms (GA) for optimizing jet-cleaning parameters to improve performance. The results show that, among the main influencing parameters of the nozzle, the incident section diameter d and the V-groove half angle β have the most significant effects on the peak wall shear stress, action area, and water consumption for cleaning. The ANN predicts the performance with high accuracy (R² = 0.9991, MAE = 9.477) and can effectively replace the CFD model for predicting the jet-cleaning performance and optimizing the parameters. The optimization resulted in a 1.34% reduction in the peak wall shear stress, a 16.82% reduction in cleaning water consumption, and a 7.6% increase in the action area for the optimal model compared to the base model. The optimization framework combining CFD, ANN, and GA can provide an optimal cleaning parameter scheme for jet-type self-cleaning screen filters. Full article
(This article belongs to the Section Automation Control Systems)
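
The surrogate-plus-optimizer idea can be sketched as fitting a small neural network on CFD samples that map nozzle parameters to a cleaning objective, then searching the parameter space with an evolutionary optimizer. In the sketch below the training data are synthetic, the parameter bounds are made up, and SciPy's differential evolution stands in for the genetic algorithm.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)

# Hypothetical CFD samples: incident-section diameter d (mm) and V-groove
# half angle beta (deg) -> a scalar cleaning objective to maximize.
X = rng.uniform([1.0, 10.0], [4.0, 45.0], size=(200, 2))
y = np.sin(X[:, 0]) + 0.02 * X[:, 1] + 0.1 * rng.standard_normal(200)  # toy stand-in

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                         random_state=0).fit(X, y)

# Differential evolution stands in for the GA; it minimizes, so negate.
result = differential_evolution(
    lambda p: -surrogate.predict(p.reshape(1, -1))[0],
    bounds=[(1.0, 4.0), (10.0, 45.0)], seed=0)
print("optimal d, beta:", result.x, "predicted objective:", -result.fun)
```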

18 pages, 2087 KiB  
Article
Meta-Data-Guided Robust Deep Neural Network Classification with Noisy Label
by Jie Lu, Yufeng Wang, Aiju Shi, Jianhua Ma and Qun Jin
Appl. Sci. 2025, 15(4), 2080; https://doi.org/10.3390/app15042080 - 16 Feb 2025
Viewed by 774
Abstract
Deep neural network (DNN)-based classifiers have found wide application in various fields. Unfortunately, the labels of real-world training data are commonly noisy, i.e., the labels of a large percentage of training samples are wrong, which negatively affects the performance of a trained DNN classifier during inference. Therefore, training a robust DNN classifier with noisy labels is challenging in practice. To address this issue, our work designs an effective architecture for training a robust DNN classifier with noisy labels, named a cross dual-branch network guided by meta-data on a single side (CoNet-MS), in which a small amount of clean data, i.e., meta-data, are used to guide the training of the DNN classifier. Specifically, the contributions of our work are threefold. First, based on the principle of small loss, each branch, using the base classifier as a neural network module, infers a subset of samples with pseudo-clean labels, which are then used to train the other branch through a cross structure that can alleviate the cumulative impact of mis-inference. Second, a meta-guided module is designed and inserted into a single branch, e.g., the upper branch, which dynamically adjusts the ratio between the observed label and the pseudo-label output by the classifier in the loss function for each training sample. The asymmetric dual-branch design makes the two classifiers diverge, which helps them filter different types of noisy labels and avoid confirmation bias in self-training. Finally, thorough experiments demonstrate that the classifier trained with the proposed CoNet-MS is more robust: its accuracy on multiple datasets under various noisy-label ratios and noise types outperforms that of other learning-with-noisy-labels (LNL) classifiers, including the state-of-the-art meta-data-based LNL classifier. Full article
(This article belongs to the Special Issue Cutting-Edge Neural Networks for NLP (Natural Language Processing))
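
The cross-branch, small-loss selection at the core of such architectures can be sketched as each branch picking its lowest-loss samples in a batch and handing them to the other branch for the update. The function below is a hypothetical illustration of that step only; the meta-guided reweighting module of CoNet-MS is not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def cross_branch_step(model_a, model_b, opt_a, opt_b, x, y, keep_ratio=0.7):
    """One co-training step on a noisy-label batch: each branch selects its
    small-loss (likely clean) samples, and the *other* branch trains on them.
    Hypothetical sketch; the meta-guided loss reweighting is not shown."""
    with torch.no_grad():
        loss_a = F.cross_entropy(model_a(x), y, reduction="none")
        loss_b = F.cross_entropy(model_b(x), y, reduction="none")
    k = max(1, int(keep_ratio * len(y)))
    idx_a = torch.topk(loss_a, k, largest=False).indices   # A's low-loss picks
    idx_b = torch.topk(loss_b, k, largest=False).indices   # B's low-loss picks

    opt_a.zero_grad()
    F.cross_entropy(model_a(x[idx_b]), y[idx_b]).backward()  # A learns from B's picks
    opt_a.step()
    opt_b.zero_grad()
    F.cross_entropy(model_b(x[idx_a]), y[idx_a]).backward()  # B learns from A's picks
    opt_b.step()

# Toy usage with two linear classifiers on random features and noisy labels.
net_a, net_b = nn.Linear(16, 3), nn.Linear(16, 3)
opt_a = torch.optim.SGD(net_a.parameters(), lr=0.1)
opt_b = torch.optim.SGD(net_b.parameters(), lr=0.1)
x, y = torch.randn(32, 16), torch.randint(0, 3, (32,))
cross_branch_step(net_a, net_b, opt_a, opt_b, x, y)
```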

13 pages, 2872 KiB  
Review
Permeable Asphalt Pavements (PAP): Benefits, Clogging Factors and Methods for Evaluation and Maintenance—A Review
by Maria Sousa, Marisa Dinis Almeida, Cristina Fael and Isabel Bentes
Materials 2024, 17(24), 6063; https://doi.org/10.3390/ma17246063 - 11 Dec 2024
Cited by 4 | Viewed by 2032
Abstract
Permeable asphalt pavement (PAP) is an efficient solution for stormwater management, allowing water to infiltrate through its layers. This reduces surface runoff and mitigates urban flooding risks. In addition to these hydrological benefits, PAP enhances water quality by filtering pollutants such as organic and inorganic materials and microplastics. However, clogging from sediment accumulation in the pavement’s void structure often impairs its performance, reducing infiltration capacity. This review addresses several issues related to PAP, including the factors that contribute to pavement clogging, and evaluates current and emerging maintenance strategies, including manual removal, pressure washing, regenerative air sweeping and vacuum truck utilization. Additionally, different methods of assessing clogging using innovative technology such as X-Ray Computed Tomography (CT), as well as a summary of the software used to process these images, are presented and discussed as tools for identifying clogging patterns, analyzing void structure and simulating permeability. This review identifies gaps in existing methodologies and suggests innovative approaches, including the creation of self-cleaning materials designed to prevent sediment buildup, biomimetic designs modeled after natural filtration systems and maintenance protocols designed for targeted clogging depths, to support the optimization of PAP systems and promote their adoption in resilient urban infrastructure designs in alignment with Sustainable Development Goals (SDGs). Full article
(This article belongs to the Section Construction and Building Materials)

14 pages, 1887 KiB  
Article
Assessment of Self-Supervised Denoising Methods for Esophageal Speech Enhancement
by Madiha Amarjouf, El Hassan Ibn Elhaj, Mouhcine Chami, Kadria Ezzine and Joseph Di Martino
Appl. Sci. 2024, 14(15), 6682; https://doi.org/10.3390/app14156682 - 31 Jul 2024
Viewed by 1288
Abstract
Esophageal speech (ES) is a pathological voice that is often difficult to understand. Moreover, acquiring recordings of a patient’s voice before a laryngectomy is challenging, which complicates enhancing this kind of voice. Consequently, most supervised methods for enhancing ES are based on voice conversion with healthy speaker targets, an approach that may not preserve the speaker’s identity. Unsupervised methods for ES, in turn, rely mostly on traditional filters, which cannot by themselves suppress this kind of noise and are known to produce musical artifacts. To address these issues, a self-supervised method based on the Only-Noisy-Training (ONT) model was applied, which denoises a signal without needing a clean target. Four experiments were conducted using Deep Complex UNET (DCUNET) and Deep Complex UNET with a Complex Two-Stage Transformer Module (DCUNET-cTSTM), both built on the ONT approach. For comparison purposes and to compute the evaluation metrics, the pre-trained VoiceFixer model was used to restore clean wave files of esophageal speech. Although ONT-based methods work better with noisy wave files, the results show that ES can be denoised without clean targets, and hence the speaker’s identity is retained. Full article
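
One common way to train a denoiser with no clean target is to build input/target pairs from a single noisy recording, for example by splitting it into even and odd samples. The sketch below illustrates that pair construction only; it is an assumption-laden stand-in, not the ONT/DCUNET pipeline used in the paper.

```python
import numpy as np

def make_noisy_pair(signal, rng):
    """Build an (input, target) pair from a single noisy waveform by splitting
    it into even/odd samples and randomly assigning the roles. Illustrative
    sketch of training a denoiser without a clean target; not the paper's code."""
    even, odd = signal[0::2], signal[1::2]
    n = min(len(even), len(odd))
    even, odd = even[:n], odd[:n]
    return (even, odd) if rng.random() < 0.5 else (odd, even)

rng = np.random.default_rng(0)
noisy = np.sin(np.linspace(0, 40 * np.pi, 16000)) + 0.2 * rng.standard_normal(16000)
x, t = make_noisy_pair(noisy, rng)
print(x.shape, t.shape)   # two half-rate views of the same noisy recording
```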

18 pages, 6406 KiB  
Article
Contrastive Learning Joint Regularization for Pathological Image Classification with Noisy Labels
by Wenping Guo, Gang Han, Yaling Mo, Haibo Zhang, Jiangxiong Fang and Xiaoming Zhao
Electronics 2024, 13(13), 2456; https://doi.org/10.3390/electronics13132456 - 22 Jun 2024
Viewed by 1421
Abstract
The annotation of pathological images often introduces label noise, which can lead to overfitting and notably degrade performance. Recent studies have attempted to address this by filtering samples based on the memorization effects of DNNs. However, these methods often require prior knowledge of the noise rate or a small, clean validation subset, which is extremely difficult to obtain in real medical diagnosis processes. To reduce the effect of noisy labels, we propose a novel training strategy that enhances noise robustness without prior conditions. Specifically, our approach includes self-supervised regularization to encourage the model to focus more on the intrinsic connections between images rather than relying solely on labels. Additionally, we employ a historical prediction penalty module to ensure consistency between successive predictions, thereby slowing down the model’s shift from memorizing clean labels to memorizing noisy labels. Furthermore, we design an adaptive separation module to perform implicit sample selection and flip the labels of noisy samples identified by this module and mitigate the impact of noisy labels. Comprehensive evaluations of synthetic and real pathological datasets with varied noise levels confirm that our method outperforms state-of-the-art methods. Notably, our noise handling process does not require any prior conditions. Our method achieves highly competitive performance in low-noise scenarios which aligns with current pathological image noise situations, showcasing its potential for practical clinical applications. Full article
(This article belongs to the Section Bioelectronics)
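
A historical-prediction penalty of the kind described can be sketched as a consistency term between the current softmax outputs and an exponential moving average of each sample's past predictions. The class below is a hypothetical illustration; the names, the KL form of the penalty, and the momentum value are assumptions, not the authors' module.

```python
import torch
import torch.nn.functional as F

class HistoryConsistency:
    """Penalize divergence between current predictions and an exponential
    moving average of each sample's past predictions (hypothetical sketch
    of a historical-prediction penalty)."""
    def __init__(self, n_samples: int, n_classes: int, momentum: float = 0.9):
        self.history = torch.full((n_samples, n_classes), 1.0 / n_classes)
        self.momentum = momentum

    def penalty(self, logits, sample_idx):
        probs = F.softmax(logits, dim=1)
        past = self.history[sample_idx].to(logits.device)
        loss = F.kl_div(probs.log(), past, reduction="batchmean")
        with torch.no_grad():                 # update the running history
            self.history[sample_idx] = (self.momentum * past.cpu()
                                        + (1 - self.momentum) * probs.detach().cpu())
        return loss

reg = HistoryConsistency(n_samples=1000, n_classes=5)
logits = torch.randn(16, 5)
idx = torch.randint(0, 1000, (16,))
print(reg.penalty(logits, idx))
```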

19 pages, 5719 KiB  
Article
A Standardized Treatment Model for Head Loss of Farmland Filters Based on Interaction Factors
by Zhenji Liu, Chenyu Lei, Jie Li, Yangjuan Long and Chen Lu
Agriculture 2024, 14(5), 788; https://doi.org/10.3390/agriculture14050788 - 20 May 2024
Cited by 2 | Viewed by 1362
Abstract
A head loss model for pressureless mesh filters used in farmland irrigation was developed by integrating the four basic test factors: irrigation flow, filter cartridge speed, self-cleaning flow, and initial sand content. The model’s coefficient of determination was found to be 98.61%. Among the basic factors, the total irrigation flow accounted for only 17.20% of the relatively small self-cleaning flow. The contribution of initial sand content was found to be the smallest, with a coefficient of only 0.0166. Furthermore, the contribution rate of the flow term was significantly higher than that of the initial sand content, with a value of 159.73%. In terms of quadratic interaction, the difference between the interaction term of flushing flow and filter cartridge speed, and the interaction term of filter cartridge speed and self-cleaning flow was 38.42%. On the other hand, the difference within this level for the interaction term between initial sand content and filter cartridge speed, as well as the interaction term between irrigation flow and self-cleaning flow, was 2.82%. Finally, through joint optimization of the response surface and model, the optimal values for the irrigation flow rate, filter cartridge speed, self-cleaning flow rate, and initial sand content were determined to be 121.687 m³·h⁻¹, 1.331 r·min⁻¹, 19.980 m³·h⁻¹, and 0.261 g·L⁻¹; the measured minimum head loss was found to be 21.671 kPa. These research findings can serve as a reference for enhancing the design of farmland filters and optimizing irrigation systems. Full article
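
A response-surface model of this type is a second-order polynomial regression containing main, interaction, and quadratic terms, which can then be minimized numerically. The sketch below shows that workflow on synthetic placeholder data; the factor ranges and coefficients are invented and do not reproduce the paper's fitted model.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Synthetic stand-in data: irrigation flow (m^3/h), cartridge speed (r/min),
# self-cleaning flow (m^3/h), initial sand content (g/L) -> head loss (kPa).
X = rng.uniform([80, 0.5, 10, 0.1], [160, 2.5, 30, 0.5], size=(150, 4))
y = (0.002 * (X[:, 0] - 120) ** 2 + 3 * X[:, 1] + 0.5 * X[:, 2]
     + 8 * X[:, 3] + 0.05 * X[:, 1] * X[:, 2] + rng.standard_normal(150))

poly = PolynomialFeatures(degree=2, include_bias=False)   # adds x_i*x_j interactions
model = LinearRegression().fit(poly.fit_transform(X), y)
print("R^2 on synthetic data:", model.score(poly.transform(X), y))

# Locate the factor combination minimizing predicted head loss within the bounds.
res = minimize(lambda p: model.predict(poly.transform(p.reshape(1, -1)))[0],
               x0=[120, 1.5, 20, 0.3],
               bounds=[(80, 160), (0.5, 2.5), (10, 30), (0.1, 0.5)])
print("predicted optimum:", res.x, "head loss:", res.fun)
```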

7 pages, 4175 KiB  
Communication
A High-Energy, Wide-Spectrum, Spatiotemporal Mode-Locked Fiber Laser
by Boyuan Ge, Yajun Lou, Silin Guo, Yue Cai and Xinhai Zhang
Micromachines 2024, 15(5), 644; https://doi.org/10.3390/mi15050644 - 12 May 2024
Cited by 3 | Viewed by 1823
Abstract
In this article, we demonstrate a high-energy, wide-spectrum, spatiotemporal mode-locked (STML) fiber laser. Unlike traditional single-mode fiber lasers, STML fiber lasers theoretically enable mode-locking with various combinations of transverse modes. The laser can deliver two different STML pulse sequences with different pulse widths, spectra and beam profiles, due to the different compositions of transverse modes in the output pulses. Moreover, we achieve a wide-spectrum pulsed output with a single-pulse energy of up to 116 nJ, by weakening the spectral filtering and utilizing self-cleaning. Strong spatial and spectral filtering are usually thought to be necessary for achieving STML. Our experiment verifies the necessity of spatial filtering for achieving STML, and we show that weakening unnecessary spectral filtering provides an effective way to increase the pulse energy and spectrum width of mode-locked fiber lasers. Full article
(This article belongs to the Special Issue Fiber Lasers and Applications)

20 pages, 8171 KiB  
Article
Low-Cost Sensor System for Air Purification Process Evaluation
by Arkadiusz Moskal, Wiktor Jagodowicz, Agata Penconek and Krzysztof Zaraska
Sensors 2024, 24(6), 1769; https://doi.org/10.3390/s24061769 - 9 Mar 2024
Cited by 1 | Viewed by 2241
Abstract
As civilisation develops, awareness of the impact of diverse aerosol particles on human health and the environment is growing. New advanced materials and techniques are needed to purify the air and reduce this impact. This creates a need for fast, low-cost devices that evaluate air quality with respect to particulate and gaseous impurities, especially where gas chromatography (GC) techniques are unavailable. Small, portable, low-cost systems may work on their own or be incorporated into devices responsible for air-cleaning processes, such as filters, smoke adsorbers, or plasma air cleaners. Given the above, this study proposes a self-assembled low-cost system for evaluating air quality that can be used in many outdoor and indoor applications. ESP32 boards with the wireless communication protocol ESP-NOW were used as the framework of the system. The concentration of aerosol particles was measured using Alphasense sensors, and the concentrations of the following gases were measured: NO2, SO2, O3, CO, CO2, and H2S. The system was used to evaluate the quality of air containing tobacco smoke after it passed through an actual DBD plasma reactor where the purification occurred. A substantial reduction in aerosol particles and a reduction in the SO2 concentration were detected, while an increase in the NO2 concentration was observed as an undesirable effect. The aerosol particle measurements were compared with those from a professional device (GRIMM, Hamburg, Germany), which showed the same trends in aerosol particle behaviour. The results are promising and are a step towards a low-cost, efficient system for evaluating air quality in both indoor and outdoor settings. Full article
(This article belongs to the Section Environmental Sensing)
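
Once upstream and downstream readings are available, evaluating the purification stage reduces to simple removal-efficiency arithmetic. The snippet below shows that calculation on made-up concentration values; the species list and numbers are illustrative only.

```python
# Hypothetical paired readings before/after the plasma reactor (concentrations
# in µg/m^3 for PM and ppm for gases); values are illustrative only.
upstream   = {"PM2.5": 310.0, "SO2": 0.42, "NO2": 0.08, "CO": 3.1}
downstream = {"PM2.5": 24.0,  "SO2": 0.11, "NO2": 0.19, "CO": 3.0}

def removal_efficiency(c_in: float, c_out: float) -> float:
    """Percent reduction across the cleaning stage; negative means an increase."""
    return 100.0 * (c_in - c_out) / c_in

for species in upstream:
    eff = removal_efficiency(upstream[species], downstream[species])
    print(f"{species}: {eff:+.1f}% change")   # NO2 comes out negative, as observed
```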

11 pages, 1126 KiB  
Article
Accuracy of Holographic Real-Time Mode Decomposition Methods Used for Multimode Fiber Laser Emission
by Denis S. Kharenko, Alexander A. Revyakin, Mikhail D. Gervaziev, Mario Ferraro, Fabio Mangini and Sergey A. Babin
Photonics 2023, 10(11), 1245; https://doi.org/10.3390/photonics10111245 - 9 Nov 2023
Cited by 5 | Viewed by 1638
Abstract
Mode decomposition is a powerful tool for analyzing the modal content of optical multimode radiation. There are several basic principles on which this tool can be implemented, including near-field intensity analysis, machine learning, and spatial correlation filtering (SCF). The latter is meant to be applied to a spatial light modulator and allows one to obtain information on the mode amplitudes and phases of temporally stable beams by only analyzing experimental data. As a matter of fact, techniques based on SCF have already been successfully used in several studies, e.g., for investigating the Kerr beam self-cleaning effect and determining the modal content of Raman fiber lasers. Still, such techniques have a major drawback, i.e., they require acquisition times as long as several minutes, thus being unfit for the investigation of fast mode distribution dynamics. In this paper, we numerically study three types of digital holograms, which permits us to determine, at the same time, the parameters of a set of modes of multimode beams. Because all modes are simultaneously characterized, the processing speed of these real-time mode decomposition methods in experimental realizations will be limited only by the acquisition rate of imaging devices, e.g., state-of-the-art CCD camera performance may provide decomposing rates above 1 kHz. Here, we compare the accuracy of conjugate symmetric extension (CSE), double-phase holograms (DPH), and phase correlation filtering (PCF) methods in retrieving the mode amplitudes of optical beams composed of either three, six, or ten modes. In order to provide a statistical analysis of the outcomes of these three methods, we propose a novel algorithm for the effective enumeration of mode parameters, which covers all possible beam modal compositions. Our results show that the best accuracy is achieved when the amplitude-phase mode distribution associated with multiple frequency PCF techniques is encoded by Jacobi–Anger expansion. Full article
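
All of these decomposition methods ultimately recover complex mode amplitudes as overlap integrals between the measured field and a mode basis. The sketch below demonstrates that projection step with a Hermite-Gaussian basis as a stand-in; it is not an implementation of the CSE, DPH, or PCF holographic techniques themselves.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def hg_mode(m, n, x, y, w0=1.0):
    """Normalized Hermite-Gaussian mode HG_mn on a grid (stand-in mode basis)."""
    X, Y = np.meshgrid(x, y)
    cm = np.zeros(m + 1); cm[m] = 1.0
    cn = np.zeros(n + 1); cn[n] = 1.0
    field = (hermval(np.sqrt(2) * X / w0, cm) * hermval(np.sqrt(2) * Y / w0, cn)
             * np.exp(-(X**2 + Y**2) / w0**2))
    return field / np.sqrt(np.sum(np.abs(field) ** 2))

x = np.linspace(-4, 4, 256)
modes = [hg_mode(m, n, x, x) for m, n in [(0, 0), (1, 0), (0, 1)]]

# Synthesize a three-mode beam with known complex amplitudes, then recover them
# by projecting the field onto each basis mode (discrete overlap integral).
true_amps = np.array([0.8, 0.5 * np.exp(1j * 0.7), 0.33 * np.exp(-1j * 1.2)])
beam = sum(a * m for a, m in zip(true_amps, modes))
recovered = np.array([np.sum(beam * np.conj(m)) for m in modes])
print(np.round(recovered, 3))   # matches true_amps up to numerical error
```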

12 pages, 4910 KiB  
Communication
Study on Plugging Material and Plugging Mechanism of Crude Oil Sand Water Filter Pipe
by Wenhui Zhang, Qingfeng Liu, Hengyu Chen, Huibin Sheng, Jingen Yan, Yongtao Gu, Xianqiang Huang and Bingchuan Yang
Water 2023, 15(21), 3714; https://doi.org/10.3390/w15213714 - 24 Oct 2023
Viewed by 1746
Abstract
To develop a biological self-cleaning, anti-clogging, high-permeability sand filter tube, it is essential to analyze the plugging material and plugging mechanism of the crude oil sand water filter pipe. Under laboratory conditions, the plugging components were preliminarily separated using vacuum drying, condensation reflux, chromatographic separation, and other techniques, and were then characterized via XRD, infrared analysis, ¹H NMR, and ¹³C NMR. The XRD results showed that the solid components were clay and sand grains, while the infrared analysis, ¹H NMR, and ¹³C NMR demonstrated that the main organic component of the plug is asphaltene from crude oil, with the proportions of aromatic and saturated components being close. Full article
(This article belongs to the Special Issue Science and Technology for Water Purification)

12 pages, 2890 KiB  
Article
Streamflow Reconstructions Using Tree-Ring-Based Paleo Proxies for the Sava River Basin (Slovenia)
by Glenn Tootle, Abdoul Oubeidillah, Emily Elliott, Giuseppe Formetta and Nejc Bezak
Hydrology 2023, 10(7), 138; https://doi.org/10.3390/hydrology10070138 - 28 Jun 2023
Cited by 8 | Viewed by 2259
Abstract
The Sava River Basin (SRB) extends across six countries (Slovenia, Croatia, Bosnia and Herzegovina, Serbia, Albania, and Montenegro) and is a major tributary of the Danube River (DR). The Sava River (SR) originates in the alpine region of Slovenia, and, in support of a Slovenian government initiative to increase clean, sustainable energy, multiple hydropower facilities have been constructed within the past ~20 years. Given the importance of this river system for varying demands, including energy production, information about past (paleo) drought and pluvial periods would provide important information to water managers and planners. Seasonal (April–May–June–July–August–September—AMJJAS) streamflow data were obtained for two SRB gauges (Jesenice and Catez) in Slovenia. The Jesenice gauge is in the extreme headwaters of the SR, upstream of any major water control structures, and is considered an unimpaired (minimal anthropogenic influence) gauge. The Catez gauge is located on the SR near the Slovenia–Croatia border, thus providing an estimate of streamflow leaving Slovenia (entering Croatia). The Old World Drought Atlas (OWDA) provides an annual June–July–August (JJA) self-calibrating Palmer Drought Severity Index (scPDSI) derived from 106 tree-ring chronologies for 5414 grid points across Europe from 0 to 2012 AD. In lieu of tree-ring chronologies, this dataset was used as a proxy to reconstruct (for ~2000 years) seasonal streamflow. Prescreening methods included the correlation and temporal stability of seasonal streamflow and scPDSI cells. The retained scPDSI cells were then used as predictors (independent variables) to reconstruct streamflow (predictive and/or dependent variables) in regression-based models. This resulted in highly skillful reconstructions of SRB seasonal streamflow from 0 to 2012 AD. The reconstructions were evaluated, and both low flow (i.e., drought) and high flow (i.e., pluvial) periods were identified for various filters (5-year to 30-year). When evaluating the most recent ~20 years (2000 to present), multiple low-flow (drought) periods were identified. For various filters (5-year to 15-year), the 2003 end-year consistently ranked as one of the lowest periods, while the 21-year period ending in 2012 was the lowest flow period in the ~2000-year reconstructed-observed-historic period of record. The ~30-year period ending in 2020 was the lowest flow period since the early 6th century. A decrease in pluvial (wet) periods was identified in the observed-historic record when compared to the paleo record, again confirming an apparent decline in streamflow. Given the increased activities (construction of water control structures) impacting the Sava River, the results provide important information to water managers and planners. Full article
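
The prescreen-and-regress workflow can be sketched as follows: retain only the proxy grid cells whose scPDSI correlates with observed seasonal flow above a threshold over the instrumental overlap, calibrate a regression on that overlap, and apply it to the full proxy record. The arrays below are random placeholders, not the OWDA or Sava gauge data, and the 0.4 threshold is an assumption.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)

# Placeholder proxy matrix: 2013 years x 40 candidate scPDSI grid cells,
# and 60 years of observed AMJJAS streamflow overlapping the end of the record.
years = np.arange(0, 2013)
scpdsi = rng.standard_normal((len(years), 40))
obs_flow = 50 + 10 * scpdsi[-60:, 5] + 6 * scpdsi[-60:, 12] + rng.standard_normal(60)

# Prescreen: keep cells whose correlation with observed flow exceeds a threshold.
overlap = scpdsi[-60:, :]
corr = np.array([np.corrcoef(overlap[:, j], obs_flow)[0, 1] for j in range(40)])
keep = np.abs(corr) > 0.4
print("cells retained:", keep.sum())

# Calibrate on the overlap period, then reconstruct the full ~2000-year series.
model = LinearRegression().fit(overlap[:, keep], obs_flow)
reconstruction = model.predict(scpdsi[:, keep])
print("calibration R^2:", model.score(overlap[:, keep], obs_flow))
print("reconstructed length:", reconstruction.shape[0])
```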

21 pages, 4064 KiB  
Article
Reducing Octane Number Loss in Gasoline Refining Process by Using the Improved Sparrow Search Algorithm
by Jian Chen, Jiajun Zhu, Xu Qin and Wenxiang Xie
Sustainability 2023, 15(8), 6571; https://doi.org/10.3390/su15086571 - 13 Apr 2023
Viewed by 2387
Abstract
Gasoline is the primary fuel used in small cars, and the exhaust emissions from gasoline combustion have a significant impact on the atmosphere. Efforts to clean up gasoline have therefore focused primarily on reducing the olefin and sulfur content of gasoline, while maintaining as much of the octane content as possible. With the aim of minimizing the loss of octane, this study investigated various machine learning algorithms to identify the best self-fitness function. An improved octane loss optimization model was developed, and the best octane loss calculation algorithm was identified. Firstly, the operational and non-operational variables were separated in the data pre-processing section, and the variables were then filtered using the random forest method and the grey correlation degree, respectively. Secondly, octane loss prediction models were built using four different machine learning techniques: back propagation (BP), radial basis function (RBF), ensemble learning representing extreme gradient boosting (XGboost) and support vector regression (SVR). The prediction results show that the XGboost model is optimal. Finally, taking the minimum octane loss as the optimization object and a sulfur content of less than 5 µg/g as the constraint, an octane loss optimization model was established. The XGboost prediction model trained above as the fitness function was substituted into the genetic algorithm (GA), sparrow search algorithm (SSA), particle swarm optimization (PSO) and the grey wolf optimization (GWO) algorithm, respectively. The optimization results of these four types of algorithms were compared. The findings demonstrate that among the nine randomly selected sample points, SSA outperforms all other three methods with respect to optimization stability and slightly outperforms them with respect to optimization accuracy. For the RON loss, 252 out of 326 samples (about 77% of the samples) reached 30%, which is better than the optimization results published in the previous literature. Full article
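
Using a trained predictor as the fitness function of an optimizer can be sketched as follows: fit gradient-boosting surrogates for RON loss and sulfur on operating-variable data, then minimize the predicted RON loss under the sulfur constraint. The data, variable count, and penalty form below are synthetic, and SciPy's differential evolution stands in for the sparrow search algorithm.

```python
import numpy as np
from xgboost import XGBRegressor
from scipy.optimize import differential_evolution

rng = np.random.default_rng(4)

# Synthetic stand-in data: 5 operating variables -> RON loss and product sulfur.
X = rng.uniform(0.0, 1.0, size=(400, 5))
ron_loss = 1.2 + 0.8 * X[:, 0] - 0.5 * X[:, 2] + 0.1 * rng.standard_normal(400)
sulfur = 3.0 + 4.0 * X[:, 1] - 2.0 * X[:, 3] + 0.1 * rng.standard_normal(400)

ron_model = XGBRegressor(n_estimators=200, max_depth=4).fit(X, ron_loss)
s_model = XGBRegressor(n_estimators=200, max_depth=4).fit(X, sulfur)

def fitness(p):
    """Predicted RON loss, with a large penalty if predicted sulfur > 5 µg/g."""
    p = p.reshape(1, -1)
    penalty = 1e3 * max(0.0, float(s_model.predict(p)[0]) - 5.0)
    return float(ron_model.predict(p)[0]) + penalty

# Differential evolution stands in for the sparrow search algorithm here.
result = differential_evolution(fitness, bounds=[(0.0, 1.0)] * 5, seed=0)
print("optimal operating variables:", np.round(result.x, 3),
      "predicted RON loss:", round(result.fun, 3))
```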

14 pages, 10639 KiB  
Article
Restoration of a Textile Artefact: A Comparison of Cleaning Procedures Applied to a Historical Tapestry from the Quirinale Palace (Rome)
by Vittoria Guglielmi, Valeria Comite, Chiara Andrea Lombardi, Andrea Bergomi, Elisabetta Boanini, Roberto Bonomi, Elisa Monfasani, Letizia Sassi, Mattia Borelli and Paola Fermo
Appl. Sci. 2023, 13(4), 2669; https://doi.org/10.3390/app13042669 - 19 Feb 2023
Cited by 2 | Viewed by 2472
Abstract
The cleaning of textile artefacts, and in particular of historical tapestries, is generally carried out using standard methods. Different cleaning procedures, including a new method based on a hydro-aspiration mechanism recently developed by restorers to improve cleaning efficiency, were applied to a historical tapestry belonging to the lower edge of one of the tapestries of the “Ulysses Stories” series exhibited at the Quirinale Palace (Rome). The tapestry was made of wool and silk and has precious decorations made of metal yarns, which are particularly fragile. The new cleaning system was compared with the traditional methods commonly used by restorers for tapestry cleaning. For this purpose, the quantity and chemical composition of the particles removed by the different cleaning systems and collected on quartz fibre filters were estimated by means of analytical techniques: IC (Ion Chromatography) for quantifying the ionic species collected in the rinsing water, the TOT (Thermal Optical Transmittance) method for quantifying the carbonaceous particles, and SEM-EDX (Scanning Electron Microscopy coupled with Energy Dispersive X-ray Spectroscopy) for yarn morphological characterization and elemental analysis of the deposited particles. The objective of this study is to identify the correct cleaning method to apply to the polymaterial tapestry and, in particular, to the gilded silver and gold metallic yarns, whose conservation state requires preserving the “self-protection” patina needed for the future exhibition inside the Quirinale Palace. The new hydro-aspiration method proved more effective at removing dirt while preserving the structure of the metallic threads, being less invasive towards the fragile surface patina. Full article
