Search Results (1,226)

Search Parameters:
Keywords = interval transformation

21 pages, 3223 KB  
Article
Oxidative Degradation Mechanism of Zinc White Acrylic Paint: Uneven Distribution of Damage Under Artificial Aging
by Mais Khadur, Victor Ivanov, Artem Gusenkov, Alexander Gulin, Marina Soloveva, Yulia Diakonova, Yulian Khalturin and Victor Nadtochenko
Heritage 2025, 8(10), 419; https://doi.org/10.3390/heritage8100419 - 3 Oct 2025
Abstract
Accelerated artificial aging of zinc oxide (ZnO)-based acrylic artists’ paint, filled with calcium carbonate (CaCO₃) as an extender, was carried out for a total of 1963 h (~8 × 10⁷ lux·h), with assessments at specific intervals. The total color difference ΔE* remained below 2 (CIELab-76 system) over 1725 h of aging, while the human eye notices color change at ΔE* > 2. Oxidative degradation of organic components in the paint to form volatile products was revealed by attenuated total reflectance–Fourier transform infrared (ATR-FTIR) spectroscopy, micro-Raman spectroscopy, and scanning electron microscopy with energy-dispersive X-ray spectroscopy (SEM-EDS). It appears that deep oxidation of organic intermediates and volatilization of organic matter may be responsible for the relatively small ΔE* color difference during aging of the samples. To elucidate the degradation pathways, principal component analysis (PCA) was applied to the spectral data, revealing: (1) the catalytic role of ZnO in accelerating photodegradation, (2) the Kolbe photoreaction, (3) the decomposition of the binder to form volatile degradation products, and (4) the relative photoinactivity of CaCO₃ compared with ZnO, showing slower degradation in areas with a higher CaCO₃ content compared with those dominated by ZnO. These results provide fundamental insights into formulation-specific degradation processes, offering practical guidance for the development of more durable artist paints and conservation strategies for acrylic artworks.
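The CIE76 total color difference used above is the Euclidean distance between two CIELab triplets, ΔE* = √(ΔL*² + Δa*² + Δb*²). A minimal sketch with illustrative (not measured) values:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 total color difference between two (L*, a*, b*) triplets."""
    return math.dist(lab1, lab2)  # sqrt(dL*^2 + da*^2 + db*^2)

# A shift of this size stays below the deltaE* ~ 2 visibility threshold
# cited in the abstract (hypothetical Lab values).
print(delta_e_76((92.0, -1.2, 3.4), (91.2, -0.9, 4.5)))  # ~1.39
```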

24 pages, 462 KB  
Article
New Results on the Computation of Periods of IETs
by Antonio Linero Bas and Gabriel Soler López
Mathematics 2025, 13(19), 3175; https://doi.org/10.3390/math13193175 - 3 Oct 2025
Abstract
We introduce a novel technique for computing the periods of (d, k)-IETs based on Rauzy induction R. Specifically, we establish a connection between the set of periods of an interval exchange transformation (IET) T and those of the IET T′ obtained either by applying the Rauzy operator R to T or by considering the Poincaré first return map. Rauzy matrices play a central role in this correspondence whenever T lies in the domain of R (Theorem 4). Furthermore, Theorem 6 addresses the case when T is not in the domain of R, while Theorem 5 deals with IETs having associated reducible permutations. As an application, we characterize the set of periods of oriented 3-IETs (Theorem 8), and we also propose a general framework for studying the periods of (d, k)-IETs. Our approach provides a systematic method for determining the periods of non-transitive IETs. In general, given an IET with d discontinuities, if Rauzy induction allows us to descend to another IET whose periodic components are already known, then the main theorems of this paper can be applied to recover the set of periods of the original IET. This method has also been applied to obtain the set of periods of all (2, k)-IETs and some (3, k)-IETs, k ≥ 1. Several open problems are presented at the end of the paper.
(This article belongs to the Section C2: Dynamical Systems)
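For orientation, an IET cuts [0, 1) into d subintervals and rearranges them by a permutation; a period of a point x is the least n ≥ 1 with Tⁿ(x) = x. A brute-force toy sketch in exact rational arithmetic, illustrating the definitions only and not the paper's Rauzy-induction machinery:

```python
from fractions import Fraction

def make_iet(lengths, perm):
    """IET on [0, 1): cut into subintervals of the given lengths (left to right)
    and reassemble them so interval i lands in position perm[i]."""
    n = len(lengths)
    lefts = [sum(lengths[:i], Fraction(0)) for i in range(n)]
    new_lefts, pos = {}, Fraction(0)
    for i in sorted(range(n), key=lambda i: perm[i]):  # lay out in permuted order
        new_lefts[i] = pos
        pos += lengths[i]

    def T(x):
        for i in range(n):
            if lefts[i] <= x < lefts[i] + lengths[i]:
                return new_lefts[i] + (x - lefts[i])
        raise ValueError("x outside [0, 1)")
    return T

def period(T, x, max_iter=10_000):
    """Least n >= 1 with T^n(x) = x, or None if not found within max_iter."""
    y = T(x)
    for n in range(1, max_iter + 1):
        if y == x:
            return n
        y = T(y)
    return None

# A 2-IET swapping pieces of lengths 3/5 and 2/5 is the rotation by 2/5:
T = make_iet([Fraction(3, 5), Fraction(2, 5)], perm=[1, 0])
print(period(T, Fraction(1, 10)))  # -> 5
```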

27 pages, 2395 KB  
Article
Revealing Short-Term Memory Communication Channels Embedded in Alphabetical Texts: Theory and Experiments
by Emilio Matricciani
Information 2025, 16(10), 847; https://doi.org/10.3390/info16100847 - 30 Sep 2025
Abstract
The aim of the present paper is to further develop a theory on the flow of linguistic variables making up a sentence, namely, the transformation of (a) characters into words; (b) words into word intervals; and (c) word intervals into sentences. The relationship between two linguistic variables is studied as a communication channel whose performance is determined by the slope of their regression line and by their correlation coefficient. The mathematical theory is applicable to any field/specialty in which a linear relationship holds between two variables. The signal-to-noise ratio Γ is a figure of merit of how “deterministic” a channel is, i.e., of how negligible the scattering of the data around the regression line is: the larger Γ, the more “deterministic” the channel. In conclusion, humans have invented codes in which the sequences of symbols that make up words cannot vary very much when indicating single physical or mental objects of their experience (larger Γ). On the contrary, large variability (smaller Γ) is achieved by introducing interpunctions to make word intervals, and word intervals make sentences that communicate concepts. This theory can inspire new lines of research in cognitive science.
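The regression-channel statistics are easy to compute; since the abstract does not give the paper's exact definition of Γ, the sketch below uses the generic regression signal-to-noise ratio r²/(1 − r²) (explained over residual variance) as a stand-in, alongside the slope:

```python
import numpy as np

def channel_figures(x, y):
    """Slope and correlation of the y-on-x regression line, plus a generic
    signal-to-noise figure of merit r^2 / (1 - r^2). The paper's exact
    definition of Gamma is not reproduced in the abstract; this is a stand-in."""
    slope, _intercept = np.polyfit(x, y, 1)
    r = np.corrcoef(x, y)[0, 1]
    return slope, r, r**2 / (1.0 - r**2)

# Two hypothetical linguistic variables in a near-linear relationship:
rng = np.random.default_rng(0)
x = rng.uniform(10, 30, 200)              # e.g., words per sentence
y = 0.45 * x + rng.normal(0, 1.0, 200)    # e.g., word intervals per sentence
print(channel_figures(x, y))              # tight scatter -> large Gamma
```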
25 pages, 730 KB  
Review
Treatment-Related Adverse Events in Individuals with BRAF-Mutant Cutaneous Melanoma Treated with BRAF and MEK Inhibitors: A Systematic Review and Meta-Analysis
by Silvia Belloni, Rosamaria Virgili, Rosario Caruso, Cristina Arrigoni, Arianna Magon, Gennaro Rocco and Maddalena De Maria
Cancers 2025, 17(19), 3152; https://doi.org/10.3390/cancers17193152 - 28 Sep 2025
Abstract
Objectives: We conducted a systematic review of clinical trials and case reports analyzing the safety of the currently approved BRAF and MEK inhibitors in adults with cutaneous melanoma (CM), and a meta-analysis to estimate the pooled prevalence of treatment-related adverse events (TRAEs). Methods: We systematically searched six databases for studies published since 2009. The TRAE absolute frequencies reported in primary studies were aggregated using the Metaprop command in Stata 17, which calculates 95% confidence intervals (CIs) incorporating the Freeman–Tukey double arcsine transformation of proportions to stabilize variances within random-effects models. Methodological quality was assessed using the RoB 2 tool for randomized controlled trials (RCTs) and the ROBINS-I tool for non-randomized studies. Results: Twelve RCTs, thirteen prospective cohort studies (PCSs), and ten case reports were included. Meta-analysis was feasible for two regimens: vemurafenib 960 mg monotherapy and dabrafenib 150 mg twice daily plus trametinib 1–2 mg daily. The most common TRAEs during vemurafenib treatment were musculoskeletal and connective-tissue disorders (24%, 95% CI: 6–41%, p = 0.01), with arthralgia as the most prevalent (44%, 95% CI: 29–59%, p < 0.001), followed by rash (39%, 95% CI: 22–56%, p < 0.001). The most common TRAEs during dabrafenib plus trametinib were constitutional toxicities (classified in CTCAE as ‘General disorders and administration site conditions’; 25%, 95% CI: 14–37%, p < 0.001), with fatigue as the most prevalent (47%, 95% CI: 38–56%, p < 0.001), followed by pyrexia (40%, 95% CI: 26–54%, p < 0.001). Squamous cell carcinoma and keratoacanthoma were among the most frequent grade ≥ 3 cutaneous adverse events observed with vemurafenib therapy. Conclusions: Although additional large-scale studies are needed to corroborate these findings, each treatment has a distinct toxicity profile that should be considered when developing personalized risk-stratified treatment plans and in guiding healthcare resource allocation in melanoma care.
(This article belongs to the Section Cancer Therapy)
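The pooling step is compact enough to sketch. Below, the Freeman–Tukey double arcsine transform and an inverse-variance pool on the transformed scale, with hypothetical counts; Metaprop's random-effects weighting and the back-transformation to a proportion are omitted:

```python
import numpy as np

def freeman_tukey(events, n):
    """Freeman-Tukey double arcsine transform of a proportion and its
    (approximately) stabilized variance 1 / (n + 0.5)."""
    t = (np.arcsin(np.sqrt(events / (n + 1.0)))
         + np.arcsin(np.sqrt((events + 1.0) / (n + 1.0))))
    return t, 1.0 / (n + 0.5)

# Hypothetical TRAE counts and study sizes; fixed-effect pooling for brevity.
events = np.array([12.0, 30.0, 8.0])
n = np.array([50.0, 120.0, 40.0])
t, var = freeman_tukey(events, n)
w = 1.0 / var
print(np.sum(w * t) / np.sum(w))  # pooled estimate on the transformed scale
```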

24 pages, 1904 KB  
Article
Watermarking Fine-Tuning Datasets for Robust Provenance
by Ivo Gergov and Georgi Tsochev
Appl. Sci. 2025, 15(19), 10457; https://doi.org/10.3390/app151910457 - 26 Sep 2025
Abstract
Large Language Models are often fine-tuned on proprietary corpora, motivating reliable provenance signals. A corpus-level watermark method is proposed for fine-tuning datasets that survives training and common text transformations. The method subtly biases synonym choices according to a secret key (PRF) and encodes a multi-bit payload with an error-correcting code, enabling keyed detection via a generalized likelihood ratio test with permutation-calibrated p-values. For short offline passages (~100 words), the channel is valid but statistically underpowered: the average density is ~0.0165, and the median p-value is close to 1.0. In generative tests with Mistral 7B across 12 configurations and 12,720 texts, 0.00% detection was observed at very high quality (~99.8%). As limited base cases, positive detection was reported for other setups: 8.9% (offline), 5.0% (Mistral 7B), and 3.0% (Llama2-13B). A permutation test (R = 5000), confidence intervals, and power analysis were added. Quality impact statements were refined, with “minimal impact” used instead of “imperceptible.” In this study, limitations and ethical use are discussed, and directions for stronger semantic channels and model-based detectors are outlined.
(This article belongs to the Section Computing and Artificial Intelligence)
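The embed/detect loop described above can be sketched end to end. A toy reduction, assuming hypothetical two-word synonym sets and a hit-fraction detector with random-key calibration in place of the paper's multi-bit ECC payload and generalized likelihood ratio test:

```python
import hashlib
import hmac
import random

# Toy synonym sets (hypothetical; the paper's sets, PRF, and payload coding are not given).
SYNSETS = {
    "big": ["big", "large"], "quick": ["quick", "fast"],
    "begin": ["begin", "start"], "show": ["show", "demonstrate"],
}
CANON = {w: k for k, ws in SYNSETS.items() for w in ws}

def preferred(slot, secret):
    """PRF-keyed synonym choice: HMAC of the slot name picks one set member."""
    digest = hmac.new(secret, slot.encode(), hashlib.sha256).digest()
    options = SYNSETS[slot]
    return options[digest[0] % len(options)]

def embed(tokens, secret):
    """Bias every synonym slot toward the key-preferred variant."""
    return [preferred(CANON[t], secret) if t in CANON else t for t in tokens]

def score(tokens, secret):
    """Fraction of synonym slots carrying the key-preferred variant."""
    slots = [t for t in tokens if t in CANON]
    hits = sum(t == preferred(CANON[t], secret) for t in slots)
    return hits / len(slots) if slots else 0.0

def p_value(tokens, secret, R=5000, seed=0):
    """Calibrate the observed score against R random keys (permutation-style null)."""
    rng = random.Random(seed)
    observed = score(tokens, secret)
    null = sum(score(tokens, rng.randbytes(16)) >= observed for _ in range(R))
    return (1 + null) / (R + 1)

key = b"owner-secret"
text = embed("the results show a big and quick transformation".split(), key)
# With only three synonym slots the detector is underpowered (cf. the short
# offline-passage result in the abstract): the best achievable p-value is ~(1/2)**3.
print(score(text, key), p_value(text, key, R=1000))
```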

23 pages, 4045 KB  
Article
Analysis and Optimization of Dynamic Characteristics of Primary Frequency Regulation Under Deep Peak Shaving Conditions for Industrial Steam Extraction Heating Thermal Power Units
by Libin Wen, Jinji Xi, Hong Hu and Zhiyuan Sun
Processes 2025, 13(10), 3082; https://doi.org/10.3390/pr13103082 - 26 Sep 2025
Abstract
This study investigates the primary frequency regulation dynamic characteristics of industrial steam extraction turbine units under deep peak regulation conditions. A high-fidelity integrated dynamic model was established, incorporating the governor system, steam turbine with extraction modules, and interconnected pipeline dynamics. Through comparative simulations and experimental validation, the model demonstrates high accuracy in replicating real-unit responses to frequency disturbances. For the power grid system in this study, frequency disturbances mainly come from three sources: first, the power imbalance created by random load-side fluctuations and the intermittency of renewable generation; second, the transformation of the energy structure, which directly reduces the available frequency regulation resources; and third, the collapse of system-equivalent inertia caused by the integration of high-penetration renewable energy, which significantly reduces the rotational inertia provided by traditional synchronous units. For the cogeneration unit in Guangxi considered in this article and its control system, key findings reveal that increased peak regulation depth (30–50% of rated power) exacerbates nonlinear fluctuations, owing to boiler combustion stability thresholds and steam pressure variations. Key parameters—dead band, power limit, and droop coefficient—have coupled effects on performance. Specifically, an excessive dead band (>0.10 Hz) reduces sensitivity; likewise, an excessive power limit (>4.44%) leads to overshoot and slow recovery. The robustness of the parameter configurations is further validated under coupled random and intermittent source-load disturbances, highlighting enhanced anti-interference capability. By constructing a coordinated control model of primary frequency regulation, the coordinated boiler–turbine regulation strategy is studied, and the optimal intervals of the frequency regulation dead band, droop coefficient, and power limit are quantified. Based on sensitivity theory, the dynamic influence mechanism of the key control parameters in the main module is analyzed, and the degree of influence of each parameter on frequency regulation performance is clarified. This research provides theoretical guidance for optimizing frequency regulation strategies in coal-fired units integrated with renewable energy systems.
(This article belongs to the Section Energy Systems)
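The dead band, droop, and power-limit chain whose coupling the abstract analyzes can be written in a few lines. A sketch with illustrative parameter values; only the 0.10 Hz dead band and 4.44% limit come from the abstract, as the thresholds it flags as excessive:

```python
def primary_response(delta_f, p_rated, dead_band=0.05, droop=0.05,
                     limit_frac=0.0444, f_nom=50.0):
    """One unit's primary-frequency-regulation power command (MW) through the
    dead band -> droop -> power-limit chain. Parameter defaults are illustrative."""
    if abs(delta_f) <= dead_band:          # 1) dead band: ignore small deviations
        return 0.0
    effective = delta_f - dead_band if delta_f > 0 else delta_f + dead_band
    dp = -(effective / f_nom) / droop * p_rated   # 2) droop response
    cap = limit_frac * p_rated                    # 3) power limit: clip the output
    return max(-cap, min(cap, dp))

print(primary_response(-0.15, p_rated=300.0))  # -> 12.0 MW for a 0.15 Hz dip
```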

24 pages, 1093 KB  
Article
An Interval Analysis Method for Uncertain Multi-Objective Optimization of Solid Propellant Formulations
by Jiaren Ren, Ran Wei, Futing Bao and Xiao Hou
Aerospace 2025, 12(10), 865; https://doi.org/10.3390/aerospace12100865 - 25 Sep 2025
Abstract
To obtain propellant formulations with superior comprehensive performance and robustness, this study establishes a multi-objective optimization model that accounts for uncertainties. The model adopts a bi-layer structure. The inner layer computes performance bounds to construct uncertainty intervals, which are subsequently transformed into deterministic performance values via interval order relations. The outer layer optimizes component mass fractions using MOEA/D (Multi-objective Evolutionary Algorithm Based on Decomposition) to maximize the deterministic performance. The study leverages Large Language Models (LLMs) as pre-trained optimizers to automate the operator design of MOEA/D. Designers can identify formulations that satisfy the performance requirements and robustness criteria by adjusting uncertainty levels and MOEA/D weight coefficients. Results on the ZDT and UF benchmark suites demonstrate that MOEA/D-LLM achieves approximately a 4.0% improvement in hypervolume values compared to MOEA/D. Additionally, the NEPE propellant optimization case shows that MOEA/D-LLM improves computational speed by about 13.05% and enhances hypervolume values by around 2.7% compared to MOEA/D. The specific impulse increases by 1.11%, the generation of aluminum oxide and hydrogen chloride decreases by approximately 18.43% and 16.40%, respectively, and the impact sensitivity is reduced by about 1.67%.
(This article belongs to the Section Astronautics & Space Science)
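The inner-layer step, collapsing an uncertainty interval into a deterministic objective via an interval order relation, can be illustrated with the common midpoint-plus-radius relation (the paper's exact relation is not given in the abstract):

```python
def deterministic_objective(lo, hi, w=0.5):
    """Collapse an uncertainty interval [lo, hi] into one number to minimize:
    midpoint + w * radius. Wider (less robust) intervals are penalized, and w
    trades nominal performance against robustness. This midpoint-radius form is
    one common interval order relation, used here as a stand-in."""
    return 0.5 * (lo + hi) + w * 0.5 * (hi - lo)

# Same midpoint, different robustness: the narrow interval wins (smaller value).
print(deterministic_objective(9.5, 10.5), deterministic_objective(8.0, 12.0))
```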

11 pages, 898 KB  
Article
Comparison of Aerosol Generation Between Bag Valve and Chest Compression-Synchronized Ventilation During Simulated Cardiopulmonary Resuscitation
by Young Taeck Oh, Choung Ah Lee, Daun Choi and Hang A. Park
J. Clin. Med. 2025, 14(19), 6790; https://doi.org/10.3390/jcm14196790 - 25 Sep 2025
Abstract
Background: Cardiopulmonary resuscitation can generate aerosols, potentially exposing healthcare workers (HCWs) to infection. Bag valve ventilation (BV) is widely used but is prone to aerosol dispersion, whereas chest compression-synchronized ventilation (CCSV) maintains a closed respiratory circuit. In this study, we compared aerosol generation between CCSV and BV during chest compressions following endotracheal intubation in a simulated resuscitation setting. Methods: In a randomized crossover design, 12 sessions each of CCSV and BV were conducted on an intubated manikin undergoing mechanical chest compressions for 10 min. Aerosols with ≤5-μm diameter were generated using a saline nebulizer and measured every minute with a particle counter positioned 50 cm from the chest compression site. Bayesian linear regression of minute-by-minute log-transformed aerosol particle counts was used to estimate group differences, yielding posterior means, 95% credible intervals, and posterior probabilities. Results: The aerosol particle counts increased during the initial 3 min with both methods. Thereafter, the aerosol particle counts with CCSV stabilized, whereas those with BV continued to increase. From 4 to 10 min, the posterior probability that CCSV generated fewer particles exceeded 0.98, peaking at 9 min. Both peak and time-averaged log-transformed aerosol particle counts were significantly lower with CCSV than with BV (p = 0.010 and p = 0.020, respectively). Conclusions: In this simulation, CCSV generated significantly fewer aerosols than BV did during chest compressions, with differences emerging after 4 min and persisting thereafter. Thus, CCSV may reduce aerosol exposure of HCWs, supporting its early implementation during resuscitation in infectious disease settings.
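The key posterior quantity, the probability that CCSV produces fewer particles at a given minute, can be approximated crudely from two samples of log counts. A sketch under a flat-prior normal approximation (the paper fits a full Bayesian minute-by-minute regression; this is only the single-minute group comparison):

```python
import math
import numpy as np

def prob_fewer(log_ccsv, log_bv):
    """Posterior probability that the mean log aerosol count is lower with CCSV
    than with BV, under a crude flat-prior normal approximation."""
    d = np.mean(log_bv) - np.mean(log_ccsv)
    se = math.sqrt(np.var(log_ccsv, ddof=1) / len(log_ccsv)
                   + np.var(log_bv, ddof=1) / len(log_bv))
    return 0.5 * (1.0 + math.erf(d / se / math.sqrt(2.0)))

rng = np.random.default_rng(1)
ccsv = rng.normal(5.0, 0.3, 12)   # hypothetical log counts, 12 sessions each
bv = rng.normal(5.4, 0.3, 12)
print(prob_fewer(ccsv, bv))       # > 0.98 for a separation like this
```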

21 pages, 2463 KB  
Article
Probabilistic HVAC Load Forecasting Method Based on Transformer Network Considering Multiscale and Multivariable Correlation
by Tingzhe Pan, Zean Zhu, Hongxuan Luo, Chao Li, Xin Jin, Zijie Meng and Xinlei Cai
Energies 2025, 18(19), 5073; https://doi.org/10.3390/en18195073 - 24 Sep 2025
Abstract
Accurate load forecasting for community-level heating, ventilation, and air conditioning (HVAC) plays an important role in determining an efficient strategy for demand response (DR) and the operation of the power grid. However, community-level HVAC aggregates various building-level HVAC systems, whose usage patterns and rated parameters vary, making load forecasting challenging. To this end, a novel deep learning model, the multiscale and cross-variable transformer (MSCVFormer), is proposed to achieve accurate community-level probabilistic HVAC load forecasting by capturing the varied influences of multiple variables on the load pattern, providing effective information for grid operators to develop DR and operation strategies. The approach combines multiscale attention (MSA) and cross-variable attention (CVA) mechanisms to capture the complex temporal patterns of the aggregated load. Specifically, by embedding time series decomposition into the self-attention mechanism, MSA enables the model to capture the critical features of the time series while considering the correlation between multiscale time series. Then, CVA calculates the correlations between the exogenous variables and the aggregated load, explicitly utilizing the exogenous variables to enhance the model’s understanding of the temporal pattern. This differs from the usual methods, which do not fully consider the relationship between the exogenous variables and the aggregated load. To test the effectiveness of the proposed method, two datasets from Germany and China are used in the experiments. Compared to the benchmarks, the proposed method achieves superior probabilistic load forecasting results: the deviation of the prediction interval coverage probability (PICP) from the nominal coverage and the prediction interval normalized averaged width (PINAW) are reduced by 46.7% and 5.25%, respectively.
(This article belongs to the Topic Advances in Power Science and Technology, 2nd Edition)
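The two reported evaluation metrics have compact standard definitions; a sketch using the common conventions for PICP and PINAW, which the abstract does not spell out:

```python
import numpy as np

def picp(y, lower, upper):
    """Prediction interval coverage probability: share of targets inside [lower, upper]."""
    return float(np.mean((y >= lower) & (y <= upper)))

def pinaw(y, lower, upper):
    """Prediction interval normalized averaged width (normalized by the target range)."""
    return float(np.mean(upper - lower) / (np.max(y) - np.min(y)))

y = np.array([1.0, 2.0, 3.0, 4.0])
lo, hi = y - 0.5, y + 0.5                 # a toy 1-unit-wide interval
print(picp(y, lo, hi), pinaw(y, lo, hi))  # -> 1.0, ~0.333
```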

11 pages, 1206 KB  
Article
Analysis of Strain Hardening Stages of AISI 316 LN Stainless Steel Under Cold Rolling Conditions
by Tibor Kvačkaj, Jana Bidulská, Ľuboš Kaščák, Alica Fedoríková and Róbert Bidulský
Metals 2025, 15(10), 1060; https://doi.org/10.3390/met15101060 - 23 Sep 2025
Abstract
In the present investigation, stress–strain curves and strain hardening rates of samples rolled at ambient temperature with thickness reductions of 0%, 10%, 30%, and 50% were studied. Static tensile tests at ambient temperature were performed on the processed samples. The engineering stress–strain curves were transformed into true stress–strain curves and processed numerically by taking the first derivative (θ = dσ/dε). Dependencies θ = f(ε_T) characterizing the strain hardening rates were derived. From the true stress–strain curves and strain hardening rates, three stages describing different rates of strain hardening were identified. A rapid increase in true stress and a rapid decrease in the strain hardening rate were observed in Stage I. Quasi-linear dependencies with increasing true stress but a slow, gradual decline in the strain hardening rate were obtained in Stage II. Slowly increasing true strains, accompanied by a decrease in strain hardening rates and their transition to softening, led to the formation of plastic instability and necking in Stage III. Depending on the cold rolling deformation, the endpoints of the strain hardening rate lie in the following intervals: θ_Stage I ∈ ⟨1904; 3032⟩ MPa, θ_Stage II ∈ ⟨906; −873⟩ MPa, θ_Stage III ∈ ⟨−144; −11,979⟩ MPa. While in Stage I and Stage II the plastic deformation mechanism is predominantly dislocation slip, in Stage III the plastic deformation mechanism is twinning accompanied by dislocation slip.
(This article belongs to the Special Issue Numerical Simulation and Experimental Research of Metal Rolling)
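The processing chain in this abstract (engineering-to-true stress–strain conversion, then θ = dσ/dε by differentiation) is standard; a minimal numerical sketch with a toy hardening curve, not the AISI 316 LN data:

```python
import numpy as np

def strain_hardening_rate(eng_strain, eng_stress):
    """Engineering -> true stress-strain (valid up to necking), then the
    strain hardening rate theta = d(sigma_T)/d(eps_T) by numerical differentiation."""
    eps_t = np.log1p(eng_strain)            # eps_T = ln(1 + eps_E)
    sig_t = eng_stress * (1 + eng_strain)   # sigma_T = sigma_E * (1 + eps_E)
    theta = np.gradient(sig_t, eps_t)       # first derivative along the curve
    return eps_t, sig_t, theta

# Toy power-law hardening curve (illustrative values, MPa):
eng_strain = np.linspace(0.002, 0.4, 200)
eng_stress = 300.0 + 600.0 * eng_strain**0.3
eps_t, sig_t, theta = strain_hardening_rate(eng_strain, eng_stress)
print(theta[0], theta[-1])  # theta falls as deformation proceeds
```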

33 pages, 1023 KB  
Article
Forecasting Renewable Power Generation by Employing a Probabilistic Accumulation Non-Homogeneous Grey Model
by Peng Zhang, Jinsong Hu, Kelong Zheng, Wenqing Wu and Xin Ma
Energies 2025, 18(18), 5037; https://doi.org/10.3390/en18185037 - 22 Sep 2025
Abstract
Accurately predicting annual renewable power generation is critical for advancing energy structure transformation, ensuring energy security, and fostering sustainable development. In this study, a probabilistic non-homogeneous grey model (PNGM) is proposed to address this forecasting challenge. Firstly, the proposed model is constructed by integrating a Probabilistic Accumulation Generation Operator with the classical non-homogeneous grey model. Secondly, the Whale Optimization Algorithm is utilized to tune the parameters of the operator, thereby enhancing the extraction of valid information required for modeling. Furthermore, the superiority of the new model in information extraction and predictive performance is validated using synthetic datasets. Finally, it is applied to forecast renewable power generation in the United States, Russia, and India, where it exhibits significantly superior performance compared to the comparative models. Additionally, this study provides projections of renewable power generation for the United States, Russia, and India from 2025 to 2030, with uncertainty intervals of the predicted values estimated using the Bootstrap method. These results can provide reliable decision support for energy sectors and policymakers.
(This article belongs to the Special Issue The Future of Renewable Energy: 2nd Edition)
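For orientation, the classical non-homogeneous grey model that the PNGM extends fits x⁰(k) + a·z¹(k) = bk + c on the accumulated series. A sketch with the plain cumulative-sum 1-AGO standing in for the paper's probabilistic accumulation operator (whose parameters the paper tunes with the Whale Optimization Algorithm):

```python
import numpy as np

def ngm_fit_predict(x0, horizon):
    """Classical NGM(1,1,k,c): fit x0(k) + a*z1(k) = b*k + c by least squares
    on the 1-AGO series, then forecast by the recursive discrete solution."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                        # 1-AGO accumulation
    z1 = 0.5 * (x1[1:] + x1[:-1])             # background values, k = 2..n
    k = np.arange(2, n + 1)
    B = np.column_stack([-z1, k, np.ones(n - 1)])
    a, b, c = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    # Recursion: x1(k)*(1 + a/2) = x1(k-1)*(1 - a/2) + b*k + c
    x1_hat = [x0[0]]
    for kk in range(2, n + horizon + 1):
        x1_hat.append(((1 - 0.5 * a) * x1_hat[-1] + b * kk + c) / (1 + 0.5 * a))
    x0_hat = np.diff(np.array(x1_hat), prepend=0.0)
    return x0_hat[n:]                         # out-of-sample forecasts

# Toy trending series (illustrative, not the paper's generation data):
print(ngm_fit_predict([100, 112, 125, 139, 154, 170], horizon=3))
```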

18 pages, 1694 KB  
Article
FAIR-Net: A Fuzzy Autoencoder and Interpretable Rule-Based Network for Ancient Chinese Character Recognition
by Yanling Ge, Yunmeng Zhang and Seok-Beom Roh
Sensors 2025, 25(18), 5928; https://doi.org/10.3390/s25185928 - 22 Sep 2025
Abstract
Ancient Chinese scripts—including oracle bone carvings, bronze inscriptions, stone steles, Dunhuang scrolls, and bamboo slips—are rich in historical value but often degraded due to centuries of erosion, damage, and stylistic variability. These issues severely hinder manual transcription and render conventional OCR techniques inadequate, as they are typically trained on modern printed or handwritten text and lack interpretability. To tackle these challenges, we propose FAIR-Net, a hybrid architecture that combines the unsupervised feature learning capacity of a deep autoencoder with the semantic transparency of a fuzzy rule-based classifier. In FAIR-Net, the deep autoencoder first compresses high-resolution character images into low-dimensional, noise-robust embeddings. These embeddings are then passed into a Fuzzy Neural Network (FNN), whose hidden layer leverages Fuzzy C-Means (FCM) clustering to model soft membership degrees and generate human-readable fuzzy rules. The output layer uses Iteratively Reweighted Least Squares Estimation (IRLSE) combined with a Softmax function to produce probabilistic predictions, with all weights constrained as linear mappings to maintain model transparency. We evaluate FAIR-Net on CASIA-HWDB1.0, HWDB1.1, and ICDAR 2013 CompetitionDB, where it achieves a recognition accuracy of 97.91%, significantly outperforming baseline CNNs (p < 0.01, Cohen’s d > 0.8) while maintaining the tightest confidence interval (96.88–98.94%) and lowest standard deviation (±1.03%). Additionally, FAIR-Net reduces inference time to 25 s, improving processing efficiency by 41.9% over AlexNet and up to 98.9% over CNN-Fujitsu, while preserving >97.5% accuracy across evaluations. To further assess generalization to historical scripts, FAIR-Net was tested on the Ancient Chinese Character Dataset (9233 classes; 979,907 images), achieving 83.25% accuracy—slightly higher than ResNet101 but 2.49% lower than SwinT-v2-small—while reducing training time by over 5.5× compared to transformer-based baselines. Fuzzy rule visualization confirms enhanced robustness to glyph ambiguities and erosion. Overall, FAIR-Net provides a practical, interpretable, and highly efficient solution for the digitization and preservation of ancient Chinese character corpora.
(This article belongs to the Section Sensing and Imaging)
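The soft membership degrees that FAIR-Net's hidden layer derives from Fuzzy C-Means have a closed form, u_ij = 1 / Σ_k (d_ij/d_kj)^(2/(m−1)). A sketch of just that step, assuming the cluster centers are already given (this is not FAIR-Net's full pipeline):

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Fuzzy C-Means soft memberships u[i, j] of sample j in cluster i:
    u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1))."""
    d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)  # (c, n)
    d = np.fmax(d, 1e-12)                                            # guard zeros
    ratio = (d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0))     # (c, c, n)
    return 1.0 / ratio.sum(axis=1)                                   # columns sum to 1

X = np.array([[0.0, 0.0], [0.6, 0.4], [5.0, 5.0]])   # toy embeddings
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
print(fcm_memberships(X, centers).round(3))
```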

32 pages, 1238 KB  
Article
GRU-BERT for NILM: A Hybrid Deep Learning Architecture for Load Disaggregation
by Annysha Huzzat, Ahmed S. Khwaja, Ali A. Alnoman, Bhagawat Adhikari, Alagan Anpalagan and Isaac Woungang
AI 2025, 6(9), 238; https://doi.org/10.3390/ai6090238 - 22 Sep 2025
Abstract
Non-Intrusive Load Monitoring (NILM) aims to disaggregate a household’s total aggregated power consumption into appliance-level usage, enabling intelligent energy management without the need for intrusive metering. While deep learning has improved NILM significantly, existing NILM models struggle to capture load patterns across both longer time intervals and the subtle timing of appliances with brief or overlapping usage patterns. In this paper, we propose a novel GRU+BERT hybrid architecture, exploring both unidirectional (GRU+BERT) and bidirectional (Bi-GRU+BERT) variants. Our model combines Gated Recurrent Units (GRUs), which capture sequential temporal dependencies, with Bidirectional Encoder Representations from Transformers (BERT), a transformer-based model that captures rich contextual information across the sequence. The bidirectional variant (Bi-GRU+BERT) processes input sequences in both forward (past to future) and backward (future to past) directions, enabling the model to learn relationships between power consumption values at different time steps more effectively. The unidirectional variant (GRU+BERT) provides an alternative suited for appliances with structured, sequential multi-phase usage patterns, such as dishwashers. By placing the Bi-GRU or GRU layer before BERT, our models first capture local time-based load patterns and then use BERT’s self-attention to understand the broader contextual relationships. This design addresses key limitations of both standalone recurrent and transformer-based models, offering improved performance on transient and irregular appliance loads. Evaluated on the UK-DALE and REDD datasets, the proposed Bi-GRU+BERT and GRU+BERT models show competitive performance compared to several state-of-the-art NILM models while maintaining a comparable model size and training time, demonstrating their practical applicability for real-time energy disaggregation, including potential edge and cloud deployment scenarios.
(This article belongs to the Section AI Systems: Theory and Applications)
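The layer ordering the abstract argues for (recurrent features first, self-attention second) is easy to prototype. A skeletal PyTorch sketch in which a plain TransformerEncoder stands in for BERT (an assumption made to keep the sketch self-contained; the paper's BERT block, windowing, and training setup are not reproduced):

```python
import torch
import torch.nn as nn

class GRUTransformerNILM(nn.Module):
    """Skeletal Bi-GRU -> self-attention disaggregator: input is the aggregate
    mains signal, output is one appliance's estimated power per time step."""
    def __init__(self, d_model=64, heads=4, layers=2):
        super().__init__()
        self.gru = nn.GRU(1, d_model // 2, batch_first=True, bidirectional=True)
        layer = nn.TransformerEncoderLayer(d_model, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):          # x: (batch, seq, 1) aggregate power
        h, _ = self.gru(x)         # local temporal features first ...
        h = self.encoder(h)        # ... then global self-attention context
        return self.head(h)        # (batch, seq, 1) appliance estimate

model = GRUTransformerNILM()
print(model(torch.randn(8, 128, 1)).shape)  # torch.Size([8, 128, 1])
```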

11 pages, 459 KB  
Article
Elevated Serum Trimethylamine N-Oxide Predicts Impaired Vascular Reactivity in Patients with Hypertension
by I-Min Su, Ji-Hung Wang, Chin-Hung Liu and Bang-Gee Hsu
Diagnostics 2025, 15(18), 2400; https://doi.org/10.3390/diagnostics15182400 - 20 Sep 2025
Abstract
Background/Objectives: Trimethylamine N-oxide (TMAO), a gut microbiota-derived metabolite influenced by diet, has been linked to cardiovascular disease. Endothelial dysfunction, an early sign of vascular damage, is common in hypertension. This study examined the relationship between serum TMAO levels and endothelial function, assessed by the vascular reactivity index (VRI), in patients with hypertension. Methods: In total, 110 patients with hypertension were enrolled. Fasting serum TMAO was measured using high-performance liquid chromatography–mass spectrometry. Endothelial function was evaluated via digital thermal monitoring, with VRI categorized as good (>2.0), intermediate (1.0–1.9), or poor (<1.0). Results: Of the participants, 10 (9.1%) exhibited poor vascular reactivity, 57 (51.8%) had intermediate reactivity, and 43 (39.1%) exhibited good vascular reactivity. Poor reactivity correlated with older age (p = 0.010), higher total cholesterol (p = 0.007), higher low-density lipoprotein cholesterol (p = 0.009), and higher TMAO levels (p < 0.001). In multivariate forward stepwise linear regression, the log-transformed TMAO level (log-TMAO) remained independently and inversely associated with VRI (p < 0.001). Logistic regression analyses demonstrated that elevated TMAO concentrations were significantly associated with an increased likelihood of vascular reactivity dysfunction (intermediate and poor groups combined; odds ratio [OR] = 1.10, 95% confidence interval [CI]: 1.047–1.155; p < 0.001) and, in particular, with poor vascular reactivity (OR = 1.58, 95% CI: 1.002–2.492; p = 0.049). Conclusions: Elevated serum TMAO is independently associated with endothelial dysfunction in hypertension.
(This article belongs to the Special Issue New Advances in Cardiovascular Risk Prediction)
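The reported odds ratios and confidence intervals follow from the logistic-regression coefficients by exponentiation. A sketch with a hypothetical coefficient and standard error chosen to land near the reported OR of 1.10 (these are not the paper's fitted values):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio with 95% CI from a logistic-regression coefficient and its SE."""
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))

# Illustrative beta/SE only; output lands near OR = 1.10 (95% CI ~1.05-1.16).
print(odds_ratio_ci(0.095, 0.026))
```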

17 pages, 3464 KB  
Article
Advanced Spectroscopic and Thermoanalytical Quantification of LLDPE in Mealworm Frass: A Multitechnique Approach
by Encarnación Martínez-Sabater, Rosa Peñalver, Margarita Ros, José A. Pascual, Raul Moral and Frutos C. Marhuenda-Egea
Appl. Sci. 2025, 15(18), 10244; https://doi.org/10.3390/app151810244 - 20 Sep 2025
Abstract
Plastic pollution from polyethylene-based materials is a critical environmental concern due to their high persistence. Here, we report the first proof-of-concept application of a multitechnique analytical framework for quantifying linear low-density polyethylene (LLDPE) in Tenebrio molitor frass. Artificially enriched frass–LLDPE mixtures were analyzed using thermogravimetric analysis (TGA), TGA coupled with Fourier-transform infrared spectroscopy (FTIR) and mass spectrometry (MS), TGA under an inert atmosphere, and solid-state ¹³C nuclear magnetic resonance spectroscopy with cross-polarization and magic-angle spinning (¹³C CP-MAS NMR) combined with interval Partial Least Squares (iPLS) modeling. Thermal methods provided insight into decomposition pathways but showed reduced specificity at <1% w/w due to matrix interference. CP-MAS NMR offered matrix-independent quantification, with characteristic signals in the 10–45 ppm region and a calculated LOD and LOQ of 0.173% and 0.525% w/w, respectively. The LOQ lies within the reported ingestion range for T. molitor (0.8–3.2% w/w in frass), confirming biological relevance. This validated workflow establishes CP-MAS NMR as the most robust tool for quantifying polyethylene residues in complex matrices and provides a foundation for in vivo biodegradation studies and environmental monitoring.
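The reported LOD/LOQ pair (0.173% and 0.525% w/w, a ratio of roughly 3.3 : 10) is consistent with the common ICH-style convention LOD = 3.3σ/S, LOQ = 10σ/S, although the paper's exact convention is an assumption here:

```python
def lod_loq(sigma, slope):
    """ICH-style limits of detection/quantification from a calibration line:
    sigma = SD of the response (e.g., regression residuals), slope = sensitivity.
    The 3.3/10 multipliers are the common ICH convention (assumed, not stated)."""
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# With sigma/slope ~ 0.0525 (% w/w units), this reproduces the reported pair:
print(lod_loq(0.0525, 1.0))  # -> (~0.173, ~0.525) % w/w
```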
