Search Results (451)

Search Parameters:
Keywords = independent vector analysis

30 pages, 2261 KiB  
Article
Multilayer Perceptron Mapping of Subjective Time Duration onto Mental Imagery Vividness and Underlying Brain Dynamics: A Neural Cognitive Modeling Approach
by Matthew Sheculski and Amedeo D’Angiulli
Mach. Learn. Knowl. Extr. 2025, 7(3), 82; https://doi.org/10.3390/make7030082 - 13 Aug 2025
Viewed by 232
Abstract
According to a recent experimental phenomenology–information processing theory, the sensory strength, or vividness, of visual mental images self-reported by human observers reflects the intensive variation in subjective time duration during the process of generation of said mental imagery. The primary objective of this study was to test the hypothesis that a biologically plausible essential multilayer perceptron (MLP) architecture can validly map the phenomenological categories of subjective time duration onto levels of subjectively self-reported vividness. A secondary objective was to explore whether this type of neural network cognitive modeling approach can give insight into plausible underlying large-scale brain dynamics. To achieve these objectives, vividness self-reports and reaction times from a previously collected database were reanalyzed using multilayered perceptron network models. The input layer consisted of six levels representing vividness self-reports and a reaction time cofactor. A single hidden layer consisted of three nodes representing the salience, task positive, and default mode networks. The output layer consisted of five levels representing Vittorio Benussi’s subjective time categories. Across different models of networks, Benussi’s subjective time categories (Level 1 = very brief, 2 = brief, 3 = present, 4 = long, 5 = very long) were predicted by visual imagery vividness level 1 (=no image) to 5 (=very vivid) with over 90% success in classification accuracy, precision, recall, and F1-score. This accuracy level was maintained after 5-fold cross validation. Linear regressions, Welch’s t-test for independent coefficients, and Pearson’s correlation analysis were applied to the resulting hidden node weight vectors, obtaining evidence for strong correlation and anticorrelation between nodes. This study successfully mapped Benussi’s five levels of subjective time categories onto the activation patterns of a simple MLP, providing a novel computational framework for experimental phenomenology. Our results revealed structured, complex dynamics between the task positive network (TPN), the default mode network (DMN), and the salience network (SN), suggesting that the neural mechanisms underlying temporal consciousness involve flexible network interactions beyond the traditional triple network model. Full article
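
The 6-input, 3-hidden-node, 5-output architecture described here is concrete enough to sketch. Below is a minimal, hypothetical scikit-learn version with 5-fold cross-validation; the arrays are random stand-ins, not the study's vividness database, and the layer sizes are the only details taken from the abstract.

```python
# Minimal sketch (not the authors' code) of the 6-input / 3-hidden / 5-output MLP
# described above, using scikit-learn with placeholder data.
import numpy as np
from sklearn.model_selection import cross_validate
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
vividness = rng.integers(0, 5, 500)                                  # levels 1-5 encoded as 0-4
X = np.hstack([np.eye(5)[vividness], rng.normal(size=(500, 1))])     # + reaction-time cofactor
y = rng.integers(1, 6, 500)                                          # time categories 1 (very brief) .. 5 (very long)

# One hidden layer with three nodes, standing in for the salience, task-positive
# and default-mode network units of the paper's architecture.
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(3,), max_iter=2000, random_state=0))
cv = cross_validate(model, X, y, cv=5, scoring=["accuracy", "f1_macro"])
print(cv["test_accuracy"].mean(), cv["test_f1_macro"].mean())
```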

15 pages, 65226 KiB  
Article
Optimization of Water Tank Shape in Terms of Firefighting Vehicle Stability
by Jaroslav Matej and Michaela Hnilicová
Appl. Syst. Innov. 2025, 8(4), 112; https://doi.org/10.3390/asi8040112 - 11 Aug 2025
Viewed by 96
Abstract
In this work, we present the shape optimization of a 2000 L water tank placed behind the rear axle of a forestry skidder. The main criterion is the static stability of the vehicle. The purpose of the research is to decrease the impact of the tank on the stability of the vehicle. Stability is quantified as the distances between the gravity vector and the vectors of a stability triangle. The tank is divided into small elements and their impact on stability is evaluated independently. Then, the gravity vector, placed in the center of gravity of the vehicle with the tank, combines the gravities of the vehicle and the tank composed of as many elements as required for the desired volume. The Python 3.13 programming language is used to implement the solution. The results for various shapes of the tank are displayed in the form of heatmaps. A slope angle of 20 degrees is used for the analysis. The results show that the longitudinal or lateral stability can be improved by shape modifications of the tank. The most interesting output is the final shape of the tank that improves the terrain accessibility of the vehicle. The optimization method is universal and can also be used for different vehicles, tank placements, and auxiliary devices added in general positions. Full article
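
The stability criterion described above (distances between the gravity vector and the stability triangle) can be illustrated with a short geometric sketch. Everything below is assumed for illustration: the triangle coordinates, masses, and positions are placeholders rather than the skidder's actual geometry, and the per-element tank decomposition is reduced to a single lumped tank mass.

```python
# Hedged sketch: the stability margin is taken here as the perpendicular distance
# from the combined centre of gravity (vehicle plus tank) to each edge of the
# stability triangle. All numbers are illustrative assumptions.
import numpy as np

def edge_distances(cog, triangle):
    """Perpendicular distance from a 2D point to each edge of a triangle."""
    dists = []
    for i in range(3):
        a, b = triangle[i], triangle[(i + 1) % 3]
        cross = (b[0] - a[0]) * (cog[1] - a[1]) - (b[1] - a[1]) * (cog[0] - a[0])
        dists.append(abs(cross) / np.hypot(b[0] - a[0], b[1] - a[1]))
    return np.array(dists)

triangle = np.array([[0.0, 0.0], [2.4, 0.0], [1.2, 3.6]])    # hypothetical contact points (m)

# Combine the vehicle's gravity vector with that of a 2000 kg water load placed
# behind the rear axle; shifting tank elements moves this combined point.
masses = np.array([9000.0, 2000.0])                          # vehicle, full tank (kg)
positions = np.array([[1.2, 1.5], [1.2, -0.6]])              # plan-view centres of gravity (m)
cog = (masses[:, None] * positions).sum(axis=0) / masses.sum()

print("stability margins per triangle edge (m):", edge_distances(cog, triangle).round(2))
```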

35 pages, 1916 KiB  
Review
The Role of Geospatial Techniques for Renewable Hydrogen Value Chain: A Systematic Review of Current Status, Challenges and Future Developments
by Gustavo Hernández-Herráez, Néstor Velaz-Acera, Susana Del Pozo and Susana Lagüela
Appl. Sci. 2025, 15(16), 8777; https://doi.org/10.3390/app15168777 - 8 Aug 2025
Viewed by 209
Abstract
The clean energy transition has elevated renewable hydrogen as a key energy vector, yet challenges in cost-competitiveness and infrastructure planning persist. This study conducts a PRISMA-based systematic review of recent geospatial applications across the hydrogen value chain—production, storage, transport, and end-use. Bibliometric analysis reveals a strong focus on production (48%), with less attention to storage (12%) and end-uses (18%). Geographic Information Systems (GIS) dominate (80%), primarily for siting, potential assessment, and infrastructure planning, while other techniques such as geophysics and real-time monitoring are emerging. Identified research gaps include fragmented and low-resolution data, lack of harmonization, and high computational demands, which are independent from the phase in the hydrogen value chain. Promising areas for future research include hydrological resource mapping for electrolysis, offshore infrastructure clustering, and spatialized levelized cost modeling. The review concludes with a call for high-resolution, AI-enabled geospatial frameworks to support automated, location-specific decision-making and scalable renewable hydrogen deployment. Full article

17 pages, 1306 KiB  
Article
Rapid Salmonella Serovar Classification Using AI-Enabled Hyperspectral Microscopy with Enhanced Data Preprocessing and Multimodal Fusion
by MeiLi Papa, Siddhartha Bhattacharya, Bosoon Park and Jiyoon Yi
Foods 2025, 14(15), 2737; https://doi.org/10.3390/foods14152737 - 5 Aug 2025
Viewed by 279
Abstract
Salmonella serovar identification typically requires multiple enrichment steps using selective media, consuming considerable time and resources. This study presents a rapid, culture-independent method leveraging artificial intelligence (AI) to classify Salmonella serovars from rich hyperspectral microscopy data. Five serovars (Enteritidis, Infantis, Kentucky, Johannesburg, 4,[5],12:i:-) were analyzed from samples prepared using only sterilized de-ionized water. Hyperspectral data cubes were collected to generate single-cell spectra and RGB composite images representing the full microscopy field. Data analysis involved two parallel branches followed by multimodal fusion. The spectral branch compared manual feature selection with data-driven feature extraction via principal component analysis (PCA), followed by classification using conventional machine learning models (i.e., k-nearest neighbors, support vector machine, random forest, and multilayer perceptron). The image branch employed a convolutional neural network (CNN) to extract spatial features directly from images without predefined morphological descriptors. Using PCA-derived spectral features, the highest performing machine learning model achieved 81.1% accuracy, outperforming manual feature selection. CNN-based classification using image features alone yielded lower accuracy (57.3%) in this serovar-level discrimination. In contrast, a multimodal fusion model combining spectral and image features improved accuracy to 82.4% on the unseen test set while reducing overfitting on the train set. This study demonstrates that AI-enabled hyperspectral microscopy with multimodal fusion can streamline Salmonella serovar identification workflows. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) and Machine Learning for Foods)
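
As a rough illustration of the fusion step, the sketch below concatenates PCA-reduced spectra with precomputed image embeddings and trains a single conventional classifier, as the abstract describes. All shapes, feature counts, and labels are hypothetical, and the CNN branch is represented only by placeholder embeddings.

```python
# Sketch of late fusion under stated assumptions: PCA on the spectral columns,
# passthrough of image embeddings, then one conventional classifier.
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_cells, n_bands, n_img = 300, 89, 32
spectra = rng.normal(size=(n_cells, n_bands))    # single-cell spectra (hypothetical band count)
img_feats = rng.normal(size=(n_cells, n_img))    # CNN image embeddings (precomputed stand-ins)
labels = rng.integers(0, 5, n_cells)             # five serovars

X = np.hstack([spectra, img_feats])
# PCA only on the spectral columns; image features pass through untouched.
fusion = ColumnTransformer([("spectral_pca", PCA(n_components=10), list(range(n_bands)))],
                           remainder="passthrough")
clf = make_pipeline(fusion, StandardScaler(), SVC(kernel="rbf"))

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, stratify=labels, random_state=1)
print("held-out accuracy:", clf.fit(X_tr, y_tr).score(X_te, y_te))
```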

17 pages, 1707 KiB  
Article
A Structural Causal Model Ontology Approach for Knowledge Discovery in Educational Admission Databases
by Bern Igoche Igoche, Olumuyiwa Matthew and Daniel Olabanji
Knowledge 2025, 5(3), 15; https://doi.org/10.3390/knowledge5030015 - 4 Aug 2025
Viewed by 335
Abstract
Educational admission systems, particularly in developing countries, often suffer from opaque decision processes, unstructured data, and limited analytic insight. This study proposes a novel methodology that integrates structural causal models (SCMs), ontological modeling, and machine learning to uncover and apply interpretable knowledge from an admission database. Using a dataset of 12,043 records from Benue State Polytechnic, Nigeria, we demonstrate this approach as a proof of concept by constructing a domain-specific SCM ontology, validating it using conditional independence testing (CIT), and extracting features for predictive modeling. Five classifiers were evaluated using stratified 10-fold cross-validation: Logistic Regression, Decision Tree, Random Forest, K-Nearest Neighbors (KNN), and Support Vector Machine (SVM). SVM and KNN achieved the highest classification accuracy (92%), with precision and recall scores reaching 95% and 100%, respectively. Feature importance analysis revealed ‘mode of entry’ and ‘current qualification’ as key causal factors influencing admission decisions. This framework provides a reproducible pipeline that combines semantic representation and empirical validation, offering actionable insights for institutional decision-makers. Comparative benchmarking, ethical considerations, and model calibration are integrated to enhance methodological transparency. Limitations, including reliance on single-institution data, are acknowledged, and directions for generalizability and explainable AI are proposed. Full article
(This article belongs to the Special Issue Knowledge Management in Learning and Education)
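
The evaluation protocol named above (five classifiers under stratified 10-fold cross-validation) maps onto a few lines of scikit-learn. The sketch below uses a synthetic dataset in place of the ontology-derived admission features, which are not reproduced in this listing.

```python
# Minimal sketch of the cross-validated comparison of the five named classifiers.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the extracted admission features and decision labels.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=5, random_state=0)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: {acc.mean():.3f} +/- {acc.std():.3f}")
```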

13 pages, 4134 KiB  
Communication
An Improved Agrobacterium-Mediated Transformation Method for an Important Fresh Fruit: Kiwifruit (Actinidia deliciosa)
by Chun-Lan Piao, Mengdou Ding, Yongbin Gao, Tao Song, Ying Zhu and Min-Long Cui
Plants 2025, 14(15), 2353; https://doi.org/10.3390/plants14152353 - 31 Jul 2025
Viewed by 373
Abstract
Genetic transformation is an essential tool for investigating gene function and editing genomes. Kiwifruit, recognized as a significant global fresh fruit crop, holds considerable economic and nutritional importance. However, current genetic transformation techniques for kiwifruit are impeded by low efficiency, lengthy culture durations (a minimum of six months), and substantial labor requirements. In this research, we established an efficient system for shoot regeneration and the stable genetic transformation of the ‘Hayward’ cultivar, utilizing leaf explants in conjunction with two strains of Agrobacterium that harbor the expression vector pBI121-35S::GFP, which contains the green fluorescent protein (GFP) gene as a visible marker within the T-DNA region. Our results show that 93.3% of leaf explants responded positively to the regeneration medium, producing multiple independent adventitious shoots around the explants within a six-week period. Furthermore, over 71% of kanamycin-resistant plantlets exhibited robust GFP expression, and the entire transformation process was completed within four months of culture. Southern blot analysis confirmed the stable integration of GFP into the genome, while RT-PCR and fluorescence microscopy validated the sustained expression of GFP in mature plants. This efficient protocol for regeneration and transformation provides a solid foundation for micropropagation and the enhancement of desirable traits in kiwifruit through overexpression and gene silencing techniques. Full article
(This article belongs to the Special Issue Plant Transformation and Genome Editing)

33 pages, 4670 KiB  
Article
Universal Prediction of CO2 Adsorption on Zeolites Using Machine Learning: A Comparative Analysis with Langmuir Isotherm Models
by Emrah Kirtil
ChemEngineering 2025, 9(4), 80; https://doi.org/10.3390/chemengineering9040080 - 28 Jul 2025
Viewed by 312
Abstract
The global atmospheric concentration of carbon dioxide (CO2) has exceeded 420 ppm. Adsorption-based carbon capture technologies offer energy-efficient, sustainable solutions. Relying on classical adsorption models like Langmuir to predict CO2 uptake presents limitations due to the need for case-specific parameter fitting. To address this, the present study introduces a universal machine learning (ML) framework using multiple algorithms—Generalized Linear Model (GLM), Feed-forward Multilayer Perceptron (DL), Decision Tree (DT), Random Forest (RF), Support Vector Machine (SVM), and Gradient Boosted Trees (GBT)—to reliably predict CO2 adsorption capacities across diverse zeolite structures and conditions. By compiling over 5700 experimentally measured adsorption data points from 71 independent studies, this approach systematically incorporates critical factors including pore size, Si/Al ratio, cation type, temperature, and pressure. Rigorous cross-validation confirmed the superior performance of the GBT model (R2 = 0.936, RMSE = 0.806 mmol/g), which outperformed the other ML models and provided performance comparable to classical Langmuir model predictions without separate parameter calibration. Feature importance analysis identified pressure, Si/Al ratio, and cation type as dominant influences on adsorption performance. Overall, this ML-driven methodology demonstrates substantial promise for accelerating material discovery, optimization, and practical deployment of zeolite-based CO2 capture technologies. Full article
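
A hedged sketch of the gradient-boosted setup follows: the named input factors (pressure, temperature, Si/Al ratio, pore size, cation type) feed a GradientBoostingRegressor under cross-validation. The data frame and its Langmuir-like toy target are synthetic placeholders, not the compiled 5700-point dataset.

```python
# Illustrative sketch (not the author's implementation) of a gradient-boosted
# model over the factors listed above, with a synthetic Langmuir-like target.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
n = 800
df = pd.DataFrame({
    "pressure_bar": rng.uniform(0.01, 10.0, n),
    "temperature_K": rng.uniform(273.0, 373.0, n),
    "si_al_ratio": rng.uniform(1.0, 300.0, n),
    "pore_size_A": rng.uniform(3.0, 8.0, n),
    "cation": rng.choice(["Na", "K", "Ca", "Li"], n),
})
y = 3.0 * df["pressure_bar"] / (1.0 + df["pressure_bar"]) + rng.normal(0, 0.2, n)  # toy uptake (mmol/g)

pre = ColumnTransformer([("cation", OneHotEncoder(), ["cation"])], remainder="passthrough")
gbt = make_pipeline(pre, GradientBoostingRegressor(random_state=0))
print("5-fold CV R2:", round(cross_val_score(gbt, df, y, cv=5, scoring="r2").mean(), 3))
```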

20 pages, 1461 KiB  
Article
Vulnerability-Based Economic Loss Rate Assessment of a Frame Structure Under Stochastic Sequence Ground Motions
by Zheng Zhang, Yunmu Jiang and Zixin Liu
Buildings 2025, 15(15), 2584; https://doi.org/10.3390/buildings15152584 - 22 Jul 2025
Viewed by 264
Abstract
Modeling mainshock–aftershock ground motions is essential for seismic risk assessment, especially in regions experiencing frequent earthquakes. Recent studies have often employed Copula-based joint distributions or machine learning techniques to simulate the statistical dependency between mainshock and aftershock parameters. While effective at capturing nonlinear correlations, these methods are typically black box in nature, data-dependent, and difficult to generalize across tectonic settings. More importantly, they tend to focus solely on marginal or joint parameter correlations, which implicitly treat mainshocks and aftershocks as independent stochastic processes, thereby overlooking their inherent spectral interaction. To address these limitations, this study proposes an explicit and parameterized modeling framework based on the evolutionary power spectral density (EPSD) of random ground motions. Using the magnitude difference between a mainshock and an aftershock as the control variable, we derive attenuation relationships for the amplitude, frequency content, and duration. A coherence function model is further developed from real seismic records, treating the mainshock–aftershock pair as a vector-valued stochastic process and thus enabling a more accurate representation of their spectral dependence. Coherence analysis shows that the function remains relatively stable between 0.3 and 0.6 across the 0–30 Rad/s frequency range. Validation results indicate that the simulated response spectra align closely with recorded spectra, achieving R2 values exceeding 0.90 and 0.91. To demonstrate the model’s applicability, a case study is conducted on a representative frame structure to evaluate seismic vulnerability and economic loss. As the mainshock PGA increases from 0.2 g to 1.2 g, the structure progresses from slight damage to complete collapse, with loss rates saturating near 1.0 g. These findings underscore the engineering importance of incorporating mainshock–aftershock spectral interaction in seismic damage and risk modeling, offering a transparent and transferable tool for future seismic resilience assessments. Full article
(This article belongs to the Special Issue Structural Vibration Analysis and Control in Civil Engineering)

59 pages, 11250 KiB  
Article
Automated Analysis of Vertebral Body Surface Roughness for Adult Age Estimation: Ellipse Fitting and Machine-Learning Approach
by Erhan Kartal and Yasin Etli
Diagnostics 2025, 15(14), 1794; https://doi.org/10.3390/diagnostics15141794 - 16 Jul 2025
Viewed by 345
Abstract
Background/Objectives: Vertebral degenerative features are promising but often subjectively scored indicators for adult age estimation. We evaluated an objective surface roughness metric, the “average distance to the fitted ellipse” score (DS), calculated automatically for every vertebra from C7 to S1 on routine CT images. Methods: CT scans of 176 adults (94 males, 82 females; 21–94 years) were retrospectively analyzed. For each vertebra, the mean orthogonal deviation of the anterior superior endplate from an ideal ellipse was extracted. Sex-specific multiple linear regression served as a baseline; support vector regression (SVR), random forest (RF), k-nearest neighbors (k-NN), and Gaussian naïve-Bayes pseudo-regressor (GNB-R) were tuned with 10-fold cross-validation and evaluated on a 20% hold-out set. Performance was quantified with the standard error of the estimate (SEE). Results: DS values correlated moderately to strongly with age (peak r = 0.60 at L3–L5). Linear regression explained 40% (males) and 47% (females) of age variance (SEE ≈ 11–12 years). Non-parametric learners improved precision: RF achieved an SEE of 8.49 years in males (R2 = 0.47), whereas k-NN attained 10.8 years (R2 = 0.45) in women. Conclusions: Automated analysis of vertebral cortical roughness provides a transparent, observer-independent means of estimating adult age with accuracy approaching that of more complex deep learning pipelines. Streamlining image preparation and validating the approach across diverse populations are the next steps toward forensic adoption. Full article
(This article belongs to the Special Issue New Advances in Forensic Radiology and Imaging)
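
The "average distance to the fitted ellipse" score lends itself to a small sketch: fit an ellipse to endplate contour points and average the orthogonal residuals. The contour below is a noisy synthetic ellipse rather than CT data, and scikit-image's EllipseModel is one convenient fitter, not necessarily the authors' implementation.

```python
# Hedged sketch of the roughness score: mean orthogonal deviation of contour
# points from a best-fit ellipse, on a synthetic "endplate" contour.
import numpy as np
from skimage.measure import EllipseModel

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 200)
a, b = 18.0, 12.0                                          # hypothetical endplate semi-axes (mm)
contour = np.column_stack([a * np.cos(t), b * np.sin(t)])
contour += rng.normal(0.0, 0.5, contour.shape)             # surface "roughness" perturbation

ellipse = EllipseModel()
if ellipse.estimate(contour):
    ds = np.abs(ellipse.residuals(contour)).mean()         # mean distance to the fitted ellipse
    print(f"DS: {ds:.2f} mm")
```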

20 pages, 1618 KiB  
Article
The Influence of the Water–Cement Ratio on Concrete Resistivity: A Temperature and Saturation Dependent Analysis Using an Experimental and Predictive Approach
by Teuku Ferdiansyah, Romaynoor Ismy, Shaban Shahzad, Waqas Rafiq and Kashif Nadeem
CivilEng 2025, 6(3), 38; https://doi.org/10.3390/civileng6030038 - 15 Jul 2025
Viewed by 388
Abstract
Concrete resistivity is a critical parameter for assessing durability and monitoring the structural health of reinforced concrete. This study systematically evaluates the effects of the water-to-cement (w/c) ratio, saturation ratio (SR), and temperature on concrete resistivity using three different predictive models: linear regression, cubic Support Vector Machine (SVM), and Gaussian Process Regression (GPR). Each model was independently trained and tested to assess its ability to capture the nonlinear relationships between these key parameters. Experimental results show that resistivity decreases significantly under increasing load due to geometrical effects. For a w/c ratio of 0.4, resistivity decreases by −12.48% at 100% SR and by −6.68% at 60% SR under 20% loading. Higher w/c ratios (0.5 and 0.6) exhibit more pronounced resistivity reductions due to increased porosity and ion mobility, with a maximum decrease of −13.68% for w/c = 0.6. Among the developed predictive models, the Matern 5/2 Gaussian process regression (GPR) model demonstrated the highest accuracy, achieving an RMSE of 5.21, R2 of 0.99, MSE of 27.19, and MAE of 3.40, significantly outperforming the other approaches. Additionally, a permutation importance analysis revealed that the saturation ratio (SR) is the most critical variable influencing resistivity, followed by the water–cement ratio, while temperature has the least impact. These findings provide valuable insights into the durability assessment and corrosion prevention of reinforced concrete, offering practical implications for the optimization of material design and structural health monitoring in civil engineering. Full article
(This article belongs to the Section Construction and Material Engineering)
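
The Matern 5/2 Gaussian process regression mentioned above can be reproduced in outline with scikit-learn. In the sketch below, the inputs (w/c ratio, saturation ratio, temperature) and the toy resistivity trend are placeholders for the experimental data.

```python
# Sketch of a Matern 5/2 GPR on the three factors named in the abstract.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.choice([0.4, 0.5, 0.6], n),        # water-to-cement ratio
    rng.uniform(0.6, 1.0, n),              # saturation ratio
    rng.uniform(5.0, 40.0, n),             # temperature (degrees C)
])
y = 200.0 / (X[:, 0] * X[:, 1]) - 1.5 * X[:, 2] + rng.normal(0.0, 5.0, n)   # toy resistivity trend

kernel = 1.0 * Matern(length_scale=[0.1, 0.1, 10.0], nu=2.5) + WhiteKernel()
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_tr, y_tr)
print("hold-out R2:", round(gpr.score(X_te, y_te), 3))
```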

24 pages, 24510 KiB  
Article
Application of Graph-Theoretic Methods Using ERP Components and Wavelet Coherence on Emotional and Cognitive EEG Data
by Sencer Melih Deniz, Ahmet Ademoglu, Adil Deniz Duru and Tamer Demiralp
Brain Sci. 2025, 15(7), 714; https://doi.org/10.3390/brainsci15070714 - 2 Jul 2025
Viewed by 676
Abstract
Background/Objectives: Emotion and cognition, two essential components of human mental processes, have traditionally been studied independently. The exploration of emotion and cognition is fundamental for gaining an understanding of human mental functioning. Despite the availability of various methods to measure and evaluate emotional states and cognitive processes, physiological measurements are considered to be one of the most reliable methods due to their objective approach. In particular, electroencephalography (EEG) provides unique insight into emotional and cognitive activity through the analysis of event-related potentials (ERPs). In this study, we discriminated pleasant/unpleasant emotional moods and low/high cognitive states using graph-theoretic features extracted from spatio-temporal components. Methods: Emotional data were collected at the Physiology Department of Istanbul Medical Faculty at Istanbul University, whereas cognitive data were obtained from the DepositOnce repository of Technische Universität Berlin. Wavelet coherence values for the N100, N200, and P300 single-trial ERP components in the delta, theta, alpha, and beta frequency bands were investigated individually. Then, graph-theoretic analyses were performed using wavelet coherence-based connectivity maps. Global and local graph metrics such as energy efficiency, strength, transitivity, characteristic path length, and clustering coefficient were used as features for classification using support vector machines (SVMs), k-nearest neighbor (K-NN), and linear discriminant analysis (LDA). Results: The results show that both pleasant/unpleasant emotional moods and low/high cognitive states can be discriminated, with average accuracies of up to 92% and 89%, respectively. Conclusions: Graph-theoretic metrics based on wavelet coherence of ERP components in the delta band with the SVM algorithm allow for the discrimination of emotional and cognitive states with high accuracy. Full article
(This article belongs to the Section Cognitive, Social and Affective Neuroscience)
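
A minimal sketch of this pipeline: build a weighted graph from a coherence-style connectivity matrix, compute a few global graph metrics, and classify them with an SVM. The simulated correlation matrices below stand in for single-trial wavelet coherence maps, and only a subset of the listed metrics is computed.

```python
# Hedged sketch: connectivity matrix -> weighted graph -> global metrics -> SVM.
import numpy as np
import networkx as nx
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def graph_features(coh):
    """Global graph metrics from a symmetric channel-by-channel coherence matrix."""
    coh = coh.copy()
    np.fill_diagonal(coh, 0.0)                  # drop self-connections
    G = nx.from_numpy_array(coh)
    strength = np.mean([d for _, d in G.degree(weight="weight")])
    return [strength, nx.transitivity(G), nx.average_clustering(G, weight="weight")]

n_trials, n_channels = 120, 32
X = np.array([graph_features(np.abs(np.corrcoef(rng.normal(size=(n_channels, 256)))))
              for _ in range(n_trials)])         # stand-ins for single-trial coherence maps
y = rng.integers(0, 2, n_trials)                 # e.g., pleasant vs. unpleasant (toy labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("5-fold CV accuracy:", round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```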

14 pages, 737 KiB  
Article
An Octant-Based Multi-Objective Optimization Approach for Lightning Warning in High-Risk Industrial Areas
by Marcos Antonio Alves, Bruno Alberto Soares Oliveira, Douglas Batista da Silva Ferreira, Ana Paula Paes dos Santos, Osmar Pinto, Fernando Pimentel Silvestrow, Daniel Calvo and Eugenio Lopes Daher
Atmosphere 2025, 16(7), 798; https://doi.org/10.3390/atmos16070798 - 30 Jun 2025
Viewed by 295
Abstract
Lightning strikes are a major hazard in tropical regions, especially in northern Brazil, where open-area industries such as mining are highly exposed. This study proposes an octant-based multi-objective optimization approach for spatial lightning alert systems, focusing on minimizing both false alarm rate (FAR) and failure-to-warn (FTW). The method uses NSGA-III to optimize a configuration vector consisting of directional radii and alert thresholds, based solely on historical lightning location data. Experiments were conducted using four years of cloud-to-ground lightning data from a mining area in Pará, Brazil. Fifteen independent runs were executed, each with 96 individuals and up to 150 generations. The results showed a clear trade-off between FAR and FTW, with optimal solutions achieving up to 16% reduction in FAR and 50% reduction in FTW when compared to a quadrant-based baseline. The use of the hypervolume metric confirmed consistent convergence across runs. Sensitivity analysis revealed spatial patterns in optimal configurations, supporting the use of directional tuning. The proposed approach provides a flexible and interpretable model for risk-based alert strategies, compliant with safety regulations such as NBR 5419/2015 and NR-22. It offers a viable solution for automated alert generation in high-risk environments, especially where detailed meteorological data is unavailable. Full article
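
The optimization setup is specific enough for a small pymoo sketch: a 16-element configuration vector (eight directional radii plus eight alert thresholds) scored against FAR and FTW with NSGA-III, 96 individuals, and 150 generations. The evaluate_alerts() objectives below are toy proxies I introduce for illustration; the real objectives replay historical cloud-to-ground strike data that is not part of this listing.

```python
# Minimal pymoo sketch of the octant-based multi-objective search, under stated assumptions.
import numpy as np
from pymoo.algorithms.moo.nsga3 import NSGA3
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize
from pymoo.util.ref_dirs import get_reference_directions

def evaluate_alerts(radii, thresholds):
    # Toy proxies only: wider radii and lower thresholds mean more alerts, hence a
    # higher false-alarm rate (FAR) but fewer failures to warn (FTW).
    far = float(np.mean((radii / 30.0) * (1.0 - thresholds)))
    ftw = float(np.mean((1.0 - radii / 30.0) * thresholds))
    return [far, ftw]

class OctantAlertProblem(ElementwiseProblem):
    def __init__(self):
        super().__init__(n_var=16, n_obj=2,
                         xl=np.r_[np.full(8, 1.0), np.zeros(8)],     # radii (km), thresholds
                         xu=np.r_[np.full(8, 30.0), np.ones(8)])

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = evaluate_alerts(x[:8], x[8:])

ref_dirs = get_reference_directions("das-dennis", 2, n_partitions=12)
res = minimize(OctantAlertProblem(), NSGA3(ref_dirs=ref_dirs, pop_size=96),
               ("n_gen", 150), seed=1, verbose=False)
print("sample Pareto points (FAR, FTW):")
print(res.F[:5])
```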

12 pages, 1086 KiB  
Article
Research on High-Precision Measurement Technology of the Extinction Ratio Based on the Transparent Element Mueller Matrix
by Ruiqi Xu, Mingpeng Hu, Xuedong Cao and Jiahui Ren
Micromachines 2025, 16(7), 781; https://doi.org/10.3390/mi16070781 - 30 Jun 2025
Viewed by 302
Abstract
With the widespread application of optical technology in numerous fields, the polarization performance of transmissive optical components has become increasingly crucial. The extinction ratio is an important indicator for evaluating their polarization characteristics, so its precise measurement holds great significance. This study proposes a method that determines the extinction ratio of a transparent component from its measured Mueller matrix, with the aim of identifying the position at which the extinction ratio of the transmissive component is worst. The extinction ratio of the sample is obtained from the phase retardation derived from the Stokes vector of the incident light and the Mueller matrix of the optical component, and a theoretical analysis and simulation of the method are carried out. The simulation results verify the feasibility of the theoretical derivation. To further verify the accuracy of the method, experimental validation is conducted: a standard transparent sample with a phase retardation of 13 nm is measured, data from independent experiments on the sample under different powers are analyzed, and its extinction ratio is obtained. With this method, the relative error is less than 2%, indicating good accuracy. Full article
(This article belongs to the Special Issue Micro/Nano Optical Devices and Sensing Technology)

21 pages, 1578 KiB  
Article
ISG15 as a Potent Immune Adjuvant in MVA-Based Vaccines Against Zika Virus and SARS-CoV-2
by Juan García-Arriaza, Michela Falqui, Patricia Pérez, Rocío Coloma, Beatriz Perdiguero, Enrique Álvarez, Laura Marcos-Villar, David Astorgano, Irene Campaña-Gómez, Carlos Óscar S. Sorzano, Mariano Esteban, Carmen Elena Gómez and Susana Guerra
Vaccines 2025, 13(7), 696; https://doi.org/10.3390/vaccines13070696 - 27 Jun 2025
Viewed by 688
Abstract
Background: Vaccines represent one of the most affordable and efficient tools for controlling infectious diseases; however, the development of efficacious vaccines against complex pathogens remains a major challenge. Adjuvants play a relevant role in enhancing vaccine-induced immune responses. One such molecule is interferon-stimulated gene 15 (ISG15), a key modulator of antiviral immunity that acts both through ISGylation-dependent mechanisms and as a cytokine-like molecule. Methods: In this study, we assessed the immunostimulatory potential of ISG15 as an adjuvant in Modified Vaccinia virus Ankara (MVA)-based vaccine candidates targeting Zika virus (ZIKV) and Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2). Early innate responses and immune cell infiltration were analyzed in immunized mice by flow cytometry and cytokine profiling. To elucidate the underlying mechanism of action of ISG15, in vitro co-infection studies were performed in macrophages. Finally, we evaluated the magnitude and functional quality of the elicited antigen-specific cellular immune responses in vivo. Results: Analysis of early innate responses revealed both platform- and variant-specific effects. ISG15AA preferentially promoted natural killer (NK) cell recruitment at the injection site, whereas ISG15GG enhanced myeloid cell infiltration in draining lymph nodes (DLNs), particularly when delivered via MVA. Moreover, in vitro co-infection of macrophages with MVA-based vaccine vectors and the ISG15AA mutant led to a marked increase in proinflammatory cytokine production, highlighting a dominant role for the extracellular, ISGylation-independent functions of ISG15 in shaping vaccine-induced immunity. Notably, co-infection of ISG15 with MVA-ZIKV and MVA-SARS-CoV-2 vaccine candidates enhanced the magnitude of antigen-specific immune responses in both vaccine models. Conclusions: ISG15, particularly in its ISGylation-deficient form, acts as a promising immunomodulatory adjuvant for viral vaccines, enhancing both innate and adaptive immune responses. Consistent with previous findings in the context of Human Immunodeficiency virus type 1 (HIV-1) vaccines, this study further supports the potential of ISG15 as an effective adjuvant for vaccines targeting viral infections such as ZIKV and SARS-CoV-2. Full article
(This article belongs to the Special Issue Protective Immunity and Adjuvant Vaccines)

34 pages, 1710 KiB  
Article
Logistics Sprawl and Urban Congestion Dynamics Toward Sustainability: A Logistic Regression and Random-Forest-Based Model
by Manal El Yadari, Fouad Jawab, Imane Moufad and Jabir Arif
Sustainability 2025, 17(13), 5929; https://doi.org/10.3390/su17135929 - 27 Jun 2025
Viewed by 520
Abstract
Increasing road congestion is the main constraint that may influence the economic development of cities and urban freight transport efficiency because it generates additional costs related to delay, influences social life, increases environmental emissions, and decreases service quality. This may result from several factors, including an increase in logistics activities in the urban core. Therefore, this paper aims to define the relationship between the logistics sprawl phenomenon and congestion level. In this sense, we explored the literature to summarize the phenomenon of logistics sprawl in different cities and defined the dependent and independent variables. Congestion level was defined as the dependent variable, while the increasing distance resulting from logistics sprawl, along with city and operational flow characteristics, was treated as independent variables. We compared the performance of several models, including decision tree, support vector machine, gradient boosting, k-nearest neighbor, logistic regression and random forest. Among all the models tested, we found that the random forest algorithm delivered the best performance in terms of prediction. We combined both logistic regression—for its interpretability—and random forest—for its predictive strength—to define, explain, and interpret the relationship between the studied variables. Subsequently, we collected data from the literature and various databases, including transit city sources. The resulting dataset, composed of secondary and open-source data, was then enhanced through standard augmentation techniques—SMOTE, mixup, Gaussian noise, and linear interpolation—to improve class balance and data quality and ensure the robustness of the analysis. Then, we developed a Python code and executed it in Colab. As a result, we deduced an equation that describes the relationship between the congestion level and the defined independent variables. Full article
(This article belongs to the Special Issue Sustainable Operations and Green Supply Chain)
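
The two-model idea (logistic regression for interpretation, random forest for prediction, SMOTE for class balance) is easy to sketch with scikit-learn and imbalanced-learn. The synthetic features below stand in for the sprawl-distance, city, and flow variables described in the abstract.

```python
# Sketch under stated assumptions: SMOTE balances the congestion classes, logistic
# regression supplies interpretable coefficients, random forest supplies predictions.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=400, n_features=6, n_informative=4,
                           weights=[0.75, 0.25], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)       # balance training classes only

scaler = StandardScaler().fit(X_bal)
logit = LogisticRegression().fit(scaler.transform(X_bal), y_bal)    # interpretation
forest = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)   # prediction

print("logistic regression coefficients:", logit.coef_.round(2))
print("random forest hold-out accuracy:", round(forest.score(X_te, y_te), 3))
```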
