Search Results (537)

Search Parameters:
Keywords = multiple source domains

24 pages, 10,663 KB
Article
Feature Decomposition-Based Framework for Source-Free Universal Domain Adaptation in Mechanical Equipment Fault Diagnosis
by Peiyi Zhou, Weige Liang, Shiyan Sun and Qizheng Zhou
Mathematics 2025, 13(20), 3338; https://doi.org/10.3390/math13203338 - 20 Oct 2025
Abstract
To address the high complexity of source domain data, the inaccessibility of target domain data, and unknown fault patterns in real-world industrial scenarios, this paper proposes a Feature Decomposition-based Source-Free Universal Domain Adaptation (FD-SFUniDA) framework for mechanical equipment fault diagnosis. First, the CBAM attention module is incorporated to enhance the ResNet-50 convolutional network for extracting feature information from source domain data. During the target domain adaptation phase, singular value decomposition is applied to the weights of the pre-trained model’s classification layer, orthogonally decoupling the feature space into a source-known subspace and a target-private subspace. Then, based on the magnitude of feature projections, a dynamic decision boundary is constructed and combined with an entropy threshold mechanism to accurately distinguish between known and unknown class samples. Furthermore, intra-class feature consistency is strengthened through neighborhood-expanded contrastive learning, and semantic weight calibration is employed to reconstruct the feature space, thereby suppressing the negative transfer effect. Finally, extensive experiments under multiple operating conditions on rolling bearing and reciprocating mechanism datasets demonstrate that the proposed method excels in addressing source-free fault diagnosis problems for mechanical equipment and shows promising potential for practical engineering applications in fault classification tasks. Full article
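A minimal PyTorch sketch of the subspace split described in this abstract: SVD of a linear classifier head's weights yields an orthogonal basis whose first C directions span the source-known subspace and whose remainder spans the target-private subspace; projection magnitudes plus an entropy threshold then flag known versus unknown samples. The fixed threshold and the simple dominance rule are illustrative assumptions, not the paper's dynamic decision boundary.

```python
import torch
import torch.nn.functional as F

def split_known_unknown(features, classifier_weight, entropy_thresh=0.5):
    """features: (N, D) target-domain batch; classifier_weight: (C, D)."""
    C = classifier_weight.shape[0]
    # Right-singular vectors give an orthogonal basis of the feature space.
    _, _, Vh = torch.linalg.svd(classifier_weight, full_matrices=True)
    known_basis, private_basis = Vh[:C], Vh[C:]

    # Magnitude of each sample's projection onto the two subspaces.
    known_norm = (features @ known_basis.T).norm(dim=1)
    private_norm = (features @ private_basis.T).norm(dim=1)

    # Normalized prediction entropy as confidence evidence.
    probs = F.softmax(features @ classifier_weight.T, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(1) / torch.log(
        torch.tensor(float(C)))

    # Known-class if the source subspace dominates and prediction is confident.
    return (known_norm > private_norm) & (entropy < entropy_thresh)
```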

22 pages, 2027 KB  
Article
Agri-DSSA: A Dual Self-Supervised Attention Framework for Multisource Crop Health Analysis Using Hyperspectral and Image-Based Benchmarks
by Fatema A. Albalooshi
AgriEngineering 2025, 7(10), 350; https://doi.org/10.3390/agriengineering7100350 - 17 Oct 2025
Abstract
Recent advances in hyperspectral imaging (HSI) and multimodal deep learning have opened new opportunities for crop health analysis; however, most existing models remain limited by dataset scope, lack of interpretability, and weak cross-domain generalization. To overcome these limitations, this study introduces Agri-DSSA, a novel Dual Self-Supervised Attention (DSSA) framework that simultaneously models spectral and spatial dependencies through two complementary self-attention branches. The proposed architecture enables robust and interpretable feature learning across heterogeneous data sources, facilitating the estimation of spectral proxies of chlorophyll content, plant vigor, and disease stress indicators rather than direct physiological measurements. Experiments were performed on seven publicly available benchmark datasets encompassing diverse spectral and visual domains: three hyperspectral datasets (Indian Pines with 16 classes and 10,366 labeled samples; Pavia University with 9 classes and 42,776 samples; and Kennedy Space Center with 13 classes and 5211 samples), two plant disease datasets (PlantVillage with 54,000 labeled leaf images covering 38 diseases across 14 crop species, and the New Plant Diseases dataset with over 30,000 field images captured under natural conditions), and two chlorophyll content datasets (the Global Leaf Chlorophyll Content Dataset (GLCC), derived from MERIS and OLCI satellite data from 2003 to 2020, and the Leaf Chlorophyll Content Dataset for Crops, which includes paired spectrophotometric and multispectral measurements collected from multiple crop species). To ensure statistical rigor and spatial independence, a block-based spatial cross-validation scheme was employed across five independent runs with fixed random seeds. Model performance was evaluated using R2, RMSE, F1-score, AUC-ROC, and AUC-PR, each reported as mean ± standard deviation with 95% confidence intervals. Results show that Agri-DSSA consistently outperforms baseline models (PLSR, RF, 3D-CNN, and HybridSN), achieving up to R2=0.86 for chlorophyll content estimation and F1-scores above 0.95 for plant disease detection. The attention distributions highlight physiologically meaningful spectral regions (550–710 nm) associated with chlorophyll absorption, confirming the interpretability of the model’s learned representations. This study serves as a methodological foundation for UAV-based and field-deployable crop monitoring systems. By unifying hyperspectral, chlorophyll, and visual disease datasets, Agri-DSSA provides an interpretable and generalizable framework for proxy-based vegetation stress estimation. Future work will extend the model to real UAV campaigns and in-field spectrophotometric validation to achieve full agronomic reliability. Full article
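A rough sketch of the dual-branch idea, with one self-attention branch over spectral bands and one over spatial positions; all layer sizes, the mean-pooled fusion, and the scalar regression head are assumptions for illustration, not the published Agri-DSSA architecture.

```python
import torch
import torch.nn as nn

class DualSelfAttention(nn.Module):
    """Illustrative dual-branch self-attention for hyperspectral cubes."""

    def __init__(self, bands, embed_dim=64, heads=4):
        super().__init__()
        self.band_proj = nn.Linear(1, embed_dim)     # embed each band scalar
        self.pix_proj = nn.Linear(bands, embed_dim)  # embed each pixel spectrum
        self.spectral_attn = nn.MultiheadAttention(embed_dim, heads, batch_first=True)
        self.spatial_attn = nn.MultiheadAttention(embed_dim, heads, batch_first=True)
        self.head = nn.Linear(2 * embed_dim, 1)

    def forward(self, cube):  # cube: (B, H*W, bands)
        # Spectral branch: attend across bands of the mean spectrum.
        spec = self.band_proj(cube.mean(1, keepdim=True).transpose(1, 2))
        spec, _ = self.spectral_attn(spec, spec, spec)
        # Spatial branch: attend across pixel positions.
        spat = self.pix_proj(cube)
        spat, _ = self.spatial_attn(spat, spat, spat)
        fused = torch.cat([spec.mean(1), spat.mean(1)], dim=-1)
        return self.head(fused)  # e.g., a chlorophyll-proxy score
```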

18 pages, 2307 KB  
Article
Can We Trust AI Content Detection Tools for Critical Decision-Making?
by Tadesse G. Wakjira, Ibrahim A. Tijani, M. Shahria Alam, Mustafa Mashal and Mohammad Khalad Hasan
Information 2025, 16(10), 904; https://doi.org/10.3390/info16100904 - 16 Oct 2025
Abstract
The rapid integration of artificial intelligence (AI) in content generation has encouraged the development of AI detection tools aimed at distinguishing between human- and AI-authored texts. These tools are increasingly adopted not only in academia but also in sensitive decision-making contexts, including candidate screening by hiring agencies in government and private sectors. This extensive reliance raises serious questions about their reliability, fairness, and appropriateness for high-stakes applications. This study evaluates the performance of six widely used AI content detection tools, namely Undetectable AI, Zerogpt.com, Zerogpt.net, Brandwell.ai, Gowinston.ai, and Crossplag, referred to as Tools A through F in this study. The assessment focused on the ability of the tools to identify human versus AI-generated content across multiple domains. Verified human-authored texts were gathered from reputable sources, including university websites, pre-ChatGPT publications in Nature and Science, government portals, and media outlets (e.g., BBC, US News). Complementary datasets of AI-generated texts were produced using ChatGPT-4o, encompassing coherent essays, nonsensical passages, and hybrid texts with grammatical errors, to test tool robustness. The results reveal significant performance limitations. The accuracy ranged from 14.3% (Tool B) to 71.4% (Tool D), with the precision and recall metrics showing inconsistent detection capabilities. The tools were also highly sensitive to minor textual modifications, where slight changes in phrasing could flip classifications between “AI-generated” and “human-authored.” Overall, the current AI detection tools lack the robustness and reliability needed for enforcing academic integrity or making employment-related decisions. The findings highlight an urgent need for more transparent, accurate, and context-aware frameworks before these tools can be responsibly incorporated into critical institutional or societal processes. Full article
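Benchmarks like this reduce to standard binary-classification scoring. A small sketch with scikit-learn, using hypothetical labels (1 = AI-generated, 0 = human-authored) rather than the study's data:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

def score_detector(labels, predictions):
    """Score one detection tool against ground-truth authorship labels."""
    return {
        "accuracy": accuracy_score(labels, predictions),
        "precision": precision_score(labels, predictions, zero_division=0),
        "recall": recall_score(labels, predictions, zero_division=0),
    }

# Hypothetical example: 7 texts, a weak detector that over-flags AI text.
truth = [0, 0, 0, 1, 1, 1, 1]
tool = [1, 0, 1, 1, 1, 0, 1]
print(score_detector(truth, tool))  # accuracy ~0.571 on this toy set
```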

16 pages, 2334 KB  
Article
A Comprehensive Image Quality Evaluation of Image Fusion Techniques Using X-Ray Images for Detonator Detection Tasks
by Lynda Oulhissane, Mostefa Merah, Simona Moldovanu and Luminita Moraru
Appl. Sci. 2025, 15(20), 10987; https://doi.org/10.3390/app152010987 - 13 Oct 2025
Abstract
Purpose: Luggage X-rays suffer from low contrast, material overlap, and noise; dual-energy imaging reduces ambiguity but creates colour biases that impair segmentation. This study aimed to (1) employ connotative fusion by embedding realistic detonator patches into real X-rays to simulate threats and enhance unattended detection without requiring ground-truth labels; (2) thoroughly evaluate fusion techniques in terms of balancing image quality, information content, contrast, and the preservation of meaningful features. Methods: A total of 1000 X-ray luggage images and 150 detonator images were used for fusion experiments based on deep learning, transform-based, and feature-driven methods. The proposed approach does not need ground truth supervision. Deep learning fusion techniques, including VGG, FusionNet, and AttentionFuse, enable the dynamic selection and combination of features from multiple input images. The transform-based fusion methods convert input images into different domains using mathematical transforms to enhance fine structures. The Nonsubsampled Contourlet Transform (NSCT), Curvelet Transform, and Laplacian Pyramid (LP) are employed. Feature-driven image fusion methods combine meaningful representations for easier interpretation. Singular Value Decomposition (SVD), Principal Component Analysis (PCA), Random Forest (RF), and Local Binary Pattern (LBP) are used to capture and compare texture details across source images. Entropy (EN), Standard Deviation (SD), and Average Gradient (AG) assess factors such as spatial resolution, contrast preservation, and information retention and are used to evaluate the performance of the analysed methods. Results: The results highlight the strengths and limitations of the evaluated techniques, demonstrating their effectiveness in producing sharpened fused X-ray images with clearly emphasized targets and enhanced structural details. Conclusions: The Laplacian Pyramid fusion method emerges as the most versatile choice for applications demanding a balanced trade-off. This is evidenced by its overall multi-criteria balance, supported by a composite (geometric mean) score on normalised metrics. It consistently achieves high performance across all evaluated metrics, making it reliable for detecting concealed threats under diverse imaging conditions. Full article
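The three no-reference metrics have standard definitions that fit in a few lines of NumPy; the paper's exact normalizations and windowing may differ, so treat this as a generic sketch:

```python
import numpy as np

def fusion_quality(img):
    """EN/SD/AG quality metrics for a fused grayscale image in [0, 255]."""
    img = np.asarray(img, dtype=np.float64)

    # Entropy (EN): information content of the intensity histogram.
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    en = -np.sum(p[p > 0] * np.log2(p[p > 0]))

    # Standard deviation (SD): global contrast.
    sd = img.std()

    # Average gradient (AG): mean local sharpness.
    gx, gy = np.gradient(img)
    ag = np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

    return {"EN": en, "SD": sd, "AG": ag}
```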

17 pages, 1747 KB  
Article
Weighted Transformer Classifier for User-Agent Progression Modeling, Bot Contamination Detection, and Traffic Trust Scoring
by Geza Lucz and Bertalan Forstner
Mathematics 2025, 13(19), 3153; https://doi.org/10.3390/math13193153 - 2 Oct 2025
Abstract
In this paper, we present a unique method to determine the level of bot contamination of web-based user agents. It is common practice for bots and robotic agents to masquerade as human-like to avoid content and performance limitations. This paper continues our previous work, using over 600 million web log entries collected from over 4000 domains to derive and generalize how the prominence of specific web browser versions progresses over time, assuming genuine human agency. Here, we introduce a parametric model capable of reproducing this progression in a tunable way. This simulation allows us to tag human-generated traffic in our data accurately. Along with the highest confidence self-tagged bot traffic, we train a Transformer-based classifier that can determine the bot contamination—a botness metric of user-agents without prior labels. Unlike traditional syntactic or rule-based filters, our model learns temporal patterns of raw and heuristic-derived features, capturing nuanced shifts in request volume, response ratios, content targeting, and entropy-based indicators over time. This rolling window-based pre-classification of traffic allows content providers to bin streams according to their bot infusion levels and direct them to several specifically tuned filtering pipelines, given the current load levels and available free resources. We also show that aggregated traffic data from multiple sources can enhance our model’s accuracy and can be further tailored to regional characteristics using localized metadata from standard web server logs. Our ability to adjust the heuristics to geographical or use case specifics makes our method robust and flexible. Our evaluation highlights that 65% of unclassified traffic is bot-based, underscoring the urgency of robust detection systems. We also propose practical methods for independent or third-party verification and further classification by abusiveness. Full article
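The classifier described here maps naturally onto a small Transformer encoder over rolling-window feature sequences. The sketch below assumes illustrative dimensions and feature counts; it is not the authors' model:

```python
import torch
import torch.nn as nn

class BotnessClassifier(nn.Module):
    """Minimal Transformer encoder over a rolling window of per-interval
    traffic features (request volume, response ratios, entropy indicators)."""

    def __init__(self, n_features=12, d_model=64, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        enc = nn.TransformerEncoderLayer(d_model, heads, dim_feedforward=128,
                                         batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, window):  # window: (B, T, n_features)
        h = self.encoder(self.embed(window))
        return torch.sigmoid(self.head(h.mean(dim=1)))  # botness in [0, 1]

# Training on the self-tagged extremes with confidence weights could use
# nn.BCELoss(reduction="none") multiplied by per-sample weights.
```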

38 pages, 431 KB  
Systematic Review
Electronic Systems in Competitive Motorcycles: A Systematic Review Following PRISMA Guidelines
by Andrei García Cuadra, Alberto Brunete González and Francisco Santos Olalla
Electronics 2025, 14(19), 3926; https://doi.org/10.3390/electronics14193926 - 2 Oct 2025
Abstract
Objectives: To systematically review and analyze electronic systems in competitive motorcycles (2020–2025), examining their technical specifications, performance impacts, and technological evolution across MotoGP, World Superbike (WSBK), MotoE, British Superbike (BSB), and Spanish Championship (ESBK) categories. Eligibility criteria: Included studies reporting technical specifications or performance data of electronic systems in professional motorcycle racing, published between January 2020 and December 2025 in English, Spanish, Italian, or Japanese. Excluded: opinion pieces, amateur racing, and studies without quantitative data. Information sources: IEEE Xplore, SAE Technical Papers, Web of Science, Scopus, and specialized motorsport databases were searched through 15 December 2025. Risk of bias: Modified Cochrane Risk of Bias tool for experimental studies and Newcastle-Ottawa Scale for observational studies. Synthesis of results: Random-effects meta-analysis using the DerSimonian-Laird method for homogeneous outcomes; narrative synthesis for heterogeneous data. Included studies: 87 studies met inclusion criteria (52 experimental, 38 simulation, 23 technical descriptions, 14 comparative analyses). Electronic systems were categorized into six domains: Engine Control Units (ECU, 28 studies, 22%), Vehicle Dynamics (23 studies, 18%), Traction Control (19 studies, 15%), Data Acquisition (21 studies, 17%), Braking Systems (18 studies, 14%), and Emerging Technologies (18 studies, 14%). Note that studies could address multiple domains. Limitations of evidence: Proprietary restrictions limited access to 31% of technical details; 43% lacked cross-category comparisons. Interpretation: Electronic systems are primary performance differentiators, with computational power following Moore’s Law. Future developments point toward distributed architectures and 5G telemetry. Full article
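For reference, the DerSimonian-Laird random-effects estimator named in the synthesis reduces to a short computation; a generic sketch (not the review's code):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes with the DerSimonian-Laird estimator."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                  # fixed-effect weights
    mu_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fixed) ** 2)          # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)      # between-study variance
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    mu = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return mu, se, tau2

# mu +/- 1.96 * se gives the 95% CI for the pooled effect.
```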

15 pages, 1392 KB  
Article
Optimal Source Selection for Distributed Bearing Fault Classification Using Wavelet Transform and Machine Learning Algorithms
by Ramin Rajabioun and Özkan Atan
Appl. Sci. 2025, 15(19), 10631; https://doi.org/10.3390/app151910631 - 1 Oct 2025
Abstract
Early and accurate detection of distributed bearing faults is essential to prevent equipment failures and reduce downtime in industrial environments. This study explores the optimal selection of input signal sources for high-accuracy distributed fault classification, employing wavelet transform and machine learning algorithms. The primary contribution of this work is to demonstrate that robust distributed bearing fault diagnosis can be achieved through optimal sensor fusion and wavelet-based feature engineering, without the need for deep learning or high-dimensional inputs. This approach provides interpretable, computationally efficient, and generalizable fault classification, setting it apart from most existing studies that rely on larger models or more extensive data. All experiments were conducted in a controlled laboratory environment across multiple loads and speeds. A comprehensive dataset, including three-axis vibration, stray magnetic flux, and two-phase current signals, was used to diagnose six distinct bearing fault conditions. The wavelet transform is applied to extract frequency-domain features, capturing intricate fault signatures. To identify the most effective input signal combinations, we systematically evaluated Random Forest, XGBoost, and Support Vector Machine (SVM) models. The analysis reveals that specific signal pairs significantly enhance classification accuracy. Notably, combining vibration signals with stray magnetic flux consistently achieved the highest performance across models, with Random Forest reaching perfect test accuracy (100%) and SVM showing robust results. These findings underscore the importance of optimal source selection and wavelet-transformed features for improving machine learning model performance in bearing fault classification tasks. While the results are promising, validation in real-world industrial settings is needed to fully assess the method’s practical reliability and impact on predictive maintenance systems. Full article
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
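The pipeline of wavelet-based features feeding a classical model can be sketched briefly; the wavelet family, decomposition depth, and energy features below are illustrative choices, not necessarily those of the paper:

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier

def wavelet_energy_features(signals, wavelet="db4", level=4):
    """Relative sub-band energies from a multilevel DWT of each channel.
    signals: (n_samples, n_channels, n_points), e.g., a vibration + stray-flux
    pairing like the abstract's best-performing source combination."""
    feats = []
    for sample in signals:
        row = []
        for channel in sample:
            coeffs = pywt.wavedec(channel, wavelet, level=level)
            energies = np.array([np.sum(c ** 2) for c in coeffs])
            row.extend(energies / energies.sum())  # scale-invariant features
        feats.append(row)
    return np.array(feats)

# X_train: raw windows for the chosen sensor pair; y_train: six fault classes.
# clf = RandomForestClassifier(n_estimators=200).fit(
#     wavelet_energy_features(X_train), y_train)
```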

14 pages, 254 KB  
Review
Hypoxia and Cognitive Functions in Patients Suffering from Cardiac Diseases: A Narrative Review
by Dominika Grzybowska-Ganszczyk, Zbigniew Nowak, Józef Alfons Opara and Agata Nowak-Lis
J. Clin. Med. 2025, 14(19), 6750; https://doi.org/10.3390/jcm14196750 - 24 Sep 2025
Abstract
Background: Cardiovascular diseases (CVD) are major contributors to global morbidity and mortality, and their association with cognitive impairment has gained increasing attention. Recent studies indicate that the prevalence of post-myocardial infarction (MI) cognitive impairment ranges from 22% to 37%, with attention being one of the most frequently affected domains. Moreover, novel approaches, such as normobaric hypoxic training in cardiac rehabilitation, show potential in improving both cardiovascular and cognitive outcomes. Aim: This narrative review aims to synthesize current evidence on the role of hypoxia in the development of cognitive dysfunction among patients with cardiac diseases, emphasizing shared mechanisms along the heart–brain axis. Methods: We performed a narrative search of PubMed, Scopus, and Web of Science databases using the keywords “hypoxia”, “cognitive impairment”, “myocardial infarction”, “heart failure”, and “CABG surgery”. We included original studies, reviews, and meta-analyses published from 2000 to the present in English. Priority was given to peer-reviewed human studies; animal models were included when providing mechanistic insights. Exclusion criteria included case reports, conference abstracts, and non-peer-reviewed sources. Narrative reviews, while useful for providing a broad synthesis, carry an inherent risk of selective bias. To minimize this limitation, independent screening of sources and discussions among multiple authors were conducted to ensure balanced inclusion of the most relevant and high-quality evidence. Results: Hypoxia contributes to cognitive decline through multiple pathophysiological pathways, including blood–brain barrier disruption, white matter degeneration, oxidative stress, and chronic neuroinflammation. The concept of “cardiogenic dementia”, although not yet formally classified, highlights cardiac-related contributions to cognitive impairment beyond classical vascular dementia. Clinical assessment tools such as the Stroop test, Trail Making Test (TMT), and Montreal Cognitive Assessment (MoCA) are useful in detecting subtle executive dysfunctions. Both pharmacological treatments (ACE inhibitors, ARBs) and innovative rehabilitation methods (including normobaric hypoxic training) may improve outcomes. Conclusions: Cognitive impairment in cardiac patients is common, clinically relevant, and often underdiagnosed. Routine cognitive screening after cardiac events and integration of cognitive rehabilitation into standard cardiology care are recommended. Future studies should incorporate cognitive endpoints into cardiovascular trials. Full article
(This article belongs to the Section Cardiology)
26 pages, 12,387 KB
Article
Mapping for Larimichthys crocea Aquaculture Information with Multi-Source Remote Sensing Data Based on Segment Anything Model
by Xirui Xu, Ke Nie, Sanling Yuan, Wei Fan, Yanan Lu and Fei Wang
Fishes 2025, 10(10), 477; https://doi.org/10.3390/fishes10100477 - 24 Sep 2025
Abstract
Monitoring Larimichthys crocea aquaculture in a low-cost, efficient and flexible manner with remote sensing data is crucial for the optimal management and sustainable development of the aquaculture industry and intelligent fisheries. An innovative automated framework, based on the Segment Anything Model (SAM) and multi-source high-resolution remote sensing image data, is proposed for high-precision aquaculture facility extraction and overcomes the problems of low efficiency and limited accuracy in traditional manual inspection methods. The research method includes systematic optimization of SAM segmentation parameters for different data sources and strict evaluation of model performance at multiple spatial resolutions. Additionally, the impact of different spectral band combinations on the segmentation effect is systematically analyzed. Experimental results demonstrate a significant correlation between resolution and accuracy, with UAV-derived imagery achieving exceptional segmentation accuracy (97.71%), followed by Jilin-1 (91.64%) and Sentinel-2 (72.93%) data. Notably, the NIR-Blue-Red band combination exhibited superior performance in delineating aquaculture infrastructure, suggesting its optimal utility for such applications. A robust and scalable solution for automatically extracting facilities is established, which offers significant insights for extending SAM’s capabilities to broader remote sensing applications within marine resource assessment domains. Full article
(This article belongs to the Section Fishery Facilities, Equipment, and Information Technology)
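Automatic mask generation with the public segment-anything package is the natural starting point for such a framework; the checkpoint path, thresholds, and area filter below are assumptions for illustration, not the authors' optimized parameters:

```python
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Load a pretrained SAM backbone (checkpoint filename assumed).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(
    sam,
    points_per_side=32,           # sampling density of prompt points
    pred_iou_thresh=0.88,         # drop low-quality masks
    stability_score_thresh=0.95,  # drop unstable masks
)

image = cv2.cvtColor(cv2.imread("aquaculture_tile.png"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts with 'segmentation',
                                        # 'area', 'bbox', 'predicted_iou', ...
cages = [m for m in masks if m["area"] > 500]  # crude size filter (assumed)
```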

51 pages, 2704 KB  
Review
Use and Potential of AI in Assisting Surveyors in Building Retrofit and Demolition—A Scoping Review
by Yuan Yin, Haoyu Zuo, Tom Jennings, Sandeep Jain, Ben Cartwright, Julian Buhagiar, Paul Williams, Katherine Adams, Kamyar Hazeri and Peter Childs
Buildings 2025, 15(19), 3448; https://doi.org/10.3390/buildings15193448 - 24 Sep 2025
Abstract
Background: Pre-retrofit auditing and pre-demolition auditing (PRA/PDA) are important in material reuse, waste reduction, and regulatory compliance in the building sector. An emphasis on sustainable construction practices has led to a higher requirement for PRA/PDA. However, traditional auditing processes demand substantial time and manual effort and are more prone to human error. As a developing technology, artificial intelligence (AI) can potentially assist PRA/PDA processes. Objectives: This scoping review aims to assess the potential of AI in assisting each sub-stage of PRA/PDA processes. Eligibility Criteria and Sources of Evidence: Included sources were English-language articles, books, and conference papers published before 31 March 2025, available electronically, and focused on AI applications in PRA/PDA or related sub-processes involving structured elements of buildings. Databases searched included ScienceDirect, IEEE Xplore, Google Scholar, Scopus, Elsevier, and Springer. Results: The review indicates that although AI has the potential to be applied across multiple PRA/PDA sub-stages, actual application is still limited. AI integration has been most prevalent in floor plan recognition and material detection, where deep learning and computer vision models achieved notable accuracies. However, other sub-stages—such as operation and maintenance document analysis, object detection, volume estimation, and automated report generation—remain underexplored, with no PRA/PDA-specific AI models identified. These gaps highlight the uneven distribution of AI adoption, with performance varying greatly depending on data quality, available domain-specific datasets, and the complexity of integration into existing workflows. Conclusions: Out of multiple PRA/PDA sub-stages, AI integration was focused on floor plan recognition and material detection, with deep learning and computer vision models achieving over 90% accuracy. Other stages, such as operation and maintenance document analysis, object detection, volume estimation, and report writing, had little to no dedicated AI research. Therefore, although AI demonstrates strong potential in PRA/PDA, particularly for floor plan and material analysis, broader adoption is limited. Future research should target multimodal AI development, real-time deployment, and standardized benchmarking to improve automation and accuracy across all PRA/PDA stages. Full article
(This article belongs to the Section Building Materials, and Repair & Renovation)

21 pages, 349 KB  
Article
Accidents in the Production, Transport, and Handling of Explosives: TOL Method Hazard Analysis with a Mining Case Study
by Dagmara Nowak-Senderowska and Józef Pyra
Appl. Sci. 2025, 15(18), 10150; https://doi.org/10.3390/app151810150 - 17 Sep 2025
Abstract
Explosives (EXP) are an essential component of technological processes across numerous civil industry sectors, particularly in surface mining. Despite their technological benefits, their use is associated with a high risk of serious accidents. This study aimed to present available data sources on explosive-related incidents and to highlight the limitations in their accessibility, quality, and comparability. The analysis included the SAFEX, eMARS, and PAR databases, as well as national reports from the Polish State Mining Authority, focusing on discrepancies in the classification and description of events. The review was complemented by an analysis of an accident in a Polish open-pit mine, in which an excavator operator was injured due to the uncontrolled detonation of an unexploded charge. The TOL method was employed to analyze the root causes, allowing for the identification of technical, organizational, and human contributing factors, with specific adaptations for the explosives domain such as safety barrier verification, post-blast supervision, and quality control of detonators. The results indicate that most incidents arise from the interaction of multiple causes rather than a single error. The study underscores the need for more effective verification procedures, improved oversight of post-blast operations, and enhanced protective equipment. The article highlights the importance of a systems-based approach to safety management, encompassing both consistent incident data analysis and practical preventive actions throughout the entire life cycle of explosives. Full article
(This article belongs to the Section Civil Engineering)

10 pages, 483 KB  
Article
Retinal and Choroidal Morphological Features Influencing Contrast Sensitivity in Retinitis Pigmentosa
by Francisco de Asís Bartol-Puyal, Beatriz Cordón Ciordia, Elisa Viladés Palomar, Carlos Santana Plata, Silvia Méndez-Martínez and Luis Pablo Júlvez
Medicina 2025, 61(9), 1681; https://doi.org/10.3390/medicina61091681 - 17 Sep 2025
Abstract
Background and Objectives: To find morphological features on optical coherence tomography (OCT) and OCT-angiography (OCTA) influencing contrast sensitivity (CS) in patients with retinitis pigmentosa (RP). Materials and Methods: Cross-sectional study enrolling 18 eyes of 18 patients with RP. They were examined with CSV1000-E (VectorVision) under mesopic conditions (logarithmic scale), spectral-domain OCT (SD-OCT, Spectralis), swept-source OCT (SS-OCT, Triton), and OCTA (Triton). Automatic thickness measurements of every retinal layer were obtained in grids of 8 × 8 and 10 × 10 cubes. Foveal avascular zone and vascular densities (VD) were also analyzed. Statistical analysis included multiple linear regression analyses, and a correlation analysis of age, axial length, and intraocular pressure with retinal nerve fiber layer (RNFL) thickness. Results: Mean age was 47.34 ± 13.77 years. Mean CS with 3, 6, 12, and 18 cycles/degree (c/d) was 1.48 ± 0.37, 1.51 ± 0.39, 1.00 ± 0.42, and 0.44 ± 0.39, respectively. The variables most related to the 3 c/d frequency were nasal RNFL thickness (R2 = 0.54) and central outer plexiform layer (OPL) (R2 = 0.33). For the 6 c/d frequency, they were central VD in the deep plexus (R2 = 0.66) and retinal pigment epithelium (RPE) (R2 = 0.22). For the 12 c/d frequency, they were central RNFL (R2 = 0.50) and central VD in the deep plexus (R2 = 0.26). For the 18 c/d frequency, it was central RNFL (R2 = 0.70). Conclusions: Central and nasal RNFL thickness seem to be the main predictors of CS in patients with RP, as well as VD in the deep retinal plexus. Others with limited influence might be central and nasal OPL thickness, and central RPE thickness. Full article
(This article belongs to the Special Issue Advances in Diagnosis and Therapies of Ocular Diseases)

22 pages, 2890 KB  
Article
Multi-Target Adversarial Learning for Partial Fault Detection Applied to Electric Motor-Driven Systems
by Francisco Arellano Espitia, Miguel Delgado-Prieto, Joan Valls Pérez and Juan Jose Saucedo-Dorantes
Appl. Sci. 2025, 15(18), 10091; https://doi.org/10.3390/app151810091 - 15 Sep 2025
Abstract
Deep neural network-based fault diagnosis is gaining significant attention within the Industry 4.0 framework, yet practical deployment is still hindered by domain shift, partial label mismatch, and class imbalance. In this regard, this paper proposes Multi-Target Adversarial Learning for Partial Fault Diagnosis (MTAL-PFD), an extension of adversarial and discrepancy-based domain adaptation tailored to single-source, multi-target (1SmT) partial fault diagnosis in electric motor-driven systems. The framework transfers knowledge from a labeled source to multiple unlabeled target domains by combining dual 1D-CNN feature extractors with adversarial domain discriminators, an inconsistency-based regularizer to stabilize learning, and class-aware weighting to mitigate partial label shift by down-weighting outlier source classes. Thus, the proposed scheme combines a multi-objective approach with partial domain adaptation applied to the diagnosis of electric motor-driven systems. The proposed model is evaluated across 24 cross-domain tasks and varying operating conditions on two motor test benches, showing consistent improvements over representative baselines. Full article
(This article belongs to the Special Issue AI-Based Machinery Health Monitoring)
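Adversarial domain discriminators of this kind are typically trained through a gradient reversal layer; a standard PyTorch sketch follows, with the caveat that whether MTAL-PFD uses exactly this operator is an assumption:

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; negated, scaled gradient in the backward
    pass, so the feature extractor learns to fool the domain discriminator."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Per target domain t in a training step (names hypothetical):
#   dom_logits = discriminator_t(grad_reverse(features, lam))
#   loss = classification_loss + weighted sum of domain losses over targets
```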

16 pages, 3123 KB  
Article
Numerical Modeling of Tissue Irradiation in Cylindrical Coordinates Using the Fuzzy Finite Pointset Method
by Anna Korczak
Appl. Sci. 2025, 15(18), 9923; https://doi.org/10.3390/app15189923 - 10 Sep 2025
Abstract
This study focuses on the numerical analysis of heat transfer in biological tissue. The proposed model is formulated using the Pennes equation for a two-dimensional cylindrical domain. The tissue undergoes laser irradiation, where internal heat sources are determined based on the Beer–Lambert law. Moreover, key parameters—such as the perfusion rate and effective scattering coefficient—are modeled as functions dependent on tissue damage. In addition, a fuzzy heat source associated with magnetic nanoparticles is also incorporated into the model to account for magnetothermal effects. A novel aspect of this work is the introduction of uncertainty in selected model parameters by representing them as triangular fuzzy numbers. Consequently, the entire Finite Pointset Method (FPM) framework is extended to operate with fuzzy-valued quantities, which—to the best of our knowledge—has not been previously applied in two-dimensional thermal modeling of biological tissues. The numerical computations are carried out using the fuzzy-adapted FPM approach. All calculations are performed according to fuzzy arithmetic rules with the application of α-cuts. This fuzzy formulation inherently captures the variability of uncertain parameters, effectively replacing the need for a traditional sensitivity analysis. As a result, the need for multiple simulations over a wide range of input values is eliminated. The findings, discussed in the final section, demonstrate that this extended FPM formulation is a viable and effective tool for analyzing heat transfer processes under uncertainty, with an evaluation of α-cut widths and the influence of the degree of fuzziness on the results also carried out. Full article
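The α-cut machinery is easy to make concrete: each triangular fuzzy number collapses to an interval at a given α, and interval arithmetic propagates it through the computation. A minimal sketch with an illustrative (not the paper's) parameter value:

```python
import numpy as np

def alpha_cut(tfn, alpha):
    """Alpha-cut of a triangular fuzzy number (a, b, c): the interval of
    values whose membership is at least alpha."""
    a, b, c = tfn
    return np.array([a + alpha * (b - a), c - alpha * (c - b)])

def interval_mul(x, y):
    """Interval multiplication, one rule of interval/fuzzy arithmetic."""
    products = np.outer(x, y).ravel()
    return np.array([products.min(), products.max()])

# Example: a fuzzy perfusion rate times a crisp coefficient at several
# alpha levels; at alpha = 1 the interval collapses to the modal value.
perfusion = (0.0004, 0.0005, 0.0006)  # illustrative triangular number
for alpha in (0.0, 0.5, 1.0):
    cut = alpha_cut(perfusion, alpha)
    print(alpha, interval_mul(cut, np.array([1000.0, 1000.0])))
```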

20 pages, 1325 KB  
Article
Intelligent Fault Diagnosis for Cross-Domain Few-Shot Learning of Rotating Equipment Based on Mixup Data Augmentation
by Kun Yu, Yan Li, Qiran Zhan, Yongchao Zhang and Bin Xing
Machines 2025, 13(9), 807; https://doi.org/10.3390/machines13090807 - 3 Sep 2025
Abstract
Existing fault diagnosis methods assume the identical distribution of training and test data, failing to adapt to source–target domain differences in industrial scenarios and limiting generalization. They also struggle to explore inter-domain correlations with scarce labeled target samples, leading to poor convergence and generalization. To address this, our paper proposes a cross-domain few-shot intelligent fault diagnosis method based on Mixup data augmentation. Firstly, a Mixup data augmentation method is used to linearly combine source domain and target domain data in a specific proportion to generate mixed-domain data, enabling the model to learn correlations and features between data from different domains and improving its generalization ability in cross-domain few-shot learning tasks. Secondly, a feature decoupling module based on the self-attention mechanism is proposed to extract domain-independent features and domain-related features, allowing the model to further reduce the domain distribution gap and effectively generalize source domain knowledge to the target domain. Then, the model parameters are optimized through a multi-task learning mechanism consisting of sample classification tasks and domain classification tasks. Finally, applications in classification tasks on multiple sets of equipment fault datasets show that the proposed method can significantly improve the fault recognition ability of the diagnosis model under the conditions of large distribution differences in the target domain and scarce labeled samples. Full article
(This article belongs to the Section Machines Testing and Maintenance)
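The cross-domain Mixup step follows the standard recipe of convex combination with a Beta-sampled coefficient; a minimal sketch, where the Beta parameter and the pairing of batches are illustrative assumptions:

```python
import torch

def domain_mixup(x_src, y_src, x_tgt, y_tgt, alpha=0.2):
    """Mix paired source and target batches (inputs and one-hot labels)
    with a Beta-sampled coefficient, yielding intermediate-domain data."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x_src + (1.0 - lam) * x_tgt
    y_mix = lam * y_src + (1.0 - lam) * y_tgt
    return x_mix, y_mix, lam
```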
