Search Results (125)

Search Parameters:
Keywords = automated partitioning

26 pages, 8147 KB  
Article
Deep Learning Applied to Spaceborne SAR Interferometry for Detecting Sinkhole-Induced Land Subsidence Along the Dead Sea
by Gali Dekel, Ran Novitsky Nof, Ron Sarafian and Yinon Rudich
Remote Sens. 2026, 18(2), 211; https://doi.org/10.3390/rs18020211 - 8 Jan 2026
Viewed by 243
Abstract
The Dead Sea (DS) region has experienced a sharp increase in sinkhole formation in recent years, posing environmental and infrastructure risks. The Geological Survey of Israel (GSI) employs Interferometric Synthetic Aperture Radar (InSAR) to monitor sinkhole activity and manually map land subsidence along the western shore of the DS. This process is both time-consuming and prone to human error. Automating detection with Deep Learning (DL) offers a transformative opportunity to enhance monitoring precision, scalability, and real-time decision-making. DL segmentation architectures such as UNet, Attention UNet, SAM, TransUNet, and SegFormer have shown effectiveness in learning geospatial deformation patterns in InSAR and related remote sensing data. This study provides a first comprehensive evaluation of a DL segmentation model applied to InSAR data for detecting land subsidence areas that occur as part of the sinkhole-formation process along the western shores of the DS. Unlike image-based tasks, our new model learns interferometric phase patterns that capture subtle ground deformations rather than direct visual features. As the ground truth in the supervised learning process, we use subsidence areas delineated on the phase maps by the GSI team over the years as part of the operational subsidence surveillance and monitoring activities. This unique data poses challenges for annotation, learning, and interpretability, making the dataset both non-trivial and valuable for advancing research in applied remote sensing and its application in the DS. We train the model across three partition schemes, each representing a different type and level of generalization, and introduce object-level metrics to assess its detection ability. Our results show that the model effectively identifies and generalizes subsidence areas in InSAR data across different setups and temporal conditions and shows promising potential for geographical generalization in previously unseen areas. Finally, large-scale subsidence trends are inferred by reconstructing smaller-scale patches and evaluated for different confidence thresholds. Full article
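To make the three partition schemes concrete, the sketch below shows one way such train/test splits could be organised for interferogram patches; the scheme names, metadata fields, and site names are illustrative assumptions, not the authors' actual dataset layout.

```python
import random

def split_patches(patches, scheme, holdout_year=2023, holdout_site="Ein Gedi", seed=0):
    """Split interferogram patches under one of three partition schemes:
    random patches, a held-out acquisition year (temporal generalization),
    or a held-out site (geographical generalization)."""
    rng = random.Random(seed)
    if scheme == "random":
        shuffled = patches[:]
        rng.shuffle(shuffled)
        cut = int(0.8 * len(shuffled))
        return shuffled[:cut], shuffled[cut:]
    if scheme == "temporal":
        return ([p for p in patches if p["year"] < holdout_year],
                [p for p in patches if p["year"] >= holdout_year])
    if scheme == "spatial":
        return ([p for p in patches if p["site"] != holdout_site],
                [p for p in patches if p["site"] == holdout_site])
    raise ValueError(f"unknown scheme: {scheme}")

# Hypothetical patch records; real entries would point at phase-map tiles.
patches = [
    {"path": "ifg_0001.npy", "year": 2021, "site": "Ein Gedi"},
    {"path": "ifg_0002.npy", "year": 2023, "site": "Mineral Beach"},
    {"path": "ifg_0003.npy", "year": 2022, "site": "Ein Gedi"},
]
train, test = split_patches(patches, "spatial")
```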

29 pages, 11786 KB  
Article
Reservoir Identification from Well-Logging Data Using a Focal Loss-Enhanced Convolutional Neural Network: A Case Study from the Chang 8 Formation, Ordos Basin
by Wenbo Li, Dongtao Li, Zhenkai Zhang, Zenglin Hong and Lingyi Liu
Processes 2026, 14(1), 157; https://doi.org/10.3390/pr14010157 - 2 Jan 2026
Viewed by 329
Abstract
Accurate reservoir identification from well-logging data is crucial for hydrocarbon exploration, yet challenges persist due to a series of factors, including limitations such as low efficiency and subjectivity of manual processing for massive datasets, as well as class imbalance and its impact on machine learning model training. This study develops an intelligent identification model using a Convolutional Neural Network (CNN) enhanced with Focal Loss, applied to real well-logging data from the Chang 8 Member of the Yanchang Formation in the Jiyuan Oilfield, Ordos Basin. A well-based data partitioning strategy is adopted to ensure the model’s generalization ability to new wells, avoiding the overoptimistic performance associated with random sample splitting. Experimental results demonstrate that the proposed model achieves an Accuracy of 84% and a Recall of 83% for oil-bearing layers. In comparison, the Random Forest model achieves a lower Recall of 56% for oil-bearing layers, and the CNN-LSTM model achieves 77%. The key influential well-logging parameters identified are bulk density (DEN), spontaneous potential (SP), true resistivity (RT), and natural gamma ray (GR). The findings confirm that the Focal Loss-enhanced CNN effectively mitigates class imbalance issues and provides a reliable, automated method for reservoir identification, offering significant practical value for the secondary interpretation of well logs in similar tight sandstone reservoirs. Full article
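The two ingredients this abstract leans on, the focal loss and the well-based split, can be sketched compactly. The loss below follows the standard binary focal-loss formulation and the split uses scikit-learn's generic grouped splitter; neither is the paper's own code, and the well IDs are placeholders.

```python
import torch
import torch.nn.functional as F
from sklearn.model_selection import GroupShuffleSplit

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: down-weights easy, well-classified samples so the
    minority (oil-bearing) class contributes more to the gradient.
    `targets` is a float tensor of 0/1 labels."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)            # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1.0 - p_t) ** gamma * bce).mean()

def well_based_split(samples, well_ids, test_size=0.2, seed=0):
    """Keep all samples of a given well on one side of the split, so the test
    score reflects generalization to unseen wells rather than to neighbouring
    depths of wells already seen in training."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=test_size, random_state=seed)
    train_idx, test_idx = next(splitter.split(samples, groups=well_ids))
    return train_idx, test_idx
```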

24 pages, 1035 KB  
Article
XT-Hypergraph-Based Decomposition and Implementation of Concurrent Control Systems Modeled by Petri Nets
by Łukasz Stefanowicz, Paweł Majdzik and Marcin Witczak
Appl. Sci. 2026, 16(1), 340; https://doi.org/10.3390/app16010340 - 29 Dec 2025
Viewed by 180
Abstract
This paper presents an integrated approach to the structural decomposition of concurrent control systems using exact transversal hypergraphs (XT-hypergraphs). The proposed method combines formal properties of XT-hypergraphs with invariant-based Petri net analysis to enable automatic partitioning of complex, concurrent specifications into deterministic and independent components. The approach focuses on preserving behavioral correctness while minimizing inter-component dependencies and computational complexity. By exploiting the uniqueness of minimal transversal covers, reducibility, and structural stability of XT-hypergraphs, the method achieves a deterministic decomposition process with polynomial-delay generation of exact transversals. The research provides practical insights into the construction, reduction, and classification of XT structures, together with quality metrics evaluating decomposition efficiency and structural compactness. The developed methodology is validated on representative real-world control and embedded systems, showing its applicability in deterministic modeling, analysis, and implementation of concurrent architectures. Future work includes the integration of XT-hypergraph algorithms with adaptive decomposition and verification frameworks to enhance scalability and automation in modern system design and integration with currently popular AI and machine learning methods. Full article
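For readers unfamiliar with the central notion, an exact transversal is a vertex set that meets every hyperedge in exactly one vertex. The brute-force sketch below illustrates the property on a toy hypergraph only; it is not the paper's polynomial-delay algorithm, and the place names and hyperedges are hypothetical.

```python
from itertools import combinations

def is_exact_transversal(vertices, edges):
    """True if the vertex set meets every hyperedge in exactly one vertex."""
    return all(len(edge & vertices) == 1 for edge in edges)

def exact_transversals(edges):
    """Enumerate all exact transversals by brute force (fine for the small
    invariant hypergraphs used here for illustration)."""
    universe = sorted(set().union(*edges))
    found = []
    for k in range(1, len(universe) + 1):
        for cand in combinations(universe, k):
            if is_exact_transversal(set(cand), edges):
                found.append(set(cand))
    return found

# Hypothetical place-invariant hypergraph: each hyperedge is a concurrency class.
edges = [{"p1", "p2"}, {"p2", "p3"}, {"p3", "p4"}]
print(exact_transversals(edges))   # e.g. {p1, p3} and {p2, p4}
```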

33 pages, 9875 KB  
Article
An Adaptive Optimization Method for Moored Buoy Site Selection Integrating Ontology Reasoning and Numerical Computation
by Miaomiao Song, Haihui Song, Shixuan Liu, Xiao Fu, Bin Miao, Wenqing Li, Keke Zhang, Wei Hu and Xingkun Yan
J. Mar. Sci. Eng. 2025, 13(12), 2401; https://doi.org/10.3390/jmse13122401 - 18 Dec 2025
Viewed by 213
Abstract
With the growing diversity and complexity of marine monitoring requirements, the scientific deployment of moored buoys has attracted increasing attention. To address the limitations of traditional methods—such as inconsistent knowledge representation, insufficient logical reasoning capacity, and poor adaptability to dynamic marine environments—this study proposes an adaptive optimization method for moored buoy site selection integrating ontology reasoning and numerical computation. The proposed approach constructs an ontology model covering key concepts such as buoy specifications, monitoring objectives, and deployment requirements, and further defines formalized reasoning rules to enable automated judgment of deployment feasibility, sensor configuration, and spatial conflict resolution for moored buoy siting. Based on this semantic framework, a spatio-temporal comprehensive variation index (STCVI) is established by integrating temperature, salinity, and current velocity to characterize dynamic oceanographic conditions. Furthermore, a coverage-first greedy algorithm is designed to determine buoy deployment locations, enabling dynamic optimization and environmental adaptability of the buoy station layout. To verify the feasibility and adaptability of the proposed method, simulation experiments are conducted in the Beibu Gulf. Two layout scenarios—an appending layout with existing buoys and an independent layout without existing buoys—are designed to test the method’s adaptability under different deployment conditions. By combining Voronoi spatial partitioning and nearest-neighbor distance analysis, the optimized results are quantitatively evaluated in terms of spatial uniformity and observational effectiveness. The results indicate that the proposed method effectively enhances the spatial rationality and monitoring efficiency of buoy deployment, demonstrating strong generality and scalability. Full article
(This article belongs to the Section Ocean Engineering)
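A coverage-first greedy selection of the kind described can be sketched as below; the grid, candidate sites, and variability weights (standing in for the STCVI field) are synthetic assumptions, not data from the Beibu Gulf case study.

```python
import numpy as np

def greedy_coverage(candidates, cells, weights, radius, n_buoys):
    """Coverage-first greedy selection: at each step, pick the candidate site
    whose coverage disc adds the largest variability-weighted amount of
    still-uncovered grid cells."""
    covered = np.zeros(len(cells), dtype=bool)
    chosen = []
    for _ in range(n_buoys):
        best, best_gain = None, -1.0
        for i, site in enumerate(candidates):
            if i in chosen:
                continue
            in_disc = np.linalg.norm(cells - site, axis=1) <= radius
            gain = weights[in_disc & ~covered].sum()
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
        covered |= np.linalg.norm(cells - candidates[best], axis=1) <= radius
    return chosen

# Synthetic stand-ins: grid cells (km), a variability weight per cell, and
# admissible buoy positions. Real inputs would come from the ontology filter
# and an STCVI field built from temperature, salinity and current velocity.
rng = np.random.default_rng(0)
cells = rng.random((500, 2)) * 100.0
weights = rng.random(500)
candidates = rng.random((40, 2)) * 100.0
print(greedy_coverage(candidates, cells, weights, radius=15.0, n_buoys=4))
```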

21 pages, 17034 KB  
Article
From CT Imaging to 3D Representations: Digital Modelling of Fibre-Reinforced Adhesives with Image-Based FEM
by Abdul Wasay Khan, Kaixin Xu, Nikolas Manousides and Claudio Balzani
Adhesives 2025, 1(4), 14; https://doi.org/10.3390/adhesives1040014 - 3 Dec 2025
Viewed by 349
Abstract
Short fibre-reinforced adhesives (SFRAs) are increasingly used in wind turbine blades to enhance stiffness and fatigue resistance, yet their heterogeneous microstructure poses significant challenges for predictive modelling. This study presents a fully automated digital workflow that integrates micro-computed tomography (µCT), image processing, and finite element modelling (FEM) to investigate the mechanical response of SFRAs. Our aim is also to establish a computational foundation for data-driven modelling and future AI surrogates of adhesive joints in wind turbine blades. High-resolution µCT scans were denoised and segmented using a hybrid non-local means and Gaussian filtering pipeline combined with Otsu thresholding and convex hull separation, enabling robust fibre identification and orientation analysis. Two complementary modelling strategies were employed: (i) 2D slice-based FEM models to rapidly assess microstructural effects on stress localisation and (ii) 3D voxel-based FEM models to capture the full anisotropic fibre network. Linear elastic simulations were conducted under inhomogeneous uniaxial extension and torsional loading, revealing interfacial stress hotspots at fibre tips and narrow ligaments. Fibre clustering and alignment strongly influenced stress partitioning between fibres and the matrix, while isotropic regions exhibited diffuse, matrix-dominated load transfer. The results demonstrate that image-based FEM provides a powerful route for structure–property modelling of SFRAs and establish a scalable foundation for digital twin development, reliability assessment, and integration with physics-informed surrogate modelling frameworks. Full article
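The denoising and thresholding front end of such a workflow can be approximated with scikit-image, as in the sketch below; the filter parameters and the toy volume are assumptions, and the convex-hull separation of touching fibres mentioned in the abstract is omitted.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma
from skimage.filters import gaussian, threshold_otsu
from scipy import ndimage

def segment_fibres(volume):
    """Hybrid denoising + Otsu segmentation sketch for a µCT volume:
    non-local means suppresses speckle, a light Gaussian smooths residual
    noise, Otsu separates fibre voxels from the matrix, and connected-component
    labelling yields individual fibre candidates."""
    sigma = float(np.mean(estimate_sigma(volume)))
    denoised = denoise_nl_means(volume, h=1.15 * sigma, fast_mode=True,
                                patch_size=3, patch_distance=3)
    smoothed = gaussian(denoised, sigma=0.8)
    mask = smoothed > threshold_otsu(smoothed)
    labels, n_fibres = ndimage.label(mask)
    return labels, n_fibres

# Toy stand-in for a reconstructed scan; real volumes are read from TIFF stacks.
volume = np.random.rand(64, 64, 64).astype(np.float32)
labels, n = segment_fibres(volume)
print(n, "connected components")
```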

28 pages, 4441 KB  
Article
Automated 3D Building Model Reconstruction from Satellite Images Using Two-Stage Polygon Decomposition and Adaptive Roof Fitting
by Shuting Yang, Hao Chen and Puxi Huang
Remote Sens. 2025, 17(23), 3832; https://doi.org/10.3390/rs17233832 - 27 Nov 2025
Viewed by 543
Abstract
Digital surface models (DSMs) derived from high-resolution satellite imagery often contain mismatches, voids, and coarse building geometry, limiting their suitability for accurate and standardized 3D reconstruction. The scarcity of finely annotated samples further constrains generalization to complex structures. To address these challenges, an automated building reconstruction method based on two-stage polygon decomposition and adaptive roof fitting is proposed. Building polygons are first extracted and standardized to preserve primary contours while improving geometric regularity. A two-stage decomposition is then applied. In the first stage, polygons are coarsely decomposed, and redundant rectangles are removed by analyzing containment relationships. In the second stage, non-flat regions are identified and further decomposed to accommodate complex building connections. For 3D model fitting, flat-roof buildings are reconstructed by integrating structural analysis of DSM elevation distributions with adaptive rooftop partitioning, which enables accurate modeling of complex flat structures with auxiliary components. For non-flat roofs, a representative parameter space is defined and explored through systematic search and optimization to obtain precise fits. Finally, intersecting primitives are normalized and optimally merged to ensure structural coherence and standardized representation. Experiments on the US3D, MVS3D, and Beijing-3 datasets demonstrate that the proposed method achieves higher geometric accuracy and more standardized models, with an average IOU3 of 91.26%, RMSE of 0.78 m, and MHE of 0.22 m. Full article
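The containment-based removal of redundant rectangles in the first decomposition stage can be illustrated with shapely; the L-shaped footprint and the tolerance below are toy assumptions, not the paper's standardized polygons.

```python
from shapely.geometry import box
from shapely.ops import unary_union

def drop_redundant_rectangles(rects, tol=1e-9):
    """Redundancy removal by containment: discard any rectangle whose area is
    already covered by the union of the remaining rectangles."""
    kept = list(rects)
    changed = True
    while changed and len(kept) > 1:
        changed = False
        for i, r in enumerate(kept):
            others = unary_union([q for j, q in enumerate(kept) if j != i])
            if r.difference(others).area < tol:   # r contributes no new area
                kept.pop(i)
                changed = True
                break
    return kept

# Hypothetical coarse decomposition of an L-shaped footprint; the small
# rectangle is fully contained in the first one and gets removed.
rects = [box(0, 0, 10, 4), box(0, 0, 4, 10), box(1, 1, 3, 3)]
print(len(drop_redundant_rectangles(rects)))   # -> 2
```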

31 pages, 9718 KB  
Article
Beyond “One-Size-Fits-All”: Estimating Driver Attention with Physiological Clustering and LSTM Models
by Juan Camilo Peña, Evelyn Vásquez, Guiselle A. Feo-Cediel, Alanis Negroni and Juan Felipe Medina-Lee
Electronics 2025, 14(23), 4655; https://doi.org/10.3390/electronics14234655 - 26 Nov 2025
Viewed by 462
Abstract
In the dynamic and complex environment of highly automated vehicles, ensuring driver safety is the most critical task. While automation promises to reduce human error, the driver’s role is shifting to that of a teammate who must remain vigilant and ready to intervene, making it essential to monitor their attention level. However, a significant challenge in this domain is the considerable inter-individual variability in how people physiologically respond to cognitive states, such as distraction. This study addresses this by developing a methodology that first groups drivers into distinct physiology-based clusters before training a predictive model. The study was conducted in a high-fidelity driving simulator, where multimodal data streams, including heart rate variability and electrodermal activity, were collected from 30 participants during conditional-automated driving experiments. Using a time-series k-means clustering algorithm, the researchers successfully partitioned the drivers into clusters based on their physiological and behavioral patterns, which did not correlate with demographic factors. Then, a Long Short-Term Memory model was trained for each cluster, which achieved similar predictive performance compared to a single, generalized model. This finding demonstrates that a personalized, cluster-based approach is feasible for physiology-based driver monitoring, providing a robust and replicable solution for developing accurate and reliable attention estimation systems. Full article
(This article belongs to the Special Issue Wearable Sensors for Human Position, Attitude and Motion Tracking)
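A time-series k-means step of this kind could look like the tslearn sketch below; the array shapes, channel choice, and number of clusters are illustrative assumptions rather than the study's configuration.

```python
import numpy as np
from tslearn.clustering import TimeSeriesKMeans
from tslearn.preprocessing import TimeSeriesScalerMeanVariance

# Hypothetical multivariate physiological windows: 30 drivers, 120 time steps,
# 2 channels (a heart-rate-variability feature and electrodermal activity).
X = np.random.rand(30, 120, 2)

X = TimeSeriesScalerMeanVariance().fit_transform(X)   # per-series z-normalisation
km = TimeSeriesKMeans(n_clusters=3, metric="dtw", random_state=0)
cluster_id = km.fit_predict(X)

# Downstream, one LSTM attention model would be trained per cluster.
print(np.bincount(cluster_id))
```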

22 pages, 6748 KB  
Article
Automated 3D Reconstruction of Interior Structures from Unstructured Point Clouds
by Youssef Hany, Wael Ahmed, Adel Elshazly, Ahmad M. Senousi and Walid Darwish
ISPRS Int. J. Geo-Inf. 2025, 14(11), 428; https://doi.org/10.3390/ijgi14110428 - 31 Oct 2025
Viewed by 1374
Abstract
The automatic reconstruction of existing buildings has gained momentum through the integration of Building Information Modeling (BIM) into architecture, engineering, and construction (AEC) workflows. This study presents a hybrid methodology that combines deep learning with surface-based techniques to automate the generation of 3D models and 2D floor plans from unstructured indoor point clouds. The approach begins with point cloud preprocessing using voxel-based downsampling and robust statistical outlier removal. Room partitions are extracted via DBSCAN applied in the 2D space, followed by structural segmentation using the RandLA-Net deep learning model to classify key building components such as walls, floors, ceilings, columns, doors, and windows. To enhance segmentation fidelity, a density-based filtering technique is employed, and RANSAC is utilized to detect and fit planar primitives representing major surfaces. Wall-surface openings such as doors and windows are identified through local histogram analysis and interpolation in wall-aligned coordinate systems. The method supports complex indoor environments including Manhattan and non-Manhattan layouts, variable ceiling heights, and cluttered scenes with occlusions. The approach was validated using six datasets with varying architectural characteristics, and evaluated using completeness, correctness, and accuracy metrics. Results show a minimum completeness of 86.6%, correctness of 84.8%, and a maximum geometric error of 9.6 cm, demonstrating the robustness and generalizability of the proposed pipeline for automated as-built BIM reconstruction. Full article
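The room-partition step (DBSCAN applied in the 2D projection) can be sketched as follows; the synthetic point clouds and the eps/min_samples values are toy assumptions and would need retuning for real scans.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def split_rooms(points, eps=1.0, min_samples=20):
    """Room partitioning sketch: project the indoor point cloud onto the XY
    plane and cluster it with DBSCAN, so each dense 2D cluster becomes one
    room candidate (label -1 is treated as clutter/noise). Real scans are far
    denser, so eps there is typically on the order of 0.1-0.3 m."""
    xy = points[:, :2]
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(xy)
    return {lab: points[labels == lab] for lab in set(labels) if lab != -1}

# Two synthetic, well-separated "rooms" of random points (x, y, z in metres).
rng = np.random.default_rng(1)
room_a = rng.random((500, 3)) * [4.0, 4.0, 2.8]
room_b = rng.random((500, 3)) * [4.0, 4.0, 2.8] + [8.0, 0.0, 0.0]
rooms = split_rooms(np.vstack([room_a, room_b]))
print(len(rooms), "rooms detected")   # expected: 2
```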

16 pages, 2331 KB  
Article
Development of an Automated Multistage Countercurrent Extraction System and Its Application in the Extraction of Phenolic Acids
by Yuxuan Feng, Qinglin Wang, Guanglei Zuo and Xingchu Gong
Separations 2025, 12(11), 291; https://doi.org/10.3390/separations12110291 - 23 Oct 2025
Cited by 1 | Viewed by 613
Abstract
This study developed an automated multistage countercurrent extraction device and applied it to the separation and extraction of phenolic acids—including neochlorogenic acid, chlorogenic acid, cryptochlorogenic acid, isochlorogenic acid A, isochlorogenic acid B, and isochlorogenic acid C—from an aqueous extract of Lonicera japonica Thunb. The extraction process was optimized by systematically evaluating critical parameters such as liquid–liquid equilibrium pH, internal diameter of the tee connector, phase flow rate ratio, and the number of extraction stages. The apparent partition coefficients of all six phenolic acids increased with decreasing aqueous pH, with fitted pKa values ranging from 3.7 to 4.3. A reduction in tee diameter (0.75 mm) was found to enhance mass transfer efficiency. Increasing the flowrate of both phases (20 mL/min), the organic-to-aqueous phase ratio (4:1), and the number of extraction stages (3 stages) significantly improved both stage efficiency and overall extraction yield. Under optimized conditions, the target chlorogenic acids were efficiently enriched, with their total content increasing from 50.3 mg/g to 70.1 mg/g in the solid residue after three countercurrent stages. The automated multistage countercurrent extraction system demonstrated robust performance, suggesting promising potential for applications in the preparation of traditional Chinese medicine ingredients or as an automated sample pretreatment method in analytical workflows. This study provides a novel and green technological solution for efficient separation of complex TCM systems. Full article
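The pH dependence reported for the apparent partition coefficients follows the usual weak-acid relationship D = P / (1 + 10^(pH − pKa)), which can be fitted as in this sketch; the data points are invented for illustration, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def apparent_d(ph, p_neutral, pka):
    """Apparent partition coefficient of a weak acid: only the neutral form
    partitions into the organic phase, so D = P / (1 + 10**(pH - pKa))."""
    return p_neutral / (1.0 + 10.0 ** (ph - pka))

# Hypothetical measurements for one phenolic acid (not the paper's data).
ph = np.array([2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
d_obs = np.array([8.1, 7.4, 5.9, 3.8, 1.9, 0.8])

(p_fit, pka_fit), _ = curve_fit(apparent_d, ph, d_obs, p0=[8.0, 4.0])
print(f"P = {p_fit:.2f}, pKa = {pka_fit:.2f}")   # pKa lands near 4, consistent
```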

31 pages, 3570 KB  
Article
Optimization of the Human–Robot Collaborative Disassembly Process Using a Genetic Algorithm: Application to the Reconditioning of Electric Vehicle Batteries
by Salma Nabli, Gilde Vanel Tchane Djogdom and Martin J.-D. Otis
Designs 2025, 9(5), 122; https://doi.org/10.3390/designs9050122 - 17 Oct 2025
Viewed by 2749
Abstract
To achieve a complete circular economy for used electric vehicle batteries, it is essential to implement a disassembly step. Given the significant diversity of battery geometries and designs, a high degree of flexibility is required for automated disassembly processes. The incorporation of human–robot interaction provides a valuable degree of flexibility in the process workflow. However, human behavior is characterized by unpredictable timing and variable task durations, which add considerable complexity to process planning. Therefore, it is crucial to develop a robust strategy for coordinating human and robotic tasks to manage the scheduling of production activities efficiently. This study proposes a global optimization approach to the scheduling of production activities, which employs a genetic algorithm with the objective of minimizing the total production time while simultaneously reducing the idle time of both the human operator and robot. The proposed approach is concerned with optimizing the sequencing of disassembly tasks, considering both temporal and exclusion constraints, to guarantee that tasks deemed hazardous are not executed in the presence of a human. This approach is based on a two-level adaptation framework developed in RoboDK (Robot Development Kit, v5.4.3.22231, 2022, RoboDK Inc., Montréal, QC Canada). At the first level, offline optimization is performed using a genetic algorithm to determine the optimal task sequencing strategy. This stage anticipates human behavior by proposing disassembly sequences aligned with expected human availability. At the second level, an online reactive adjustment refines the plan in real time, adapting it to actual human interventions and compensating for deviations from initial forecasts. The effectiveness of this global optimization strategy is evaluated against a non-global approach, in which the problem is partitioned into independent subproblems solved separately and then integrated. The results demonstrate the efficacy of the proposed approach in comparison with a non-global approach, particularly in scenarios where humans arrive earlier than anticipated. Full article
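A minimal permutation genetic algorithm for this kind of sequencing problem is sketched below; the task list, durations, and precedence graph are invented, and the online adaptation layer and exclusion (safety) constraints described in the abstract are left out.

```python
import random

# Hypothetical disassembly tasks: name -> (agent, duration in s, predecessors).
TASKS = {
    "unbolt_cover":  ("robot", 40, []),
    "lift_cover":    ("human", 25, ["unbolt_cover"]),
    "cut_cables":    ("human", 30, ["lift_cover"]),
    "unscrew_mod_1": ("robot", 50, ["lift_cover"]),
    "unscrew_mod_2": ("robot", 50, ["lift_cover"]),
    "extract_cells": ("human", 60, ["cut_cables", "unscrew_mod_1"]),
}
NAMES = list(TASKS)

def makespan(order):
    """Schedule tasks in the given priority order: a task starts when its agent
    is free and all predecessors are done; orders that violate precedence are
    heavily penalised."""
    agent_free = {"human": 0.0, "robot": 0.0}
    finish = {}
    for name in order:
        agent, dur, preds = TASKS[name]
        if any(p not in finish for p in preds):
            return 1e6
        start = max([agent_free[agent]] + [finish[p] for p in preds])
        finish[name] = start + dur
        agent_free[agent] = finish[name]
    return max(finish.values())

def crossover(a, b):
    """Order crossover: keep a slice of parent a, fill the rest in b's order."""
    i, j = sorted(random.sample(range(len(a)), 2))
    middle = a[i:j]
    rest = [x for x in b if x not in middle]
    return rest[:i] + middle + rest[i:]

def evolve(pop_size=40, generations=150, p_mut=0.3):
    pop = [random.sample(NAMES, len(NAMES)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)
        next_pop = pop[:4]                       # elitism
        while len(next_pop) < pop_size:
            a, b = random.sample(pop[:20], 2)    # parents from the better half
            child = crossover(a, b)
            if random.random() < p_mut:
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            next_pop.append(child)
        pop = next_pop
    best = min(pop, key=makespan)
    return best, makespan(best)

order, total_time = evolve()
print(total_time, order)
```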

20 pages, 23077 KB  
Article
An Integrated Experimental System for Unmanned Underwater Vehicle Swarm Control
by Yutao Chen, Xingwei Zhou, Wenshan Hu and Bo Zhao
Sensors 2025, 25(20), 6413; https://doi.org/10.3390/s25206413 - 17 Oct 2025
Cited by 1 | Viewed by 663
Abstract
Unmanned Underwater Vehicle (UUV) swarms have become increasingly crucial for underwater exploration and applications, where their coordinated operation offers significant advantages over single-vehicle systems. However, unlike single-vehicle systems, the development of swarm control systems is more complicated, especially because there are limited integrated toolchains that can cover both global scheme design and individual vehicle implementation. Engineers may have to develop a global scheme and then partition it manually for individual vehicle implementation, which can result in substantial efficiency losses. To address this difficulty, an integrated experimental framework is developed to support the complete workflow of UUV swarm control development, from unified algorithm design and system simulation to automated code generation and individual deployment. The architecture of the proposed platform incorporates three principal elements: a global simulation environment that enables virtual validation of swarm collective behavior, a rapid prototyping module that facilitates code generation/partitioning and individual implementation, and a digital twin visualization component that provides real-time monitoring capabilities. A case study demonstrates that the platform can integrate global design with individual implementation. In a comparative experiment where the same engineering team implemented a three-UUV formation control algorithm, the use of our platform reduced the time from algorithm design to successful deployment from an estimated 6 h (using manual coding and integration) to under one hour, representing about an 80% reduction in development time. Full article
(This article belongs to the Section Communications)
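To illustrate the global-design-to-per-vehicle-deployment idea, here is a toy formation-control sketch in which the same law appears once at swarm level and once partitioned to a single vehicle; the offsets, gain, and goal are arbitrary assumptions, and the actual platform generates such per-vehicle code automatically.

```python
import numpy as np

def global_formation_step(positions, offsets, leader_goal, gain=0.4):
    """Global-level design: every UUV steers toward the leader goal plus its
    own formation offset."""
    targets = leader_goal + offsets
    return positions + gain * (targets - positions)

def vehicle_step(i, own_position, offsets, leader_goal, gain=0.4):
    """Per-vehicle view after 'partitioning': the same law for one index."""
    return own_position + gain * (leader_goal + offsets[i] - own_position)

positions = np.array([[0.0, 0.0], [2.0, 1.0], [-1.0, 3.0]])   # three UUVs
offsets = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])      # triangle formation
goal = np.array([20.0, 10.0])
for _ in range(30):
    positions = global_formation_step(positions, offsets, goal)
print(np.round(positions, 2))
print(np.round(vehicle_step(1, positions[1], offsets, goal), 2))
```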

13 pages, 1389 KB  
Article
Could ChatGPT Automate Water Network Clustering? A Performance Assessment Across Algorithms
by Ludovica Palma, Enrico Creaco, Michele Iervolino, Davide Marocco, Giovanni Francesco Santonastaso and Armando Di Nardo
Water 2025, 17(20), 2995; https://doi.org/10.3390/w17202995 - 17 Oct 2025
Viewed by 626
Abstract
Water distribution networks (WDNs) are characterized by complex challenges in management and optimization, especially in ensuring efficiency, reducing losses, and maintaining infrastructure performances. The recent advancements in Artificial Intelligence (AI) techniques based on Large Language Models, particularly ChatGPT 4.0 (a chatbot based on a generative pre-trained model), offer potential solutions to streamline these processes. This study investigates the ability of ChatGPT to perform the clustering phase of WDN partitioning, a critical step for dividing large networks into manageable clusters. Using a real Italian network as a case study, ChatGPT was prompted to apply several clustering algorithms, including k-means, spectral, and hierarchical clustering. The results show that ChatGPT uniquely adds value by automating the entire workflow of WDN clustering—from reading input files and running algorithms to calculating performance indices and generating reports. This makes advanced water network partitioning accessible to users without programming or hydraulic modeling expertise. The study highlights ChatGPT’s role as a complementary tool: it accelerates repetitive tasks, supports decision-making with interpretable outputs, and lowers the entry barrier for utilities and practitioners. These findings demonstrate the practical potential of integrating large language models into water management, where they can democratize specialized methodologies and facilitate wider adoption of WDN managing strategies. Full article
(This article belongs to the Section Hydraulics and Hydrodynamics)
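The clustering phase itself takes only a few lines with standard libraries, as the sketch below shows for spectral clustering on a toy graph; a real water distribution network would be read from an EPANET model, and the boundary-pipe count is just one simple quality index, not the paper's full evaluation.

```python
import networkx as nx
import numpy as np
from sklearn.cluster import SpectralClustering

# Toy stand-in for a WDN topology (nodes = junctions, edges = pipes).
G = nx.grid_2d_graph(6, 6)
G = nx.convert_node_labels_to_integers(G)

adjacency = nx.to_numpy_array(G)
labels = SpectralClustering(n_clusters=4, affinity="precomputed",
                            random_state=0).fit_predict(adjacency)

# Simple quality index: pipes that would need gating/metering because they
# connect two different clusters (boundary edges).
boundary = sum(labels[u] != labels[v] for u, v in G.edges())
print(boundary, "boundary pipes out of", G.number_of_edges())
```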

38 pages, 13748 KB  
Article
MH-WMG: A Multi-Head Wavelet-Based MobileNet with Gated Linear Attention for Power Grid Fault Diagnosis
by Yousef Alkhanafseh, Tahir Cetin Akinci, Alfredo A. Martinez-Morales, Serhat Seker and Sami Ekici
Appl. Sci. 2025, 15(20), 10878; https://doi.org/10.3390/app152010878 - 10 Oct 2025
Viewed by 858
Abstract
Artificial intelligence is increasingly embedded in power systems to boost efficiency, reliability, and automation. This study introduces an end-to-end, AI-driven fault-diagnosis pipeline built around a Multi-Head Wavelet-based MobileNet with Gated Linear Attention (MH-WMG). The network takes time-series signals converted into images as input and branches into three heads that, respectively, localize the fault area, classify the fault type, and predict the distance bin for all short-circuit faults. Evaluation employs the canonical Kundur two-area four-machine system, partitioned into six regions, twelve fault scenarios (including normal operation), and twelve predefined distance bins. MH-WMG achieves high performance: perfect accuracy, precision, recall, and F1 (1.00) for fault-area detection; strong fault-type identification (accuracy = 0.9604, precision = 0.9625, recall = 0.9604, and F1 = 0.9601); and robust distance-bin prediction (accuracy = 0.8679, precision = 0.8725, recall = 0.8679, and F1 = 0.8690). The model is compact and fast (2.33 M parameters, 44.14 ms latency, 22.66 images/s) and outperforms baselines in both accuracy and efficiency. The pipeline decisively outperforms conventional time-series methods. By rapidly pinpointing and classifying faults with high fidelity, it enhances grid resilience, reduces operational risk, and enables more stable, intelligent operation, demonstrating the value of AI-driven fault detection for future power-system reliability. Full article
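The time-series-to-image front end can be approximated with a continuous wavelet transform, as in this sketch; the synthetic fault waveform, sampling rate, and Morlet wavelet choice are assumptions, not the paper's preprocessing.

```python
import numpy as np
import pywt

def to_scalogram(signal, fs, scales=np.arange(1, 65)):
    """Convert a 1-D fault current/voltage record into a 2-D wavelet scalogram
    (|CWT| image) that a MobileNet-style backbone can ingest."""
    coeffs, _ = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)
    image = np.abs(coeffs)
    return (image - image.min()) / (image.max() - image.min() + 1e-9)

# Hypothetical post-fault bus voltage: 50 Hz carrier plus a decaying transient.
fs = 3200
t = np.arange(0, 0.2, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.6 * np.exp(-t / 0.03) * np.sin(2 * np.pi * 600 * t)
img = to_scalogram(signal, fs)
print(img.shape)   # (64, 640): scales x time samples
```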

15 pages, 1797 KB  
Article
Exploring AI’s Potential in Papilledema Diagnosis to Support Dermatological Treatment Decisions in Rural Healthcare
by Jonathan Shapiro, Mor Atlas, Naomi Fridman, Itay Cohen, Ziad Khamaysi, Mahdi Awwad, Naomi Silverstein, Tom Kozlovsky and Idit Maharshak
Diagnostics 2025, 15(19), 2547; https://doi.org/10.3390/diagnostics15192547 - 9 Oct 2025
Viewed by 934
Abstract
Background: Papilledema, an ophthalmic finding associated with increased intracranial pressure, is often induced by dermatological medications, including corticosteroids, isotretinoin, and tetracyclines. Early detection is crucial for preventing irreversible optic nerve damage, but access to ophthalmologic expertise is often limited in rural settings. Artificial intelligence (AI) may enable the automated and accurate detection of papilledema from fundus images, thereby supporting timely diagnosis and management. Objective: The primary objective of this study was to explore the diagnostic capability of ChatGPT-4o, a general large language model with multimodal input, in identifying papilledema from fundus photographs. For context, its performance was compared with a ResNet-based convolutional neural network (CNN) specifically fine-tuned for ophthalmic imaging, as well as with the assessments of two human ophthalmologists. The focus was on applications relevant to dermatological care in resource-limited environments. Methods: A dataset of 1094 fundus images (295 papilledema, 799 normal) was preprocessed and partitioned into a training set and a test set. The ResNet model was fine-tuned using discriminative learning rates and a one-cycle learning rate policy. GPT-4o and two human evaluators (a senior ophthalmologist and an ophthalmology resident) independently assessed the test images. Diagnostic metrics including sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), accuracy, and Cohen’s Kappa, were calculated for each evaluator. Results: GPT-4o, when applied to papilledema detection, achieved an overall accuracy of 85.9% with substantial agreement beyond chance (Cohen’s Kappa = 0.72), but lower specificity (78.9%) and positive predictive value (73.7%) compared to benchmark models. For context, the ResNet model, fine-tuned for ophthalmic imaging, reached near-perfect accuracy (99.5%, Kappa = 0.99), while two human ophthalmologists achieved accuracies of 96.0% (Kappa ≈ 0.92). Conclusions: This study explored the capability of GPT-4o, a large language model with multimodal input, for detecting papilledema from fundus photographs. GPT-4o achieved moderate diagnostic accuracy and substantial agreement with the ground truth, but it underperformed compared to both a domain-specific ResNet model and human ophthalmologists. These findings underscore the distinction between generalist large language models and specialized diagnostic AI: while GPT-4o is not optimized for ophthalmic imaging, its accessibility, adaptability, and rapid evolution highlight its potential as a future adjunct in clinical screening, particularly in underserved settings. These findings also underscore the need for validation on external datasets and real-world clinical environments before such tools can be broadly implemented. Full article
(This article belongs to the Special Issue AI in Dermatology)
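The evaluator-level metrics reported here (sensitivity, specificity, PPV, NPV, accuracy, Cohen's kappa) can all be derived from a single confusion matrix, as in this sketch with synthetic labels; the numbers quoted in the abstract come from the paper's own test set.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score, accuracy_score

def diagnostic_metrics(y_true, y_pred):
    """Binary papilledema (1) vs normal (0) decision metrics for one evaluator."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": accuracy_score(y_true, y_pred),
        "kappa": cohen_kappa_score(y_true, y_pred),
    }

# Toy labels with roughly 85% agreement, for illustration only.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)
y_pred = np.where(rng.random(200) < 0.85, y_true, 1 - y_true)
print(diagnostic_metrics(y_true, y_pred))
```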

23 pages, 2788 KB  
Article
Green Cores as Architectural and Environmental Anchors: A Performance-Based Framework for Residential Refurbishment in Novi Sad, Serbia
by Marko Mihajlovic, Jelena Atanackovic Jelicic and Milan Rapaic
Sustainability 2025, 17(19), 8864; https://doi.org/10.3390/su17198864 - 3 Oct 2025
Viewed by 936
Abstract
This research investigates the integration of green cores as central biophilic elements in residential architecture, proposing a climate-responsive design methodology grounded in architectural optimization. The study begins with the full-scale refurbishment of a compact urban apartment, wherein interior partitions, fenestration and material systems were reconfigured to embed vegetated zones within the architectural core. Light exposure, ventilation potential and spatial coherence were maximized through data-driven design strategies and structural modifications. Integrated planting modules equipped with PAR-specific LED systems ensure sustained vegetation growth, while embedded environmental infrastructure supports automated irrigation and continuous microclimate monitoring. This plant-centered spatial model is evaluated using quantifiable performance metrics, establishing a replicable framework for optimized indoor ecosystems. Photosynthetically active radiation (PAR)-specific LED systems and embedded environmental infrastructure were incorporated to maintain vegetation viability and enable microclimate regulation. A programmable irrigation system linked to environmental sensors allows automated resource management, ensuring efficient plant sustenance. The configuration is assessed using measurable indicators such as daylight factor, solar exposure, passive thermal behavior and similar elements. Additionally, a post-occupancy expert assessment was conducted with several architects evaluating different aspects confirming the architectural and spatial improvements achieved through the refurbishment. This study not only demonstrates a viable architectural prototype but also opens future avenues for the development of metabolically active buildings, integration with decentralized energy and water systems, and the computational optimization of living infrastructure across varying climatic zones. Full article
(This article belongs to the Special Issue Advances in Ecosystem Services and Urban Sustainability, 2nd Edition)
