Search Results (1,653)

Search Parameters:
Keywords = nature_architecture model

27 pages, 5802 KB  
Article
Integrating Land-Use Modeling with Coastal Landscape Interventions: A Framework for Climate Adaptation Planning in Dalian, China
by Bo Pang and Brian Deal
Sustainability 2026, 18(1), 370; https://doi.org/10.3390/su18010370 - 30 Dec 2025
Abstract
Coastal cities face escalating flood risk under sea-level rise, yet landscape-based adaptation strategies often remain speculative and weakly connected to the accessibility and economic constraints that shape sustainable urban development. This study developed a modeling-to-design framework that translates coupled climate and land-use projections into implementable landscape interventions, through planning-level spatial allocation, using Dalian, China as a case study under “middle of the road” (SSP2-4.5) climate conditions. The framework integrates the Land-use Evolution and Assessment Model (LEAM) with connected-bathtub flood modeling to evaluate whether strategic landscape design can redirect development away from flood-prone zones while accommodating projected growth and maintaining accessibility to employment and services. Interventions—protective wetland restoration (810 km2) and blue–green corridors (8 km2)—derived from a meta-synthesis of implemented coastal projects were operationalized as LEAM spatial constraints. Our results show that residential development can be redirected away from coastal risk with 100% demand satisfaction and elimination of moderate-risk allocations. Cropland demand was fully accommodated. In contrast, commercial development experienced 99.8% reduction under strict coastal protection, reflecting locational dependence on port-adjacent sites. This modeling-to-design framework offers a transferable approach to quantifying where landscape interventions succeed, where they face barriers, and where complementary measures are required, supporting decision-making that balances environmental protection, economic function, and social accessibility in sustainable coastal development. Full article
(This article belongs to the Special Issue Socially Sustainable Urban and Architectural Design)
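The "connected-bathtub" flood model coupled to LEAM in this framework flags cells below the projected water level only when they are hydraulically connected to the sea, avoiding the isolated inland "puddles" of the plain bathtub method. The sketch below is a hedged, toy illustration of that idea; the DEM values, sea mask, and water level are invented assumptions, not the study's data or code.

```python
# Hedged sketch of a connected-bathtub flood criterion: flood cells that are
# both below the water level and connected to open water. Toy values only.
import numpy as np
from scipy.ndimage import label

def connected_bathtub(dem: np.ndarray, sea_mask: np.ndarray, water_level: float):
    below = dem <= water_level                      # plain bathtub criterion
    regions, _ = label(below)                       # connected components of low cells
    coastal_ids = np.unique(regions[sea_mask & below])
    coastal_ids = coastal_ids[coastal_ids != 0]
    return np.isin(regions, coastal_ids)            # keep only sea-connected cells

dem = np.array([[0.0, 0.5, 2.0, 0.4],
                [0.2, 1.8, 2.2, 0.3],
                [0.1, 1.5, 2.5, 0.2]])
sea = np.zeros_like(dem, dtype=bool)
sea[:, 0] = True                                    # assume the left column is open water
print(connected_bathtub(dem, sea, water_level=0.6))
# The right-hand low cells stay dry: below the level but cut off from the sea.
```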

15 pages, 4385 KB  
Article
A New Approach to Palaeontological Exhibition in Public Space: Revitalizing Disappearing Knowledge of Extinct Species
by Anna Chrobak-Žuffová, Marta Bąk, Agnieszka Ciurej, Piotr Strzeboński, Ewa Welc, Sławomir Bębenek, Anna Wolska, Karol Augustowski and Krzysztof Bąk
Resources 2026, 15(1), 7; https://doi.org/10.3390/resources15010007 - 29 Dec 2025
Abstract
This paper presents an innovative concept for the musealization of everyday public space through the use of natural stone cladding as an in situ palaeontological exhibition. Polished slabs of Holy Cross Mts marble, widely used as flooring in public buildings, contain abundant and well-preserved Devonian marine fossils, offering a unique opportunity to revitalize public engagement with palaeontology and geoheritage. The proposed exhibition transforms passers-by into active observers by integrating authentic fossil material directly into daily circulation routes, thereby emphasizing the educational and geotouristic potential of ordinary architectural elements. The case study focuses on the main hall of the University of the National Education Commission (Kraków, Poland), where over 1000 m2 of fossil-bearing limestone flooring is used as a continuous exhibition surface. The target audience includes students of Earth sciences, zoology, biological sciences, pedagogy, social sciences, and humanities, for whom the exhibition serves as both an educational supplement and a geotouristic experience. The scientific, educational, and touristic value of the proposed exhibition was assessed using a modified geoheritage valorization method and compared with established palaeontological collections in Kraków and Kielce. The expert valuation method used in the article enables a comparison of the described collection with other similar places on Earth, making its application universal and global. The results demonstrate that polished stone cladding can function as a valuable geoheritage asset of regional and global significance, offering an accessible, low-cost, and sustainable model for disseminating palaeontological knowledge within public space. Full article

21 pages, 571 KB  
Review
Hydrogels for Osteochondral Interface Regeneration: Biomaterial Types, Processes, and Animal Models
by Sanazar Kadyr, Bakhytbol Khumyrzakh, Swera Naz, Albina Abdossova, Bota Askarbek, Dilhan M. Kalyon, Zhe Liu and Cevat Erisken
Gels 2026, 12(1), 24; https://doi.org/10.3390/gels12010024 - 27 Dec 2025
Abstract
The osteochondral interface (OCI) is a structurally and functionally complex tissue whose degeneration or injury often results in poor healing and joint dysfunction due to its avascular and hypocellular nature. Conventional surgical treatments remain suboptimal, prompting growing interest in regenerative approaches, particularly with the utilization of hydrogel-based biomaterials that can mimic the extracellular matrix and support osteochondral regeneration. This study reviewed types of hydrogels, scaffold processing techniques, and animal models for OCI regeneration. Our search demonstrated that gelatin, alginate, chitosan, and hyaluronic acid were the most frequently investigated hydrogels. Layered constructs dominated current scaffold designs, while advanced methods such as 3D printing and extrusion demonstrated unique potential to create graded architectures resembling the native OCI. Rabbits were the most widely used in vivo models, though translation will require larger animal studies with clinically relevant defect sizes. Future efforts should focus on developing mechanically reinforced, biologically active, and continuously graded hydrogels, supported by standardized preclinical validation in large-animal models, to accelerate translation toward clinical solutions for osteochondral regeneration. Full article

14 pages, 61684 KB  
Article
A CMOS-Compatible Silicon Nanowire Array Natural Light Photodetector with On-Chip Temperature Compensation Using a PSO-BP Neural Network
by Mingbin Liu, Xin Chen, Jiaye Zeng, Jintao Yi, Wenhe Liu, Xinjian Qu, Junsong Zhang, Haiyan Liu, Chaoran Liu, Xun Yang and Kai Huang
Micromachines 2026, 17(1), 23; https://doi.org/10.3390/mi17010023 - 25 Dec 2025
Abstract
Silicon nanowire (SiNW) photodetectors exhibit high sensitivity for natural light detection but suffer from significant performance degradation due to thermal interference. To overcome this limitation, this paper presents a high-performance, CMOS-compatible SiNW array natural light photodetector with monolithic integration of an on-chip temperature sensor and an embedded intelligent compensation system. The device, fabricated via microfabrication techniques, features a dual-array architecture that enables simultaneous acquisition of optical and thermal signals, thereby simplifying peripheral circuitry. To achieve high-precision decoupling of the optical and thermal signals, we propose a hybrid temperature compensation algorithm that combines Particle Swarm Optimization (PSO) with a Back Propagation (BP) neural network. The PSO algorithm optimizes the initial weights and thresholds of the BP network, effectively preventing the network from getting trapped in local minima and accelerating the training process. Experimental results demonstrate that the proposed PSO-BP model achieves superior compensation accuracy and a significantly faster convergence rate compared to the traditional BP network. Furthermore, the optimized model was successfully implemented on an STM32 microcontroller. This embedded implementation validates the feasibility of real-time, high-accuracy temperature compensation, significantly enhancing the stability and reliability of the photodetector across a wide temperature range. This work provides a viable strategy for developing highly stable and integrated optical sensing systems. Full article
(This article belongs to the Special Issue Emerging Trends in Optoelectronic Device Engineering, 2nd Edition)
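The PSO-BP compensation scheme described above pairs a global particle-swarm search for the network's initial weights with ordinary backpropagation for fine-tuning, so the gradient stage starts away from poor local minima. The sketch below illustrates that two-stage idea on a toy signal-plus-temperature dataset; the network size, PSO hyperparameters, and the assumed thermal-drift law are placeholders, not the authors' implementation.

```python
# Hypothetical sketch of PSO-initialized backpropagation (PSO-BP): PSO searches
# for good initial weights of a small MLP, then plain gradient descent fine-tunes
# them. All shapes, ranges, and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: photodetector raw output + temperature -> compensated reading.
X = rng.uniform(-1, 1, size=(200, 2))                # [raw_signal, temperature]
y = (X[:, 0] - 0.3 * X[:, 1] ** 2).reshape(-1, 1)    # assumed thermal drift law

H = 8                                                # hidden units
n_w = 2 * H + H + H * 1 + 1                          # total weights + biases

def unpack(v):
    i = 0
    W1 = v[i:i + 2 * H].reshape(2, H); i += 2 * H
    b1 = v[i:i + H]; i += H
    W2 = v[i:i + H].reshape(H, 1); i += H
    b2 = v[i:i + 1]
    return W1, b1, W2, b2

def forward(v, X):
    W1, b1, W2, b2 = unpack(v)
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2, h

def mse(v):
    pred, _ = forward(v, X)
    return float(np.mean((pred - y) ** 2))

# --- PSO stage: global search for initial weights ---
n_particles, iters = 30, 100
pos = rng.uniform(-1, 1, size=(n_particles, n_w))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((n_particles, n_w)), rng.random((n_particles, n_w))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

# --- BP stage: gradient descent starting from the PSO-found weights ---
v, lr = gbest.copy(), 0.05
for _ in range(2000):
    W1, b1, W2, b2 = unpack(v)
    pred, h = forward(v, X)
    err = 2 * (pred - y) / len(X)                    # d(MSE)/d(pred)
    gW2 = h.T @ err; gb2 = err.sum(0)
    dh = (err @ W2.T) * (1 - h ** 2)                 # tanh derivative
    gW1 = X.T @ dh; gb1 = dh.sum(0)
    v -= lr * np.concatenate([gW1.ravel(), gb1, gW2.ravel(), gb2])
print("final MSE:", mse(v))
```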

22 pages, 2056 KB  
Article
Valorization of Lemon, Apple, and Tangerine Peels and Onion Skins–Artificial Neural Networks Approach
by Biljana Lončar, Aleksandra Cvetanović Kljakić, Jelena Arsenijević, Mirjana Petronijević, Sanja Panić, Svetlana Đogo Mračević and Slavica Ražić
Separations 2026, 13(1), 9; https://doi.org/10.3390/separations13010009 - 24 Dec 2025
Abstract
This study focuses on the optimization of modern extraction techniques for selected by-product materials, including apple, lemon, and tangerine peels, and onion skins, using artificial neural network (ANN) models. The extraction methods included ultrasound-assisted extraction (UAE) and microwave-assisted extraction (MAE) with water as the extractant, as well as maceration (MAC) with natural deep eutectic solvents (NADES). Key parameters, such as total phenolic content (TPC), total flavonoid content (TFC), and antioxidant activities, including reducing power (EC50) and free radical scavenging capacity (IC50), were evaluated to compare the efficiency of each method. Among the techniques, UAE outperformed both MAE and MAC in extracting bioactive compounds, especially from onion skins and tangerine peels, as reflected in the highest TPC, TFC, and antioxidant activity. UAE of onion skins showed the best performance, yielding the highest TPC (5.735 ± 0.558 mg CAE/g) and TFC (1.973 ± 0.112 mg RE/g), along with the strongest antioxidant activity (EC50 = 0.549 ± 0.076 mg/mL; IC50 = 0.108 ± 0.049 mg/mL). Tangerine peel extracts obtained by UAE also exhibited high phenolic content (TPC up to 5.399 ± 0.325 mg CAE/g) and strong radical scavenging activity (IC50 0.118 ± 0.099 mg/mL). ANN models using multilayer perceptron architectures with high coefficients of determination (r2 > 0.96) were developed to predict and optimize the extraction results. Sensitivity and error analyses confirmed the robustness of the models and emphasized the influence of the extraction technique and by-product type on the antioxidant parameters. Principal component and cluster analyses showed clear grouping patterns by extraction method, with UAE and MAE showing similar performance profiles. Overall, these results underline the potential of UAE- and ANN-based modeling for the optimal utilization of agricultural by-products. Full article
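As an illustration of the ANN modeling step described above (multilayer perceptrons predicting extraction outputs from the technique and by-product type), here is a minimal, hypothetical sketch; the toy data, one-hot encoding, and layer sizes are assumptions and do not reproduce the study's models or its r2 > 0.96 fits.

```python
# Hypothetical sketch of an MLP predicting an extraction response (e.g. total
# phenolic content, TPC) from categorical process descriptors. The rows below
# are invented for illustration; the paper's models were fit to measured data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import OneHotEncoder

X_raw = [["UAE", "onion skin"], ["MAE", "onion skin"], ["MAC", "lemon peel"],
         ["UAE", "tangerine peel"], ["MAE", "apple peel"], ["MAC", "apple peel"]]
y = np.array([5.7, 4.1, 2.3, 5.4, 2.9, 1.8])            # TPC, mg CAE/g (toy values)

X = OneHotEncoder().fit_transform(X_raw).toarray()       # encode technique + material

ann = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0)
ann.fit(X, y)
print("training R^2:", ann.score(X, y))
```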

26 pages, 1143 KB  
Article
Debiasing Session-Based Recommendation for the Digital Economy: Propensity-Aware Training and Temporal Contrast on Graph Transformers
by Yongjian Wang, Junru Si, Xuhua Qiu and Kunjie Zhu
Electronics 2026, 15(1), 84; https://doi.org/10.3390/electronics15010084 - 24 Dec 2025
Abstract
Session-based recommender systems (SBRs) are critically impaired by exposure bias in observational training logs, causing models to overfit to logging policies rather than true user preferences. This bias distorts offline evaluation and harms generalization, particularly for long-tail items. To address this, we propose the Propensity- and Temporal-consistency Enhanced Graph Transformer (PTE-GT), a principled framework that enhances a recent interval-aware graph transformer backbone with two synergistic training-time modules. This Graph Neural Network-based architecture is adept at modeling the complex, graph-structured nature of session data, capturing intricate item transitions that sequential models might miss. First, we introduce a propensity-aware (PA) optimization objective based on the self-normalized inverse propensity scoring (SNIPS) estimator. This module leverages logs containing randomized exposure or logged behavior-policy propensities to learn an unbiased risk estimate, correcting for the biased data distribution. Second, we design a lightweight, view-free temporal consistency (TC) contrastive regularizer that enforces alignment between session prefixes and suffixes, improving representation robustness without computationally expensive graph augmentations, which are often a bottleneck for graph-based contrastive methods. We conduct comprehensive evaluations on three public session-based benchmarks—KuaiRand, the OTTO e-commerce challenge dataset (OTTO), and the YOOCHOOSE-1/64 split (YOOCHOOSE)—and additionally on the publicly available Open Bandit Dataset (OBD) containing logged bandit propensities. Our results demonstrate that PTE-GT significantly outperforms strong baselines. Critically, on datasets with randomized exposure or logged propensities, our unbiased evaluation protocol, using SNIPS-weighted metrics, reveals a substantial performance leap that is masked by standard, biased metrics. Our method also shows marked improvements in model calibration and long-tail item recommendation. Full article
(This article belongs to the Special Issue Advances in Deep Learning for Graph Neural Networks)
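The propensity-aware objective described above rests on the SNIPS estimator; the hedged sketch below shows the core reweighting step (inverse-propensity weights, self-normalized by their sum), with clipping added as a common variance-control choice. It illustrates the estimator, not the PTE-GT code, and the tensor names and clip value are assumptions.

```python
# Minimal sketch of a self-normalized inverse propensity scoring (SNIPS) training
# objective, assuming each logged interaction carries the propensity with which
# the logging policy exposed the clicked item. Names are illustrative.
import torch

def snips_loss(per_example_loss: torch.Tensor,
               logging_propensity: torch.Tensor,
               clip: float = 10.0) -> torch.Tensor:
    """per_example_loss: (B,) losses, e.g. cross-entropy on the next item.
    logging_propensity: (B,) probabilities under the logging policy."""
    w = (1.0 / logging_propensity.clamp_min(1e-6)).clamp_max(clip)  # clipped IPS weights
    # Self-normalization: divide by the sum of weights instead of the batch size,
    # which reduces variance relative to the plain IPS estimator.
    return (w * per_example_loss).sum() / w.sum()

# Toy usage; in training this term would be combined with the temporal-consistency regularizer.
loss_ce = torch.tensor([0.8, 1.2, 0.5])
prop = torch.tensor([0.30, 0.05, 0.60])
print(snips_loss(loss_ce, prop))
```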

17 pages, 2210 KB  
Article
The Use of a Device to Improve the Evacuation Performance of Hospitalized Non-Self-Sufficient Patients in Healthcare Facilities
by Simone Accorsi, Francesco Ottaviani, Aurora Fabiano and Dimitri Sossai
Safety 2026, 12(1), 3; https://doi.org/10.3390/safety12010003 - 24 Dec 2025
Abstract
Background: Fire emergency management in healthcare facilities represents a complex challenge, particularly in historic buildings subject to architectural preservation constraints, where progressive horizontal evacuation is objectively difficult. This study analyzes the effectiveness of an evacuation sheet employed by Hospital Policlinico San Martino to improve the speed of evacuating non-self-sufficient patients in these buildings. Methods: This study involved evacuation simulations in wards previously selected based on structural characteristics. Healthcare personnel (male and female, aged between 30 and 55 years) conducted both horizontal and vertical patient evacuation drills, comparing the performance of the S-CAPEPOD® Evacuation Sheet (Standard Model) with the conventional method (hospital bed plus rescue sheet). This study focused on the night shift to evaluate the most critical scenario in terms of human resources. Results: The use of the evacuation sheet proved more efficient than the conventional method throughout the entire evacuation route, especially during the first 15 min of the emergency (the most critical period). Indeed, with an equal number of available personnel, the evacuation sheet enabled an average improvement of 50% in the number of patients evacuated. Conclusions: The data support the effectiveness of the device, confirming the theoretical premise that the introduction of the evacuation sheet—also due to its ease of use—can be an improvement measure for the evacuation performance of non-self-sufficient patients, despite limitations related to structural variability and the simulated nature of the trials. Full article

14 pages, 399 KB  
Article
LAFS: A Fast, Differentiable Approach to Feature Selection Using Learnable Attention
by Hıncal Topçuoğlu, Atıf Evren, Elif Tuna and Erhan Ustaoğlu
Entropy 2026, 28(1), 20; https://doi.org/10.3390/e28010020 - 24 Dec 2025
Abstract
Feature selection is a critical preprocessing step for mitigating the curse of dimensionality in machine learning. Existing methods present a difficult trade-off: filter methods are fast but often suboptimal as they evaluate features in isolation, while wrapper methods are powerful but computationally prohibitive due to their iterative nature. In this paper, we propose LAFS (Learnable Attention for Feature Selection), a novel, end-to-end differentiable framework that achieves the performance of wrapper methods at the speed of simpler models. LAFS employs a neural attention mechanism to learn a context-aware importance score for all features simultaneously in a single forward pass. To encourage the selection of a sparse and non-redundant feature subset, we introduce a novel hybrid loss function that combines the standard classification objective with an information-theoretic entropic regularizer on the attention weights. We validate our approach on real-world high-dimensional benchmark datasets. Our experiments demonstrate that LAFS successfully identifies complex feature interactions and handles multicollinearity. In general comparison, LAFS achieves very close and accurate results to state-of-the-art RFE-LGBM and embedded FSA methods. Our work establishes a new point on the accuracy-efficiency frontier, demonstrating that attention-based architectures provide a compatible solution to the feature selection problem. Full article
(This article belongs to the Special Issue Information-Theoretic Methods in Data Analytics, 2nd Edition)
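To make the hybrid objective concrete (a classification loss plus an entropic regularizer on learnable attention weights over features), here is a minimal PyTorch sketch in the spirit of LAFS; the layer sizes, regularization weight, and top-k selection step are assumptions rather than the authors' exact design.

```python
# Hedged sketch of attention-based feature selection with an entropic regularizer:
# a learned attention vector scores all features in one forward pass, and a
# (negative-)entropy penalty pushes the softmax weights toward a sparse subset.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFeatureSelector(nn.Module):
    def __init__(self, n_features: int, n_classes: int, hidden: int = 64):
        super().__init__()
        self.att_logits = nn.Parameter(torch.zeros(n_features))   # learnable scores
        self.classifier = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, n_classes)
        )

    def forward(self, x):
        a = F.softmax(self.att_logits, dim=0)       # feature importance, sums to 1
        return self.classifier(x * a), a

def lafs_loss(logits, targets, attention, lam: float = 0.01):
    ce = F.cross_entropy(logits, targets)
    entropy = -(attention * (attention + 1e-12).log()).sum()
    # Penalizing entropy concentrates mass on few features (sparser selection).
    return ce + lam * entropy

# Toy usage
model = AttentionFeatureSelector(n_features=100, n_classes=3)
x, y = torch.randn(32, 100), torch.randint(0, 3, (32,))
logits, att = model(x)
loss = lafs_loss(logits, y, att)
loss.backward()
top_features = att.topk(10).indices                 # candidate selected subset
```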

46 pages, 614 KB  
Systematic Review
Intelligent Ventilation and Indoor Air Quality: State of the Art Review (2017–2025)
by Carlos Rizo-Maestre, José María Flores-Moreno, Amor Nebot Sanz and Víctor Echarri-Iribarren
Buildings 2026, 16(1), 65; https://doi.org/10.3390/buildings16010065 - 23 Dec 2025
Abstract
Intelligent ventilation is positioned as a key axis for reconciling energy efficiency and indoor air quality (IAQ) in residential and non-residential buildings. This review synthesizes 51 recent publications covering control strategies (DCV, MPC, reinforcement learning), IoT architectures and sensor validation, energy recovery (HRV/ERV, anti-frost strategies, low-loss exchangers, PCM-air), active envelope solutions (thermochromic windows) and passive solutions (EAHE), as well as evaluation methodologies (uncertainty, LCA, LCC, digital twin) and smart readiness indicator (SRI) frameworks. Evidence shows ventilation energy savings of up to 60% without degrading IAQ when control is well-designed, but also possible overconsumption when poorly parameterized or contextualized. Performance uncertainty is strongly influenced by occupant emissions and pollutant sources (bioeffluents, formaldehyde, PM2.5). The integration of predictive control, scalable IoT networks, and robust energy recovery, together with life-cycle evaluation and uncertainty analysis, enables more reliable IAQ-energy balances. Gaps are identified in VOC exposure under DCV, robustness to sensor failures, generalization of ML/RL models, and standardization of ventilation effectiveness metrics in natural/mixed modes. Full article
(This article belongs to the Special Issue Indoor Air Quality and Ventilation in the Era of Smart Buildings)

45 pages, 19583 KB  
Article
A Climate-Informed Scenario Generation Method for Stochastic Planning of Hybrid Hydro–Wind–Solar Power Systems in Data-Scarce Regions
by Pu Guo, Xiong Cheng, Wei Min, Xiaotao Zeng and Jingwen Sun
Energies 2026, 19(1), 74; https://doi.org/10.3390/en19010074 - 23 Dec 2025
Abstract
The high penetration rate of renewable energy poses significant challenges to the planning and operation of power systems in regions with scarce data. In these regions, it is impossible to accurately simulate the complex nonlinear dependencies among hydro–wind–solar energy resources, which leads to huge operational risks and investment uncertainties. To bridge this gap, this study proposes a new data-driven framework that embeds the natural climate cycle (24 solar terms) into a physically consistent scenario generation process, surpassing the traditional linear approach. This framework introduces the Comprehensive Similarity Distance (CSD) indicator to quantify the curve similarity of power amplitude, pattern trend, and fluctuation position, thereby improving the K-means clustering. Compared with the K-means algorithm based on the standard Euclidean distance, the accuracy of the improved clustering pattern extraction is increased by 3.8%. By embedding the natural climate cycle and employing a two-stage dimensionality reduction architecture: time compression via improved clustering and feature fusion via Kernel PCA, the framework effectively captures cross-source dependencies and preserves climatic periodicity. Finally, combined with the simplified Vine Copula model, high-fidelity joint scenarios with a normalized root mean square error (NRMSE) of less than 3% can be generated. This study provides a reliable and computationally feasible tool for stochastic optimization and reliability analysis in the planning and operation of future power systems with high renewable energy grid integration. Full article
(This article belongs to the Section A: Sustainable Energy)
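For reference, the fidelity figure quoted above (NRMSE below 3%) can be computed as in the short sketch below; normalizing the RMSE by the observed range is one common convention and is an assumption here, as are the toy numbers.

```python
# Minimal sketch of the normalized root mean square error (NRMSE) used to judge
# generated scenarios against observed series. Range normalization is assumed.
import numpy as np

def nrmse(observed: np.ndarray, generated: np.ndarray) -> float:
    rmse = np.sqrt(np.mean((observed - generated) ** 2))
    return rmse / (observed.max() - observed.min())

obs = np.array([0.2, 0.5, 0.9, 0.4])
gen = np.array([0.21, 0.48, 0.88, 0.42])
print(f"NRMSE = {nrmse(obs, gen):.2%}")   # values below 3% indicate high fidelity
```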

30 pages, 3535 KB  
Article
PRA-Unet: Parallel Residual Attention U-Net for Real-Time Segmentation of Brain Tumors
by Ali Zakaria Lebani, Medjeded Merati and Saïd Mahmoudi
Information 2026, 17(1), 14; https://doi.org/10.3390/info17010014 - 23 Dec 2025
Abstract
With the increasing prevalence of brain tumors, it becomes crucial to ensure fast and reliable segmentation in MRI scans. Medical professionals struggle with manual tumor segmentation due to its exhausting and time-consuming nature. Automated segmentation speeds up decision-making and diagnosis; however, achieving an optimal balance between accuracy and computational cost remains a significant challenge. In many cases, current methods trade speed for accuracy, or vice versa, consuming substantial computing power and making them difficult to use on devices with limited resources. To address this issue, we present PRA-UNet, a lightweight deep learning model optimized for fast and accurate 2D brain tumor segmentation. Using a single 2D input, the architecture processes four types of MRI scans (FLAIR, T1, T1c, and T2). The encoder uses inverted residual blocks and bottleneck residual blocks to capture features at different scales effectively. The Convolutional Block Attention Module (CBAM) and the Spatial Attention Module (SAM) improve the bridge and skip connections by refining feature maps and making it easier to detect and localize brain tumors. The decoder uses depthwise separable convolutions, which significantly reduce computational costs without degrading accuracy. The BraTS2020 dataset shows that PRA-UNet achieves a Dice score of 95.71%, an accuracy of 99.61%, and a processing speed of 60 ms per image, enabling real-time analysis. PRA-UNet outperforms other models in segmentation while requiring less computing power, suggesting it could be suitable for deployment on lightweight edge devices in clinical settings. Its speed and reliability enable radiologists to diagnose tumors quickly and accurately, enhancing practical medical applications. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
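The cost saving the abstract attributes to depthwise separable convolutions comes from splitting a standard convolution into a per-channel spatial filter followed by a 1x1 pointwise mix. A minimal PyTorch sketch of such a block is below, with placeholder channel counts rather than PRA-UNet's actual configuration.

```python
# Hedged sketch of a depthwise separable convolution block: a per-channel 3x3
# convolution followed by a 1x1 pointwise convolution. Channel counts are toy values.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))

def n_params(m):
    return sum(p.numel() for p in m.parameters())

standard = nn.Conv2d(128, 128, kernel_size=3, padding=1)
separable = DepthwiseSeparableConv(128, 128)
print(n_params(standard), "vs", n_params(separable))   # ~147k vs ~18k parameters
```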

22 pages, 4777 KB  
Article
Research on Automatic Recognition and Dimensional Quantification of Surface Cracks in Tunnels Based on Deep Learning
by Zhidan Liu, Xuqing Luo, Jiaqiang Yang, Zhenhua Zhang, Fan Yang and Pengyong Miao
Modelling 2026, 7(1), 4; https://doi.org/10.3390/modelling7010004 - 23 Dec 2025
Abstract
Cracks serve as a critical indicator of tunnel structural degradation. Manual inspections struggle to meet engineering requirements due to their time-consuming and labor-intensive nature, high subjectivity, and significant error rates, while traditional image processing methods exhibit poor performance under complex backgrounds and irregular crack morphologies. To address these limitations, this study developed a high-quality dataset of tunnel crack images and proposed an improved lightweight semantic segmentation network, LiteSqueezeSeg, to enable precise crack identification and quantification. The model was systematically trained and optimized using a dataset comprising 10,000 high-resolution images. Experimental results demonstrate that the proposed model achieves an overall accuracy of 95.15% in crack detection. Validation on real-world tunnel surface images indicates that the method effectively suppresses background noise interference and enables high-precision quantification of crack length, average width, and maximum width, with all relative errors maintained within 5%. Furthermore, an integrated intelligent detection system was developed based on the MATLAB (R2023b) platform, facilitating automated crack feature extraction and standardized defect grading. This system supports routine tunnel maintenance and safety assessment, substantially enhancing both inspection efficiency and evaluation accuracy. Through synergistic innovations in lightweight network architecture, accurate quantitative analysis, and standardized assessment protocols, this research establishes a comprehensive technical framework for tunnel crack detection and structural health evaluation, offering an efficient and reliable intelligent solution for tunnel condition monitoring. Full article
(This article belongs to the Special Issue Machine Learning and Artificial Intelligence in Modelling)
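The abstract reports quantification of crack length, average width, and maximum width but does not spell out the geometry; the sketch below uses one common convention (skeleton pixel count for length, area/length for mean width, distance transform for maximum width) as a hypothetical illustration, with an assumed pixel-to-millimetre scale. It is not the authors' MATLAB pipeline.

```python
# Hedged sketch of crack dimensional quantification from a binary segmentation mask.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def quantify_crack(mask: np.ndarray, mm_per_pixel: float = 0.5):
    skeleton = skeletonize(mask.astype(bool))
    length_px = skeleton.sum()                       # centerline length in pixels
    mean_width_px = mask.sum() / max(length_px, 1)   # area / length
    max_width_px = 2 * distance_transform_edt(mask).max()   # widest inscribed span
    return (length_px * mm_per_pixel,
            mean_width_px * mm_per_pixel,
            max_width_px * mm_per_pixel)

mask = np.zeros((64, 64), dtype=np.uint8)
mask[30:33, 5:60] = 1                                # synthetic 3-px-wide crack
print(quantify_crack(mask))
```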

16 pages, 341 KB  
Article
xScore: A Simple Metric for Cross-Domain Robustness in Lightweight Vision Models
by Weidong Zhang, Pak Lun Kevin Ding, Baoxin Li and Huan Liu
Algorithms 2026, 19(1), 14; https://doi.org/10.3390/a19010014 - 23 Dec 2025
Abstract
Lightweight vision models are widely deployed in mobile and embedded systems, where strict computational and memory budgets demand compact architectures. However, their evaluation remains dominated by ImageNet—a single, large natural-image dataset that requires substantial training resources. This creates a dilemma: lightweight models trained on ImageNet often reach capacity limits due to their constrained size, while scaling them to billions of parameters with specialized training tricks to achieve top-tier ImageNet accuracy does not guarantee proportional performance once the architectures are scaled back down to meet mobile constraints, particularly when re-evaluated on diverse data domains. These challenges raise two key questions: How should cross-dataset robustness be quantified in a simple and lightweight way, and which architectural elements consistently support generalization under tight resource constraints? To answer them, we introduce the Cross-Dataset Score (xScore), a simple metric that captures both average accuracy across domains and the stability of model rankings. Evaluating 11 representative lightweight models (2.5 M parameters) across seven datasets, we find that (1) ImageNet accuracy is a weak proxy for cross-domain performance, (2) xScore provides a simple and interpretable robustness metric, and (3) high-xScore models reveal architectural patterns linked to stronger generalization. Finally, the architectural insights and evaluation framework presented here provide practical guidance for measuring the xScore of future lightweight models. Full article
(This article belongs to the Special Issue Advances in Deep Learning-Based Data Analysis)
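The abstract defines xScore only qualitatively (average cross-domain accuracy plus ranking stability), so the formula below is a purely hypothetical stand-in rather than the paper's metric: mean accuracy across datasets discounted by the variability of the model's rank across those datasets. The weighting alpha and the rank normalization are assumptions.

```python
# Purely hypothetical sketch of a cross-dataset score combining average accuracy
# with ranking stability; illustrates the idea, not the published xScore formula.
import numpy as np

def xscore(acc_by_dataset: np.ndarray, rank_by_dataset: np.ndarray,
           n_models: int, alpha: float = 0.5) -> float:
    """acc_by_dataset: this model's accuracy on each dataset (shape D,).
    rank_by_dataset: this model's rank among n_models on each dataset (1 = best)."""
    mean_acc = acc_by_dataset.mean()
    rank_instability = rank_by_dataset.std() / n_models   # 0 = perfectly stable ranking
    return float(mean_acc - alpha * rank_instability)

acc = np.array([0.71, 0.68, 0.74, 0.66])     # accuracies on four domains (toy values)
ranks = np.array([2, 3, 2, 2])               # ranks among 11 models (toy values)
print(xscore(acc, ranks, n_models=11))
```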

21 pages, 1574 KB  
Article
Turkish Telephone Conversations in Credit Risk Management: Natural Language Processing and LSTM Approach
by Emre Ridvan Muratlar, Dogan Yildiz and Erhan Ustaoglu
Appl. Sci. 2026, 16(1), 108; https://doi.org/10.3390/app16010108 - 22 Dec 2025
Abstract
This study aims to analyze text data obtained from Turkish phone calls to manage credit risk in the banking sector and predict whether customers will fulfill their payment promises. Data cleaning was identified as a critical step to improve the quality of the texts, and various natural language processing (NLP) techniques were used. The model was built using a two-layer LSTM architecture, starting with a Self-Embedding layer, and achieved approximately 80% accuracy on the test data. The findings indicate that customers who break their payment promises often cite personal life issues such as health problems, family issues, financial difficulties, and religious beliefs to ensure reliability. These results demonstrate the importance of text data in the banking sector, the applicability of different embedding methods to Turkish datasets, and their advantages and disadvantages. Furthermore, the model built using data obtained from customer conversations can help predict credit risk more accurately and contribute to improving call center processes. Automating data cleaning processes and developing speech-to-text translation tools are recommended for future studies. Full article
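The classifier described above (a trainable embedding feeding two stacked LSTM layers with a binary output for kept vs. broken payment promises) can be sketched as below in Keras; the vocabulary size, sequence length, unit counts, and dropout are placeholder assumptions, not the study's settings.

```python
# Hedged sketch of a two-layer LSTM text classifier with a learned ("self") embedding.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE, MAX_LEN = 20_000, 200            # assumed tokenizer settings

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),       # embedding learned from scratch
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(32),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),   # promise kept vs. broken
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```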

32 pages, 4104 KB  
Review
Toward Active Distributed Fiber-Optic Sensing: A Review of Distributed Fiber-Optic Photoacoustic Non-Destructive Testing Technology
by Yuliang Wu, Xuelei Fu, Jiapu Li, Xin Gui, Jinxing Qiu and Zhengying Li
Sensors 2026, 26(1), 59; https://doi.org/10.3390/s26010059 - 21 Dec 2025
Abstract
Distributed fiber-optic photoacoustic non-destructive testing (DFP-NDT) represents a paradigm shift from passive sensing to active probing, fundamentally transforming structural health monitoring through integrated fiber-based ultrasonic generation and detection capabilities. This review systematically examines DFP-NDT’s evolution by following the technology’s natural progression from fundamental principles to practical implementations. Unlike conventional approaches that require external excitation mechanisms, DFP-NDT leverages photoacoustic transducers as integrated active components where fiber-optical devices themselves generate and detect ultrasonic waves. Central to this technology are photoacoustic materials engineered to maximize conversion efficiency—from carbon nanotube-polymer composites achieving 2.74 × 10−2 conversion efficiency to innovative MXene-based systems that combine high photothermal conversion with structural protection functionality. These materials operate within sophisticated microstructural frameworks—including tilted fiber Bragg gratings, collapsed photonic crystal fibers, and functionalized polymer coatings—that enable precise control over optical-to-thermal-to-acoustic energy conversion. Six primary distributed fiber-optic photoacoustic transducer array (DFOPTA) methodologies have been developed to transform single-point transducers into multiplexed systems, with low-frequency variants significantly extending penetration capability while maintaining high spatial resolution. Recent advances in imaging algorithms have particular emphasis on techniques specifically adapted for distributed photoacoustic data, including innovative computational frameworks that overcome traditional algorithmic limitations through sophisticated statistical modeling. Documented applications demonstrate DFP-NDT’s exceptional versatility across structural monitoring scenarios, achieving impressive performance metrics including 90 × 54 cm2 coverage areas, sub-millimeter resolution, and robust operation under complex multimodal interference conditions. Despite these advances, key challenges remain in scaling multiplexing density, expanding operational robustness for extreme environments, and developing algorithms specifically optimized for simultaneous multi-source excitation. This review establishes a clear roadmap for future development where enhanced multiplexed architectures, domain-specific material innovations, and purpose-built computational frameworks will transition DFP-NDT from promising laboratory demonstrations to deployable industrial solutions for comprehensive structural integrity assessment. Full article
(This article belongs to the Special Issue FBG and UWFBG Sensing Technology)
