Search Results (4,394)

Search Parameters:
Keywords = building size

20 pages, 539 KB  
Article
Hybrid Blended WiFi Fingerprint Indoor Localization Using Multi-Task Learning and Feature-Space WKNN
by Yujie Li and Sang-Chul Kim
Appl. Sci. 2026, 16(9), 4184; https://doi.org/10.3390/app16094184 - 24 Apr 2026
Abstract
WiFi fingerprinting remains attractive for indoor localization because it reuses existing wireless infrastructure, yet RSSI fingerprints are high-dimensional, sparse, and often ambiguous across adjacent floors and building regions. This study develops a hybrid blended localization framework that combines multi-task learning with feature-space weighted k-nearest-neighbor refinement. A shared neural encoder predicts building labels, floor labels, and normalized coordinates from 520-dimensional WiFi fingerprints, and the learned embedding space is then used for semantically constrained WKNN correction. The final model is trained with AdamW, a learning rate of 8×10⁻⁴, batch size 512, and a joint loss over building classification, floor classification, and coordinate regression, without a learning-rate scheduler. Experiments on a public WiFi fingerprint dataset show that the hybrid model achieves the strongest overall localization robustness among the evaluated non-ensemble methods. On the official validation split, it obtains a mean localization error of 9.01, a median error of 6.25, and an RMSE of 12.95 in the dataset coordinate units. On the internal semantic validation split, it reaches 94.81% floor classification accuracy and 97.62% building classification accuracy. Floor-wise and building–floor analyses further show that the largest errors are concentrated in a small number of difficult semantic regions, especially the highest floor and sparsely constrained partitions. Full article
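The feature-space WKNN refinement described above can be sketched in a few lines: neighbors are found in the learned embedding space and their known reference coordinates are averaged with inverse-distance weights. This is a minimal illustration under assumed inputs, not the authors' implementation; the function name, default `k`, and the weighting scheme are placeholders.

```python
import numpy as np

def wknn_refine(query_emb, ref_embs, ref_coords, k=3, eps=1e-8):
    """Weighted k-NN coordinate refinement in a learned embedding space.

    Hypothetical sketch: the k reference fingerprints closest to the
    query embedding vote on the coordinate estimate, weighted by
    inverse embedding distance.
    """
    d = np.linalg.norm(ref_embs - query_emb, axis=1)  # distances to references
    idx = np.argsort(d)[:k]                           # k nearest neighbors
    w = 1.0 / (d[idx] + eps)                          # inverse-distance weights
    w /= w.sum()
    return w @ ref_coords[idx]                        # weighted coordinate mean
```

In the paper this step is additionally constrained by the predicted building and floor labels; filtering `ref_embs` to the predicted semantic region before calling the function would reproduce that constraint.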
17 pages, 7069 KB  
Article
Optical and Thermal Control of Pore Architecture in Collagen Hydrogels for Vascular-like Tissue Engineering Scaffolds
by Mareni Arishima, Shigehisa Aoki, Sayaka Masaike and Takayuki Narita
Micro 2026, 6(2), 28; https://doi.org/10.3390/micro6020028 - 22 Apr 2026
Abstract
Vascularization remains a central challenge in thick tissue engineering. Building on our prior demonstration that carbonate buffer concentration governs multi-channel collagen gel (MCCG) architecture and perfusion culture performance, this study aimed to establish non-contact, orthogonal control of pore size and density in riboflavin-sensitized Type I collagen hydrogels via UV irradiation intensity and preparation temperature. UV intensity was modulated by varying the source-to-sample distance (25–52 mm); preparation temperature was set at 5, 25, or 40 °C; gelation kinetics were quantified using a vial-tilt assay. Pore area fraction ranged from 0.9% to 8.6% and Young’s modulus from 16 to 49 kPa depending on UV dose. Higher preparation temperatures accelerated gelation and produced smaller, more densely distributed pores, consistent with kinetically arrested phase separation. NIH/3T3 fibroblasts cultured on intermediate- and low-intensity UV scaffolds achieved >80% confluency by Day 7, with three-dimensional tissue-like organization and directionally aligned cellular bundles within large pores; cell metabolic activity, assessed by CCK-8 assay, remained consistently high throughout the culture period. These results demonstrate that UV irradiation intensity and preparation temperature are independently tunable, non-contact parameters for reproducible fabrication of collagen scaffolds with tunable vascular-like pore networks, complementing and extending the chemical (buffer concentration) design space of MCCG-based perfusion culture systems. Full article
47 pages, 7599 KB  
Systematic Review
Educational Measurement with Emerging Technologies: A Systematic Review Through Evidentiary Lens on Granularity and Constructing Measures Theory
by Linwei Yu, Gary K. W. Wong, Bingjie Zhang and Feifei Wang
Educ. Sci. 2026, 16(4), 661; https://doi.org/10.3390/educsci16040661 - 21 Apr 2026
Abstract
Emerging technologies (ETs), such as AI and reality techniques, are reshaping educational measurement. However, existing studies remain dispersed and are rarely synthesized in ways that clarify how ETs participate in the evidentiary work of educational measurement. Guided by PRISMA 2020, we systematically reviewed 933 empirical studies published between 2016 and 2025 in formal educational settings. We coded studies by (a) grain size (micro, meso, macro), (b) Constructing Measures Theory building blocks (construct map, item design, outcome space, measurement model), and (c) ET category. Results showed a strong concentration at the micro level (88.88%) and in outcome space and measurement model work (86.80% combined), indicating that ET-enabled innovation has focused primarily on transforming performances into indicators and modeling those indicators for interpretation and decision-making. Learning analytics and educational data mining, machine learning and deep learning, and automated scoring and feedback systems were the dominant ET clusters. These findings point to an uneven development of ET-enabled educational measurement. Included studies also indicate that recurring concerns about transparency, fairness, and governance are linked to the field’s main areas of ET-enabled concentration. We therefore argue for closer alignment among construct claims, evidence, modeling, and intended use, and offer implications for developers, researchers, and education practitioners. Full article
(This article belongs to the Special Issue The State of the Art and the Future of Education)
17 pages, 2015 KB  
Article
Efficient Battery State of Health Estimation Using Lightweight ML Models Based on Limited Voltage Measurements
by Mohammad Okour, Mohannad Alkhalil, Mutaz Al Fayad, Juhyun Bak, Kevin R. James, Sulaiman Mohaidat, Xiaoqi Liu, Fadi Alsaleem, Michael Hempel, Hamid Sharif-Kashani and Mahmoud Alahmad
J. Low Power Electron. Appl. 2026, 16(2), 16; https://doi.org/10.3390/jlpea16020016 - 21 Apr 2026
Abstract
Accurate estimation of lithium-ion battery State of Health (SoH) is critical for emerging applications such as reconfigurable battery systems. Although data-driven machine learning methods are promising, they often rely on costly, time-intensive aging experiments and extensive feature engineering. This work proposes a lightweight SoH-prediction framework validated on both physics-informed synthetic aging data and the NASA battery aging dataset. We evaluated Random Forest (RF) and Feedforward Neural Network (FNN) models that use only a limited number of samples from an early segment of the raw discharge voltage curve as input. Results show that RF consistently outperforms FNN across input sizes in deterministic or noise-free environments, achieving an RMSE of 0.07% SoH using just 5 voltage samples. In inherently stochastic experimental data, however, FNN can achieve an RMSE 50% lower than RF (1.28 vs. 2.87), but requires 37× more mathematical operations per inference. These findings emphasize the predictive value of the early-discharge-voltage region and demonstrate that compact, low-feature-complexity models can deliver accurate SoH estimates. Overall, the approach supports a goal of combining informed synthetic data with limited real measurements to build robust, scalable SoH predictors, reducing dependence on labor-intensive degradation testing and feature-heavy pipelines. Full article
(This article belongs to the Special Issue 15th Anniversary of Journal of Low Power Electronics and Applications)
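The abstract's core idea — predicting SoH from a handful of early-discharge voltage samples with a Random Forest — can be sketched as follows. The synthetic "sag" model below is an entirely hypothetical stand-in for training data; it is neither the paper's physics-informed generator nor the NASA dataset, and all names and constants are ours.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical stand-in data: each cycle's early discharge segment is 5
# voltage samples whose sag grows as State of Health (SoH, %) fades.
n_cycles = 400
soh = rng.uniform(70.0, 100.0, n_cycles)
t = np.linspace(0.0, 1.0, 5)                              # sample positions
volts = 4.2 - 0.5 * t[None, :] * (100.0 / soh[:, None])   # crude sag model

# Lightweight model mapping 5 raw voltage samples directly to SoH,
# with no hand-crafted feature engineering.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(volts, soh)
pred = model.predict(volts)
```

On deterministic data like this, the tree ensemble fits the monotone voltage-to-SoH mapping closely, which mirrors the paper's observation that RF excels in noise-free settings while neural models can cope better with experimental noise.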
19 pages, 10325 KB  
Article
Study of PEG/Biochar Cementitious Cold-Bonded Aggregate for Thermal Energy Storage
by Rongji Li, Chong Zhang, Yuechao Zhao, Changliang Wu, Guangbin Duan and Xiuzhi Zhang
Nanomaterials 2026, 16(8), 492; https://doi.org/10.3390/nano16080492 - 21 Apr 2026
Abstract
The incorporation of phase change materials in concrete is a practical strategy that holds great promise for enhancing the energy efficiency of buildings and reducing CO2 emissions. However, the direct contact between phase change materials and cement interferes with the cement hydration reaction, leading to a significant reduction in the mechanical strength of cementitious composites. To encapsulate polyethylene glycol and prevent leakage, this study developed a shape-stabilized phase change aggregate via the cold-bonding method and the vacuum impregnation method. The nanoscale pore structure of the aggregate was regulated by adjusting the biochar content to enhance the phase-change material loading capacity. The phase change aggregate was characterized by indicators including crushing strength and water absorption. Meanwhile, its microstructure, the correlations between nano-sized hydration products, chemical compatibility, and phase change properties were analyzed. The fabricated phase change aggregate has a crushing strength of over 5 MPa, latent heat of 42.84 J/g, and phase change temperature of 29.17 °C, while also exhibiting good mechanical properties and thermal energy storage performance. The compressive strength of the phase change concrete meets the strength requirements for structural building materials. Moreover, the phase change aggregate reduced CO2 emissions during service and offers favorable economic and low-carbon benefits over its service life. Full article
(This article belongs to the Special Issue Nanocomposite Modified Cement and Concrete)
19 pages, 5890 KB  
Article
Roadside Traffic Facility Facade General Obstacle Segmentation Based on Vision Language Model and Similarity Loss Function for Automatic Cleaning Vehicle
by Yanrui Guo, Degang Xu and Jiacai Liao
Appl. Sci. 2026, 16(8), 3984; https://doi.org/10.3390/app16083984 - 20 Apr 2026
Abstract
Tunnels, soundproof screens and other vertical roadside traffic facilities play an important role in isolating the driving environment, maintaining driving safety, and reducing driving noise. As usage time increases, these facade structures become polluted and cause traffic safety problems. Obstacles on three-dimensional walls of different shapes, colors, and sizes are the most challenging problem in intelligent cleaning environment perception. This paper proposes an obstacle segmentation method based on a vision language model to overcome these problems. Firstly, in the constructed experimental environment, a visual–language obstacle dataset is collected, named the Road-side General Obstacles Dataset (RGOD), and the collected dataset is labeled with both a segmentation mask and a language description. These preprocessing results are used as the training input of the perception model to obtain the foreground and background separation results. Secondly, a VLM-GOS model is proposed to segment special-shaped obstacles, which emphasizes the distinction between background and foreground targets. Finally, the general obstacle is segmented by a vision–language model with a similarity loss function, and evaluated with different metrics. Experimental results show that compared with models such as MaskFormer, SegFormer, and ASD-Net, this method improves the model’s perceptual ability and increases accuracy by 3%. More importantly, the model is more interpretable. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
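As one hypothetical reading of the similarity loss mentioned above, a cosine-similarity penalty between the mean foreground and mean background embeddings would push the two classes apart during training. The function below is a toy sketch under that assumption; the paper's actual loss definition may differ.

```python
import numpy as np

def similarity_loss(fg_embs, bg_embs, eps=1e-8):
    """Toy similarity loss: penalize cosine similarity between the mean
    foreground and mean background embeddings so training separates them.
    A hypothetical reading of the abstract's similarity loss, not the
    paper's actual definition."""
    f = fg_embs.mean(axis=0)
    b = bg_embs.mean(axis=0)
    cos = f @ b / (np.linalg.norm(f) * np.linalg.norm(b) + eps)
    return max(0.0, float(cos))  # zero once embeddings are orthogonal or opposed
```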
41 pages, 2004 KB  
Article
Dielectric and Magnetic Spherical Hollow Shells Subjected to a dc or Low-Frequency ac Field of Any Spatial Form: Complete Theoretical Survey of All Scalar and Vector Physical Entities, Including the Depolarization Effect
by Petros Moraitis, Kosmas Tsakmakidis, Norbert M. Nemes and Dimosthenis Stamopoulos
Materials 2026, 19(8), 1638; https://doi.org/10.3390/ma19081638 - 19 Apr 2026
Abstract
Dielectric and magnetic spherical hollow shells are employed in many applications as standard building units. These structures are commonly subjected to size reduction to obtain a high surface area/volume ratio, a property that is in favor of specific applications. However, the size reduction enhances the importance of physical mechanisms that originate from surfaces, such as the depolarization effect. Here we tackle the problem of dielectric and magnetic spherical hollow shells, consisting of a linear, homogeneous and isotropic parent material, subjected to an external potential, U_ext(r), of any spatial form (either dc (static) or ac of low frequency (quasistatic limit)). By applying the method-of-linear-recursive-solution (MLRS) to the Laplace equation, we calculate analytically the internal, U_int(r), and total, U_tot(r), potentials with respect to the external one, U_ext(r). From U_int(r) and U_tot(r) we calculate all relevant scalar and vector physical entities of interest. The MLRS unveils straightforwardly the existence of two distinct depolarization factors, N_l = l/(2l+1) and N_{l+1} = (l+1)/(2l+1), both depending on the degree, l, but not on the order, m, of the mode of the external potential, U_ext^(l,m)(r). These depolarization factors, N_l and N_{l+1}, originate from the outer, r = b, and inner, r = a, surfaces and are accompanied by two extrinsic susceptibilities, χ_{e,l}^ext = χ_e/(1 + N_l χ_e) and χ_{e,l+1}^ext = χ_e/(1 + N_{l+1} χ_e), respectively. Importantly, N_l + N_{l+1} = 1, irrespective of the degree, l, as it should be. The properties of spherical hollow shells are investigated through analytical modeling and detailed simulations, with emphasis on application-relevant scenarios including resonance phenomena in scattering, quantitative materials characterization, and shielding/distortion. The generic MLRS strategy provides a flexible and reliable route for analyzing depolarization processes in other dielectric and magnetic building-unit geometries encountered in practice. Full article
(This article belongs to the Section Materials Physics)
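The two depolarization factors quoted in the abstract are simple rational functions of the degree l, so their complementarity N_l + N_{l+1} = 1 is easy to verify numerically. The sketch below (function names are ours) checks that identity exactly with rational arithmetic and evaluates the extrinsic-susceptibility formula.

```python
from fractions import Fraction

def depol_factors(l):
    """Depolarization factors for mode degree l (independent of order m):
    N_l = l/(2l+1) from the outer surface, N_{l+1} = (l+1)/(2l+1) from
    the inner surface."""
    return Fraction(l, 2 * l + 1), Fraction(l + 1, 2 * l + 1)

def chi_ext(chi_e, N):
    """Extrinsic susceptibility chi_ext = chi_e / (1 + N * chi_e)."""
    return chi_e / (1 + N * chi_e)
```

For example, the dipolar mode l = 1 gives N_1 = 1/3 and N_2 = 2/3, recovering the familiar 1/3 depolarization factor of a solid sphere from the outer surface.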
30 pages, 2635 KB  
Article
A Study of Circular Economy Practices in KSA’s Small and Medium Industries: Benefits, Challenges, and Future Potential
by Houcine Benlaria, Naeimah Fahad S. Almawishir, Hisham Mohamed Misbah, Tarig Osman Abdallah Helal, Taha khairy taha Ibrahim, Ahmed Benlaria, Mohamed Djafar Henni and Rania Alaa Eldin Ahmed Khedr
Sustainability 2026, 18(8), 4059; https://doi.org/10.3390/su18084059 - 19 Apr 2026
Abstract
The circular economy (CE) can help businesses use resources more efficiently, but empirical evidence on CE adoption among non-European SMEs remains limited. This study examines CE practices, benefits, challenges, and future intentions in 220 Saudi Arabian SMIs. A structured survey collected data on four CE practice domains (resource efficiency, waste management, eco-design, and reverse logistics), four benefit dimensions (economic, environmental, operational, and reputational), four challenge dimensions (financial, organizational, technical, and regulatory), and six future intention items. CE adoption was moderate (M = 3.29 on a five-point scale) and balanced across all four practice domains, with resource efficiency scoring highest (M = 3.32). Benefit scores averaged 3.46, far outpacing challenges (M = 2.78). This benefit surplus of 0.68 points (on a five-point scale) indicates that Saudi SMIs perceive CE as worthwhile and view its barriers as manageable rather than prohibitive. Together, perceived benefits and perceived challenges explained 54.3% of the variance in CE adoption (R2 = 0.543) in multiple regression analysis. Reducing perceived challenges may be a more effective lever for promoting CE adoption than amplifying perceived benefits, as challenges exerted a larger absolute standardised effect (β = −0.50) than perceived benefits (β = 0.39). Once perceptions were controlled, perceived benefits and challenges significantly predicted future CE intentions, but current CE practices did not. According to the Theory of Planned Behavior’s attitudinal pathway, firms without CE experience can develop strong forward-looking intentions if the business case is convincing and barriers are perceived as manageable. In contrast to European findings, technical and organizational barriers outweighed financial ones, indicating the need for capacity-building interventions rather than supplementary financing. About 79% of respondents were neutral or positive about government-supported CE expansion. CE adoption did not differ significantly by firm size, geographic location, or ownership structure, suggesting that Vision 2030’s sustainability messaging has established a broad baseline of CE awareness across Saudi SMIs. Full article
(This article belongs to the Special Issue Circular Economy Solutions for a Sustainable Future)
29 pages, 2318 KB  
Article
From Cell-Specific Heuristics to Transferable Structural Search for Ramsey Graph Construction
by Sorin Liviu Jurj
Mathematics 2026, 14(8), 1367; https://doi.org/10.3390/math14081367 - 19 Apr 2026
Abstract
Recent automated search methods have improved lower bounds for several Ramsey numbers, but the strongest gains often depend on structured seeding and cell-specific heuristic discovery. This leaves open a more fundamental question: Can a useful search structure be transferred across related Ramsey cells rather than rediscovered independently for each target instance? This work proposes a teacher–student framework for transferable structural search in Ramsey graph construction, inspired by the structure-distillation logic of Physics Structure-Informed Neural Networks (Ψ-NNs). The framework builds compressed structural representations from teacher witnesses and search traces, extracts reusable motifs and relations, and reconstructs transfer candidates. These are refined by balanced search and, for weak R(3, s) cells, by exact small-cell supervision. The framework is evaluated as a proof of concept across five Ramsey cells under transfer, matched-compute, search, ablation, and interpretability settings, including a proportional shift-scaling baseline and a greedy triangle-closing baseline that probe the structure-validity frontier from complementary directions. Supplementary experiments cover seed robustness, budget sensitivity, transfer-neighborhood variation, structural-resolution changes, stronger exact supervision, cross-r teacher pooling, single-teacher configurations, and scaling behavior across graph sizes. The results show that the portfolio version of the framework is the strongest balanced transfer method in the current study, while a structure-dominant oracle achieves stronger witness-shape agreement but worse Ramsey-valid construction. These findings reveal a clear structure-validity frontier and suggest that transferable Ramsey search should be evaluated by how well structural priors survive the validity constraints of new cells. Full article
(This article belongs to the Special Issue Advances in Graph Labelings and Ramsey Theory in Discrete Structures)
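The "Ramsey-valid construction" constraint the abstract refers to — a red/blue 2-coloring witness containing no red K_s and no blue K_t — can be checked by brute force for small graphs. A minimal verifier sketch (not the authors' code; name and interface are ours):

```python
from itertools import combinations

def is_valid_witness(n, red_edges, s, t):
    """Check a red/blue edge 2-coloring of K_n as an R(s, t) witness:
    valid iff it contains no red K_s and no blue K_t. Brute force,
    practical only for small n."""
    red = set(frozenset(e) for e in red_edges)

    def has_mono_clique(k, want_red):
        for clique in combinations(range(n), k):
            if all((frozenset((u, v)) in red) == want_red
                   for u, v in combinations(clique, 2)):
                return True
        return False

    return not has_mono_clique(s, True) and not has_mono_clique(t, False)
```

The classical example: coloring the edges of a 5-cycle red on K_5 leaves a complementary blue 5-cycle, so neither color contains a triangle, certifying R(3, 3) > 5.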
22 pages, 1000 KB  
Article
MP-RAS: A Multi-Parallel Restricted Additive Schwarz Preconditioner for Cardiac Electrophysiology Simulation
by Hao Wu, Zhiyang Li, Jiayu Shi, Feng Yu and Bo Zhang
Appl. Sci. 2026, 16(8), 3956; https://doi.org/10.3390/app16083956 - 19 Apr 2026
Abstract
In this paper, we present a multi-parallel restricted additive Schwarz (MP-RAS) preconditioner construction method for cardiac electrophysiology simulation. This method is designed to address the need for solving large-scale linear systems in realistic cardiac electrophysiology simulations and can provide a more efficient computational tool for patient-specific electrical propagation modeling, arrhythmia studies, and the evaluation of ablation strategies. The proposed preconditioner is suitable for the finite element simulation of the anisotropic cardiac monodomain model. In particular, we construct the subdomains based on Morton code sorting, build submatrices by indices, and decompose the formula for parallel computing. Given that the computation of each subdomain is relatively independent, the iteration can be extended to N-parallel execution. Numerical experiments indicate that for matrices of the same size and under the same number of partitions, Morton code sorting is at least 10⁵ times faster than METIS, while memory usage is reduced by 12–32%. The iteration count is reduced by approximately a factor of two compared with the Jacobi and block Jacobi preconditioned conjugate gradient (PCG) methods. Comparative experiments with other solvers further demonstrate that the MP-RAS solver is highly efficient for solving this parabolic partial differential equation and has strong parallel scalability. Full article
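Morton (Z-order) sorting, the subdomain-construction ingredient named above, interleaves coordinate bits so that points adjacent in the sorted order tend to be spatially close, which is why it can replace a graph partitioner for cheap subdomain grouping. A minimal 2-D sketch (the paper's implementation, dimensionality, and bit width may differ):

```python
def morton2d(x, y, bits=16):
    """Interleave the bits of non-negative ints (x, y) into a Morton
    (Z-order) code; sorting by this code keeps nearby points nearby."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)        # x bits -> even positions
        code |= ((y >> i) & 1) << (2 * i + 1)    # y bits -> odd positions
    return code

def morton_sort(points, bits=16):
    """Order 2-D integer points along the Z-curve."""
    return sorted(points, key=lambda p: morton2d(p[0], p[1], bits))
```

Because the code is computed independently per point, this ordering costs a single sort, with no graph construction — consistent with the large speedup over METIS reported in the abstract.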
22 pages, 957 KB  
Article
Strategic Capacity Planning Algorithm for Last-Mile Delivery Under High-Volume Demand Surges
by Didar Yedilkhan, Aidarbek Shalakhmetov, Bakbergen Mendaliyev and Nursultan Khaimuldin
Algorithms 2026, 19(4), 319; https://doi.org/10.3390/a19040319 - 18 Apr 2026
Abstract
Last-mile delivery companies can face demand surges where large-volume order requests exceed daily courier capacity. In such cases fast and robust feasibility-first planning becomes more practical and valuable than building optimal routes. This paper proposes a hierarchical, computationally feasible decomposition pipeline that produces shift-feasible clusters under a strict shift-duration limit using travel-time-based duration estimates. While decomposition methods for large-scale VRPs are well established, they typically remain oriented toward route-construction quality within a single operational day or toward balancing customer counts, demand, or Euclidean territory partitions. In contrast, the proposed method targets a different decision problem: rapid feasibility-first strategic capacity planning for one-time extreme demand surges, where the primary requirement is to estimate, within seconds, a conservative upper bound on the number of courier shifts under a strict shift-duration limit. When end-to-end latency is evaluated from raw geographic points, including distance-matrix preparation for monolithic baselines, the proposed pipeline becomes 187 to 1315 times faster than matrix-based monolithic optimization on the common benchmark sizes. Methodologically, the contribution lies in combining (i) topology-preserving spatial linearization with a Hilbert Space-Filling Curve, (ii) adaptive greedy microclustering driven by empirical travel-time quantiles, and (iii) lexicographic dynamic-programming merge that minimizes the number of shifts first and total travel time second. This yields a planning-oriented decomposition mechanism that is distinct from classical route-quality-centered hierarchical VRP approaches. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
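The feasibility-first microclustering step can be illustrated by a toy greedy packer that walks orders in their linearized (space-filling-curve) sequence and opens a new courier shift whenever the duration limit would be exceeded. This is a simplified sketch, not the paper's adaptive quantile-driven algorithm or its lexicographic DP merge.

```python
def greedy_shift_clusters(durations, shift_limit):
    """Feasibility-first greedy packing: walk orders in their linearized
    (space-filling-curve) sequence and open a new courier shift whenever
    adding the next order's duration would exceed the shift-duration limit.
    Returns a list of shift clusters; len(result) upper-bounds the shifts
    needed."""
    clusters, current, load = [], [], 0.0
    for d in durations:
        if current and load + d > shift_limit:
            clusters.append(current)       # close the full shift
            current, load = [], 0.0        # open a new one
        current.append(d)
        load += d
    if current:
        clusters.append(current)
    return clusters
```

Because it never backtracks, the cluster count is a conservative upper bound on the required shifts, which matches the planning objective described in the abstract (a fast bound rather than optimal routes).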
28 pages, 1062 KB  
Article
Predicting Enterprise AI Adoption in Europe from Cloud Sophistication, Digital Sales Capabilities, and Enterprise Size
by Cristiana Tudor
Algorithms 2026, 19(4), 316; https://doi.org/10.3390/a19040316 - 17 Apr 2026
Abstract
This paper examines whether broad enterprise AI adoption in Europe is best understood as an isolated technology decision or as the outcome of a wider bundle of digital capabilities. Using harmonized Eurostat data for European enterprises, the analysis builds a repeated cross-section at the country–size-class–year level and models high AI adoption with a combination of random forest and elastic-net estimation. The dependent variable captures enterprises using at least one AI technology, while the explanatory set focuses on cloud adoption, cloud CRM, cloud ERP, cloud database hosting, cloud security, cloud software use, e-sales intensity, and enterprise size. The findings reveal a stable predictive structure and consistent classification performance across specifications. Across models, cloud CRM and e-sales emerge as the strongest predictors of high AI adoption, followed by general cloud use and selected data-related cloud capabilities. This ordering remains largely stable in threshold-sensitivity checks based on alternative definitions of high adoption. The pattern also remains visible when country controls are removed, which suggests that the result is not merely a reflection of national heterogeneity. The paper contributes by shifting attention from broad claims about “digital readiness” to a narrower and more operational notion of capability complementarity: AI uptake tends to cluster where firms already possess customer-facing, cloud-based, and commercially digital infrastructures. In that sense, the paper offers a transparent, reproducible, and policy-relevant account of the digital foundations of enterprise AI adoption in Europe. Full article
(This article belongs to the Special Issue AI-Driven Business Analytics Revolution)
21 pages, 9775 KB  
Article
Microstructural Stability of 316 L Produced by Additive Manufacturing for Nuclear Applications
by Roberto Montanari, Alessandra Palombi, Maria Richetta, Giulia Stornelli, Alessandra Varone and Ali Zahid
Materials 2026, 19(8), 1610; https://doi.org/10.3390/ma19081610 - 17 Apr 2026
Abstract
Additive manufacturing (AM) is a promising technology for manufacturing components of nuclear reactors. This work investigated the microstructural stability of 316 L steel fabricated via Laser Powder Bed Fusion (L-PBF) from room temperature to 650 °C. Despite the reduced susceptibility of the material to sensitization owing to its low carbon content, temperature variations may induce deleterious effects in nuclear safety-critical components. In the as-printed condition, the microstructure is not stable and undergoes significant changes induced by thermal cycling up to 650 °C in Mechanical Spectroscopy (MS) tests: the typical melt-pool pattern disappears, a population of equiaxed grains substitutes the original ones elongated in the build direction, the average size of the cells forming a finer sub-structure inside the grains increases, the texture changes, and the excess of vacancies induced by the rapid cooling is recovered. Although the current literature reports that the microstructure is stable up to 500 °C, MS results indicate that the aforesaid irreversible phenomena start at a lower temperature (~230 °C). The present results suggest that the microstructure of the printed material must be stabilized through suitable heat treatments before its application in structural components for nuclear reactors. Full article

24 pages, 7609 KB  
Article
CGHD: Dual-Temporal Dataset of Composite Geological Hazards via Multi-Source Optical Remote Sensing Images
by Yuebao Wang, Guang Yang, Xiaotong Guo, Wangze Lu, Rongxiang Liu, Meng Huang and Shuai Liu
Remote Sens. 2026, 18(8), 1198; https://doi.org/10.3390/rs18081198 - 16 Apr 2026
Abstract
Geological hazards are characterized by their sudden occurrence, high destructiveness, and wide spatial impact. In particular, landslides and debris flows triggered by earthquakes and intense rainfall often lead to severe casualties and substantial property losses. Therefore, the rapid delineation of affected areas is [...] Read more.
Geological hazards are characterized by their sudden occurrence, high destructiveness, and wide spatial impact. In particular, landslides and debris flows triggered by earthquakes and intense rainfall often lead to severe casualties and substantial property losses. Therefore, the rapid delineation of affected areas is crucial for disaster assessment and post-disaster reconstruction. To this end, several geohazard datasets have been developed from remote sensing imagery, focusing on specific regions, disaster types, and data sources, providing valuable support for geohazard detection and risk assessment. Our study addresses the diversity of real-world geological disasters in terms of their types, causes, and spatial distribution and constructs the Composite Geological Hazards Dataset (CGHD), a dual-temporal geohazard dataset that enhances generalisation and practical applicability. CGHD incorporates pre- and post-disaster remote sensing images of 14 landslide and debris flow events that occurred worldwide between 2017 and 2024, collected using four remote sensing platforms and encompassing multiple spatial scales and land-cover categories. The affected areas varied significantly in size and shape, with land-cover types including roads, buildings, vegetation, farmland, and water bodies. This resulted in 3963 pairs of pre- and post-disaster images, each with a size of 1024 × 1024 pixels. We validated the reliability of the CGHD through experiments with nine change-detection models and further evaluated its generalisation capability using an unseen dataset. The experimental results demonstrate that CGHD achieves high recognition accuracy and strong generalisation across diverse geographic environments, providing comprehensive data support for intelligent geohazard recognition and disaster assessment. Full article
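The abstract does not spell out how the 3963 pairs were produced; a minimal sketch of how co-registered pre-/post-disaster scenes could be cut into aligned 1024 × 1024 tile pairs, as the dataset description implies (the function name and placeholder scene sizes are illustrative assumptions, not CGHD's actual pipeline):

```python
import numpy as np

def tile_pair(pre, post, tile=1024):
    """Cut a co-registered pre/post image pair into aligned square tiles.

    Partial tiles at the right and bottom edges are dropped, so every
    output tile is exactly tile x tile pixels.
    """
    assert pre.shape[:2] == post.shape[:2], "pair must be co-registered to the same size"
    h, w = pre.shape[:2]
    pairs = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            pairs.append((pre[y:y + tile, x:x + tile],
                          post[y:y + tile, x:x + tile]))
    return pairs

# Placeholder RGB scenes standing in for one pre/post acquisition.
pre = np.zeros((2500, 3000, 3), dtype=np.uint8)
post = np.zeros((2500, 3000, 3), dtype=np.uint8)
pairs = tile_pair(pre, post)
print(len(pairs), pairs[0][0].shape)  # 2x2 grid of full tiles -> 4 pairs of (1024, 1024, 3)
```

Change-detection models are then trained on each (pre, post) tile pair with a per-pixel mask of the affected area as the target.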

22 pages, 4742 KB  
Article
A Novel E-Nose Architecture Based on Virtual Sensor-Augmented Embedded Intelligence for a Real-Time In-Vehicle Carbon Monoxide Concentration Estimation System
by Dharmendra Kumar, Anup Kumar Rabha, Ashutosh Mishra, Rakesh Shrestha and Navin Singh Rajput
Electronics 2026, 15(8), 1671; https://doi.org/10.3390/electronics15081671 - 16 Apr 2026
Abstract
The increasing risk of air pollution in enclosed spaces such as passenger vehicles calls for smart, real-time air quality monitoring solutions. Gases such as carbon monoxide (CO), which is colorless and odorless and is produced by exhaust systems, air conditioners, and combustion sources, pose a serious health hazard [...] Read more.
The increasing risk of air pollution in enclosed spaces such as passenger vehicles calls for smart, real-time air quality monitoring solutions. Gases such as carbon monoxide (CO), which is colorless and odorless and is produced by exhaust systems, air conditioners, and combustion sources, pose a serious health hazard because they can cause respiratory distress and poisoning at high concentrations. Traditional in-vehicle CO monitoring systems use a single-point sensor and a fixed threshold, which are insufficient in a dynamic cabin environment subject to factors such as vehicle size, ventilation rate, number of occupants, and incoming traffic. To address these drawbacks, this paper proposes a new E-Nose system with Virtual Sensor-Augmented Embedded Intelligence to estimate the CO concentration in vehicle cabins in real time. The system combines data from low-cost gas sensors and refines it using virtual-sensor machine learning models trained to predict or enhance sensor responses in real time. Embedded intelligence, deployed locally on edge hardware, supports low-latency processing, dynamic calibration, and noise filtering to adapt to fluctuating environmental conditions. This architecture enables more accurate, robust, and context-aware estimation of CO levels than traditional threshold-based methods. Experimental validation across varied vehicular scenarios demonstrates superior precision and responsiveness, providing timely warnings even under complex dispersion patterns. The Gradient Boosting classifier, which builds an ensemble of weak learners sequentially, matched the Random Forest with 99.94% training accuracy and 98.59% model accuracy, confirming its strong predictive capability. The system is designed to be cost-effective, scalable, and easily integrable into modern automotive platforms. This study also contributes to the field of smart environmental monitoring and demonstrates the effectiveness of the virtual sensor-enhanced embedded system as a way to improve passenger safety through pre-emptive on-board air quality monitoring. Full article
(This article belongs to the Special Issue Emerging IoT Sensor Network Technologies and Applications)
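The Gradient Boosting vs. Random Forest comparison reported above uses standard ensemble methods; a minimal sketch of that kind of comparison with scikit-learn, on synthetic stand-in sensor features (the feature meanings, label rule, and hyperparameters here are illustrative assumptions, not the paper's actual data or settings):

```python
# Sketch: comparing Gradient Boosting and Random Forest classifiers on
# synthetic data standing in for in-cabin gas-sensor readings.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: raw gas-sensor response, temperature, humidity, ventilation rate.
X = rng.normal(size=(n, 4))
# Hypothetical label: "high CO" when a noisy weighted combination exceeds zero.
y = (1.5 * X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 3]
     + rng.normal(scale=0.2, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Gradient Boosting fits weak learners sequentially, each correcting its predecessors;
# Random Forest averages independently grown trees.
gb = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1,
                                random_state=0).fit(X_tr, y_tr)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print(f"GB test accuracy: {gb.score(X_te, y_te):.3f}")
print(f"RF test accuracy: {rf.score(X_te, y_te):.3f}")
```

On this easy synthetic problem both ensembles score similarly, mirroring the near-tie the abstract reports; the paper's accuracy figures of course come from its own in-vehicle dataset.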
