Search Results (379)

Search Parameters:
Keywords = very large-scale integration

18 pages, 16226 KB  
Article
Liquefaction Hazard Assessment and Mapping Across the Korean Peninsula Using Amplified Liquefaction Potential Index
by Woo-Hyun Baek and Jae-Soon Choi
Appl. Sci. 2026, 16(2), 612; https://doi.org/10.3390/app16020612 - 7 Jan 2026
Abstract
Liquefaction is a critical mechanism amplifying earthquake-induced damage, necessitating systematic hazard assessment through spatially distributed mapping. This study presents a nationwide liquefaction hazard assessment framework for South Korea, integrating site classification, liquefaction potential index (LPI) computation, and probabilistic damage evaluation. Sites across the Korean Peninsula were stratified into five geotechnical categories (S1–S5) based on soil characteristics. LPI values were computed incorporating site-specific amplification coefficients for nine bedrock acceleration levels corresponding to seismic recurrence intervals of 500, 1000, 2400, and 4800 years per Korean seismic design specifications. Subsurface characterization utilized standard penetration test (SPT) data from 121,821 boreholes, with an R-based analytical program enabling statistical processing and spatial visualization. Damage probability assessment employed Iwasaki’s LPI severity classification across site categories. Results indicate that at 0.10 g peak ground acceleration (500-year event), four regions exhibit severe liquefaction susceptibility. This geographic footprint expands to seven regions at 0.14 g (1000-year event) and eight regions at 0.18 g. For the 2400-year design basis earthquake (0.22 g), all eight identified high-risk zones reach critical thresholds simultaneously. Site-specific analysis reveals stark contrasts in vulnerability: S2 sites demonstrate 99% very low to low damage probability, whereas S3, S4, and S5 sites face 33%, 51%, and 99% severe damage risk, respectively. This study establishes a scalable, evidence-based framework enabling efficient large-scale liquefaction hazard assessment for governmental risk management applications. Full article
(This article belongs to the Special Issue Soil Dynamics and Earthquake Engineering)
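Iwasaki's LPI, on which the abstract's damage-severity classification is based, weights a per-depth liquefaction severity term over the top 20 m of a borehole log. A minimal sketch of that computation, assuming the standard Iwasaki (1978) formulation (the paper's amplification-adjusted variant is not reproduced here):

```python
def iwasaki_lpi(depths_m, factors_of_safety, dz=1.0):
    """Liquefaction Potential Index per Iwasaki (1978):
    LPI = sum of F(z) * w(z) * dz over 0-20 m, where
    F(z) = 1 - FS(z) if FS < 1 else 0, and w(z) = 10 - 0.5 * z."""
    lpi = 0.0
    for z, fs in zip(depths_m, factors_of_safety):
        if z > 20.0:
            break                    # depths below 20 m do not contribute
        f = max(0.0, 1.0 - fs)       # severity contribution at depth z
        w = 10.0 - 0.5 * z           # linear depth weighting
        lpi += f * w * dz
    return lpi

# Iwasaki severity classes: 0 none, 0-5 low, 5-15 high, >15 very high
```

With layer factors of safety from SPT-based triggering analysis, this yields one LPI value per borehole, which the study then aggregates spatially per site class.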

25 pages, 6501 KB  
Article
Automated Detection of Submerged Sandbar Crest Using Sentinel-2 Imagery
by Benjamí Calvillo, Eva Pavo-Fernández, Manel Grifoll and Vicente Gracia
Remote Sens. 2026, 18(1), 132; https://doi.org/10.3390/rs18010132 - 30 Dec 2025
Abstract
Coastal sandbars play a crucial role in shoreline protection, yet monitoring their dynamics remains challenging due to the cost and limited temporal coverage of traditional surveys. This study assesses the feasibility of using Sentinel-2 multispectral imagery combined with the logarithmic band ratio method to automatically detect submerged sandbar crests along three morphologically distinct beaches on the northwestern Mediterranean coast. Pseudo-bathymetry was derived from log-transformed band ratios of blue-green and blue-red reflectance, used to extract the sandbar crest, and validated against high-resolution in situ bathymetry. The blue-green band ratio achieved higher overall accuracy, while the blue-red ratio performed slightly better in very shallow waters. Its application across single, single/double, and double shore-parallel bar systems demonstrated the robustness and transferability of the approach. However, the method requires relatively clear or calm water conditions, and breaking-wave foam, sunglint, or cloud cover limit the number of usable satellite images. A temporal analysis at a dissipative beach further revealed coherent bar migration patterns associated with storm events, consistent with observed hydrodynamic forcing. The proposed method is cost-free, computationally efficient, and broadly applicable for large-scale and long-term sandbar monitoring where optical water clarity permits. Its simplicity enables integration into coastal management frameworks, supporting sediment-budget assessment and resilience evaluation in data-limited regions. Full article
(This article belongs to the Section Ocean Remote Sensing)
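The logarithmic band ratio named in the abstract is commonly implemented as a Stumpf-style ratio of log-transformed reflectances; a sketch assuming that formulation and a simple local-minimum crest picker (the function names and scaling constant `n` are illustrative, not from the paper):

```python
import numpy as np

def pseudo_bathymetry(r_blue, r_green, n=1000.0):
    """Stumpf-style log band ratio (one common formulation; the paper's
    exact transform may differ). The ratio increases with depth and is
    normally calibrated linearly against in situ soundings."""
    r_blue = np.asarray(r_blue, dtype=float)
    r_green = np.asarray(r_green, dtype=float)
    return np.log(n * r_blue) / np.log(n * r_green)

def sandbar_crest(psdb_cross_shore):
    """Crest = local minimum of pseudo-depth along a cross-shore profile
    (the shallowest point over the bar)."""
    p = np.asarray(psdb_cross_shore, dtype=float)
    # interior points strictly lower than both neighbours
    idx = np.where((p[1:-1] < p[:-2]) & (p[1:-1] < p[2:]))[0] + 1
    return idx
```

Applied per cross-shore transect of a Sentinel-2 scene, this yields crest positions that can be tracked through time where water clarity permits.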

35 pages, 2605 KB  
Systematic Review
Blockchain and Data Management Security for Sustainable Digital Ecosystems: A Systematic Literature Review
by Javier Gamboa-Cruzado, Victor Pineda-Delacruz, Humberto Salcedo-Mera, Cristina Alzamora Rivero, José Coveñas Lalupu and Manuel Narro-Andrade
Sustainability 2026, 18(1), 185; https://doi.org/10.3390/su18010185 - 24 Dec 2025
Abstract
Blockchain has been widely proposed to strengthen data management security through decentralization, immutability, and auditable transactions, capabilities increasingly recognized as enablers of sustainable digital ecosystems and resilient institutions; however, existing studies remain dispersed across domains and rarely consolidate governance, interoperability, and evaluation criteria. This paper conducts a systematic literature review of 70 peer-reviewed studies published between 2018 and 2024, using IEEE Xplore, Scopus, Springer, ScienceDirect, and ACM Digital Library as primary sources and following Kitchenham’s guidelines and the PRISMA 2020 flow, to examine how blockchain has been applied to secure data in healthcare, IoT, smart cities, supply chains, and cloud environments. The analysis identifies four methodological streams—empirical implementations, cryptographic/security protocols, blockchain–machine learning integrations, and conceptual frameworks—and shows that most contributions are technology-driven, with limited attention to standard metrics, regulatory compliance, and cross-platform integration. In addition, the review reveals that very few works articulate governance models that align technical solutions with organizational policies, which creates a gap for institutions seeking trustworthy, auditable, and privacy-preserving deployments. The review contributes a structured mapping of effectiveness criteria (confidentiality, auditability, availability, and compliance) and highlights the need for governance models and interoperable architectures to move from prototypes to production systems. Future work should prioritize large-scale validations, policy-aligned blockchain solutions, and comparative evaluations across sectors. Full article
(This article belongs to the Special Issue Remote Sensing for Sustainable Environmental Ecology)

23 pages, 2630 KB  
Article
RMLP-Cap: An End-to-End Parasitic Capacitance Extraction Flow Based on ResMLP
by Xinya Zhou, Jiacheng Zhang, Bin Li, Wenchao Liu, Zhaohui Wu and Bing Lu
Electronics 2026, 15(1), 36; https://doi.org/10.3390/electronics15010036 - 22 Dec 2025
Abstract
With continued transistor scaling and increasing interconnect density in very large-scale integration (VLSI) circuits, the parasitic capacitance of interconnect has become a major contributor to circuit delay and signal integrity degradation. Fast and accurate parasitic capacitance extraction is therefore essential in the back-end-of-line (BEOL) stage. Commercial tools currently rely on 2.5D extraction flows based on pattern matching, which still suffer from lengthy pattern library construction, cross-section preprocessing, pattern mismatch, and poor accuracy for small capacitances. To overcome these limitations, this work proposes an end-to-end parasitic capacitance extraction workflow, named residual multilayer perceptron interconnect parasitic capacitance extraction (RMLP-Cap), which leverages a residual multilayer perceptron (ResMLP) to enhance the traditional workflow. RMLP-Cap integrates parasitic extraction (PEX) window acquisition, pattern definition, feature extraction, dataset generation, ResMLP model training, and capacitance aggregation into a unified flow. Experimental results show that RMLP-Cap can automatically define and model complex 2D patterns with 100% matching accuracy. Compared with a field solver based on the boundary element method (BEM), the ResMLP model achieves an average relative error below 0.9%, a standard deviation under 0.2%, and less than 0.5% error for small capacitances, while improving extraction speed by 900%. Full article
(This article belongs to the Section Microelectronics)
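The ResMLP regressor at the core of RMLP-Cap can be pictured as a stack of residual MLP blocks followed by a linear head mapping window features to a capacitance value. A minimal forward-pass sketch, assuming illustrative layer shapes (the paper's actual architecture, features, and training setup are not given in this listing):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def resmlp_block(x, W1, b1, W2, b2):
    """One residual MLP block: x + MLP(x). The skip connection eases
    training of deeper regressors; layer widths here are illustrative."""
    return x + W2 @ relu(W1 @ x + b1) + b2

def predict_capacitance(features, blocks, w_out, b_out):
    """Stack residual blocks, then a linear head that regresses the
    capacitance for one parasitic-extraction (PEX) window."""
    h = np.asarray(features, dtype=float)
    for (W1, b1, W2, b2) in blocks:
        h = resmlp_block(h, W1, b1, W2, b2)
    return float(w_out @ h + b_out)
```

Per-window predictions would then be aggregated per net, as in the flow's capacitance aggregation stage.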

18 pages, 1828 KB  
Article
Analyzing Eutrophication Conditions in the Gulf of Mexico Using the SIMAR Integral Marine Water Quality Index (ICAM-SIMAR-Integral)
by Hansel Caballero-Aragón, Eduardo Santamaría-del-Ángel, Sergio Cerdeira-Estrada, Raúl Martell-Dubois, Laura Rosique-de-la-Cruz and Jaime Valdez-Chavarin
Sustainability 2025, 17(24), 11354; https://doi.org/10.3390/su172411354 - 18 Dec 2025
Abstract
The ocean is a priority for governments and international organizations. Large-scale, in situ ocean water quality monitoring programs are not very feasible due to the high costs associated with their implementation and operation. In this work, we present a tool for assessing ocean conditions, the SIMAR Integrated Marine Water Quality Index (ICAM-SIMAR-Integral), composed of two satellite parameters and three numerical models. We evaluated its spatiotemporal variability at 10 sites in the Gulf of Mexico, which have dissimilar environmental conditions. We validated its use by comparing it with the TRIX trophic index at 41 sites. To construct the index, the five parameters were standardized using a logarithmic equation and then summed, weighted according to their relationship with eutrophication. An index with a scale of 1 to 100 was obtained, divided into five classification intervals: oligotrophic, mesotrophic, eutrophic, supertrophic, and hypertrophic. The median values of the index and its parameters exhibited significant spatial and temporal variability, consistent with the literature’s criteria regarding their values and eutrophication thresholds. Comparison with TRIX showed no significant differences, validating the implementation of ICAM-SIMAR-Integral as an easily interpreted early warning system for managers and decision-makers in conservation matters. This index will allow for continuous, large-scale monitoring of the ocean, thereby contributing synoptically to its sustainable use. Full article
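The index construction described above (log-standardize five parameters, apply eutrophication-related weights, sum, rescale to 1-100, classify into five trophic intervals) can be sketched as follows. The weights, transform ranges, and class breakpoints below are placeholders, not values from the paper:

```python
import math

# Illustrative only: the paper's actual parameters, weights, log-transform
# constants, and class breakpoints are not given in this listing.
WEIGHTS = {"chl": 0.30, "turbidity": 0.20, "sst": 0.15,
           "currents": 0.15, "nutrients": 0.20}

def standardize(value, vmin, vmax):
    """Log-standardize a raw parameter onto [0, 1]."""
    lo, hi = math.log10(vmin), math.log10(vmax)
    z = (math.log10(value) - lo) / (hi - lo)
    return min(1.0, max(0.0, z))

def icam_integral(raw, ranges):
    """Weighted sum of standardized parameters, rescaled to 1-100."""
    s = sum(WEIGHTS[k] * standardize(raw[k], *ranges[k]) for k in WEIGHTS)
    return 1.0 + 99.0 * s

def trophic_class(index):
    """Five classification intervals, as in the abstract; cutoffs assumed."""
    for cutoff, label in [(20, "oligotrophic"), (40, "mesotrophic"),
                          (60, "eutrophic"), (80, "supertrophic")]:
        if index <= cutoff:
            return label
    return "hypertrophic"
```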

21 pages, 667 KB  
Article
CSF: Fixed-Outline Floorplanning Based on the Conjugate Subgradient Algorithm and Assisted by Q-Learning
by Xinyan Meng, Huabin Cheng, Yu Chen, Jianguo Hu and Ning Xu
Electronics 2025, 14(24), 4893; https://doi.org/10.3390/electronics14244893 - 12 Dec 2025
Abstract
Analytical floorplanning algorithms are prone to local convergence and struggle to generate high-quality results; therefore, this paper proposes a nonsmooth analytical placement model and develops a Q-learning-assisted conjugate subgradient algorithm (CSAQ) for efficient floorplanning that addresses these issues. By integrating a population-based strategy and an adaptive step size adjustment driven by Q-learning, the CSAQ strikes a balance between exploration and exploitation to avoid suboptimal solutions in fixed-outline floorplanning scenarios. Experimental results on the MCNC and GSRC benchmarks demonstrate that the proposed CSAQ not only effectively solves global placement planning problems but also significantly outperforms existing constraint graph-based legalization methods, as well as the improved variants, in terms of the efficiency of generating legal floorplans. For hard module-only placement scenarios, it exhibits competitive performance compared to the state-of-the-art algorithms. Full article

20 pages, 9502 KB  
Article
Meta-Path-Based Probabilistic Soft Logic for Drug–Target Interaction Predictions
by Shengming Zhang and Yizhou Sun
Mathematics 2025, 13(24), 3958; https://doi.org/10.3390/math13243958 - 12 Dec 2025
Abstract
Drug–target interaction (DTI) predictions, which aim to predict whether a drug will bind to a target, have received wide attention recently. The goal is to automate and accelerate the costly process of drug design. Most recently proposed methods use only single drug–drug and target–target similarity information for DTI predictions and are thus unable to exploit the many other types of similarity between drugs and between targets. Very recently, some methods have been proposed to leverage multi-similarity information; however, they still lack the ability to take into consideration the rich topological information of the knowledge bases in which the drugs and targets reside. Furthermore, the high computational cost of these approaches limits their scalability to large-scale networks. To address these challenges, we propose a novel approach named summated meta-path-based probabilistic soft logic (SMPSL). Unlike the original PSL framework, which often overlooks quantitative path frequency, SMPSL explicitly captures meta-path count information. By integrating summated meta-path counts into the PSL framework, our method not only significantly reduces the computational overhead but also effectively models the heterogeneity of the network for robust DTI predictions. We evaluated SMPSL against five strong baselines on three public datasets. The experimental results demonstrate that our approach outperformed all baselines in terms of AUPR and AUC scores. Full article
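Meta-path counts have a convenient linear-algebra form: chaining the adjacency matrices of a meta-path's edge types gives, for each drug–target pair, the number of path instances of that type. A sketch of that counting step, with the saturation constant for converting counts into soft truth values chosen illustratively (the paper's exact count-to-truth mapping is not given here):

```python
import numpy as np

def meta_path_counts(adjacencies):
    """Count instances of a meta-path (e.g. drug-drug-target) by chaining
    adjacency matrices: entry [i, j] is the number of distinct paths from
    drug i to target j following that type sequence."""
    counts = np.asarray(adjacencies[0])
    for a in adjacencies[1:]:
        counts = counts @ np.asarray(a)
    return counts

def summated_score(path_count, saturation=3.0):
    """Map a raw count into [0, 1] for use as evidence in a PSL rule;
    the saturation constant is an illustrative choice."""
    return np.minimum(1.0, path_count / saturation)
```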

21 pages, 4961 KB  
Article
Toward a Correlative Metrology Approach on the Same 2D Flake: Graphene Oxide Case Study—Sample Preparation and Stability Issues
by Lydia Chibane, Alexandra Delvallée, Nolwenn Fleurence, Sarah Douri, José Morán-Meza, Christian Ulysse, François Piquemal, Nicolas Feltin and Emmanuel Flahaut
Nanomaterials 2025, 15(24), 1861; https://doi.org/10.3390/nano15241861 - 11 Dec 2025
Abstract
Although graphene promises a wide range of applications, large-scale production of this material remains complex. One very common way of obtaining graphene is through the reduction of graphene oxide (GO). In order to fully control this production process, it is necessary to obtain data from different techniques, but a comprehensive characterization methodology and associated metrology are currently lacking. Here, we propose tools for substrate selection (in this study, the most appropriate were silicon and silicon dioxide on silicon) and precautions to be taken when setting up a correlative metrology method integrating atomic force microscopy (AFM), scanning electron microscopy (SEM), Raman microscopy/spectroscopy, scanning microwave microscopy (SMM) and scanning thermal microscopy (SThM). Indeed, in order to obtain reliable data from each of these techniques applied to a unique graphene oxide flake, a measurement strategy must be developed; it could then be implemented to monitor the reduction of GO. Emphasis was placed on the choice of the substrate and on the possible degradation caused by each of the techniques employed, and a running sequence was determined. Full article
(This article belongs to the Special Issue A Sustainable Future Using 2D and 1D Nanomaterials and Nanotechnology)

16 pages, 1604 KB  
Article
Microhardness and Coalification Parameters as Sensitive Indicators of Tectonic Deformation in Coal Seams: A Case Study
by Katarzyna Godyń
Appl. Sci. 2025, 15(24), 12972; https://doi.org/10.3390/app152412972 - 9 Dec 2025
Abstract
The formation of hard coal seams is the outcome of multi-stage, complex transformations of organic matter that lead to an increase in carbon content, a decrease in volatile components, and a progressive evolution of the rock’s structure and texture. Diagenetic and metamorphic processes, which underpin coal formation, largely determine its petrographic and geochemical characteristics, but they are not the only factors controlling the final properties of coal. An equally important role is played by the tectonic history of the region in which the coal seams occur. In this study, we carried out an integrated analysis of coal rank, based on vitrinite reflectance measurements (R0), and mechanical properties, using Vickers microhardness tests (Hv). Coal samples were collected from both sides of a fault plane within a single seam. The results show that the presence of the fault is clearly reflected in the measured parameters. Vitrinite reflectance generally increases towards the fault zone, but in the immediate vicinity of the fault, it exhibits a slight decrease. Subtle yet systematic changes are also observed in microhardness, particularly in the Hv values. The results show that vitrinite reflectance (R0) and microhardness (Hv) vary in a very similar manner—both parameters decrease as the degree of structural degradation of coal increases within the fault zone. This consistent response of R0 and Hv to local structural damage suggests that they may serve as sensitive indicators of the presence and extent of influence of small-scale tectonic dislocations. Their combined application provides additional information on the potential occurrence of a fault and on the degree of structural disturbance of coal in its vicinity. Full article
(This article belongs to the Section Earth Sciences)

30 pages, 16494 KB  
Article
Proposal of Territorial and Environmental Planning Based on Groundwater Specific Vulnerability Zoning
by Valéria Vaz Alonso, Vitor Xatara Branco and Lázaro Valentim Zuquette
Environments 2025, 12(12), 480; https://doi.org/10.3390/environments12120480 - 8 Dec 2025
Abstract
The quality of groundwater is essential to sustain human and environmental activities now and in the future. However, the current intensification of anthropogenic activities has increased the magnitude of contaminant sources. When those contaminants reach the saturated zone (groundwater), their concentrations may render the water unfit for various uses. Assessing the degree of vulnerability is therefore essential for estimating contamination potential and possible risks. This manuscript presents the results obtained by applying a parametric procedure for mapping groundwater vulnerability based on a set of attributes related to contaminant sources, transport, and natural attenuation. In addition to vulnerability zoning, the set of attributes supports the adoption of measures and recommendations related to territorial and environmental planning guidelines and orientations about land use. The open-source Geographical Information System QGIS (version 3.22.4) was used to spatially integrate the different attribute maps and obtain partial indices for contaminant introduction, transport, and attenuation and, hence, the specific vulnerability index. The results divided the region into six classes of specific vulnerability: extremely high (around 23% of the area), very high (20%), moderate (24%), very low (23%), and high and low together accounting for the remaining 10%. These categories were associated with measures and recommendations aimed at territorial and environmental planning and at the protection and control of environmental functions. Approximately 50% of the study area requires restrictive measures regarding buildings, sustainable drainage systems, waste disposal, chemical storage, and petrol stations, and further measures are necessary for the protection of wells and natural springs. 
The method employed can produce results that enable areas to be categorized and ranked in terms of specific vulnerability; however, it requires a large quantity of data and spatial detail appropriate to the scale adopted. The specific vulnerability map produced will help planners make more appropriate territorial and environmental planning and risk management decisions, avoiding groundwater contamination. Full article

70 pages, 16474 KB  
Article
Assessment of the Accuracy of ISRIC and ESDAC Soil Texture Data Compared to the Soil Map of Greece: A Statistical and Spatial Approach to Identify Sources of Differences
by Stylianos Gerontidis, Konstantinos X. Soulis, Alexandros Stavropoulos, Evangelos Nikitakis, Dionissios P. Kalivas, Orestis Kairis, Dimitrios Kopanelis, Xenofon K. Soulis and Stergia Palli-Gravani
Soil Syst. 2025, 9(4), 133; https://doi.org/10.3390/soilsystems9040133 - 25 Nov 2025
Abstract
Soil maps are essential for managing Earth’s resources, but the accuracy of widely used global and pan-European digital soil maps in heterogeneous landscapes remains a critical concern. This study provides a comprehensive evaluation of two prominent datasets, ISRIC-SoilGrids and the European Soil Data Centre (ESDAC), by comparing their soil texture predictions against the detailed Greek National Soil Map, which is based on over 10,000 field samples. The results from statistical and spatial analyses reveal significant discrepancies and weak correlations, with a very low overall accuracy for soil texture class prediction (19–21%) and high Root Mean Square Error (RMSE) values ranging from 13% to 19%. The global models failed to capture local variability, showing very low explanatory power (R2 < 0.2) and systematically underrepresenting soils with extreme textures. Furthermore, these prediction errors are not entirely random but are significantly clustered in hot spots linked to distinct parent materials and geomorphological features. Our findings demonstrate that while invaluable for large-scale assessments, the direct application of global soil databases for regional policy or precision agriculture in a geologically complex country like Greece is subject to considerable uncertainty, highlighting the critical need for local calibration and the integration of national datasets to improve the reliability of soil information. Full article
(This article belongs to the Special Issue Use of Modern Statistical Methods in Soil Science)

36 pages, 3549 KB  
Article
Feasibility of Large-Scale Electric Vehicle Deployment in Islanded Grids: The Canary Islands Case
by Alejandro García García, Víctor Rubio Matilla, Juan Diego López Arquillo and Cristiana Oliveira
Electronics 2025, 14(23), 4579; https://doi.org/10.3390/electronics14234579 - 22 Nov 2025
Abstract
The present integration of electric vehicles into everyday life has the potential to redefine current standards of urban mobility. However, the territorial impact of this deployment demands a multiscale effort to ensure both efficient and sustainable performance; this is even more necessary in a disconnected system like an island. This article addresses the possibility of transforming the existing fossil-fuel-based infrastructure within Europe’s outermost regions into an electric vehicle charging network, with particular emphasis on the Canary Islands’ strategic plans. Using official datasets from Red Eléctrica de España (REE), IDAE, and the Canary Islands’ Energy Transition Plan (PTECan), we develop three scenarios (2025 baseline, 2030, and 2040) to quantify the additional electricity demand, peak load requirements, charging infrastructure needs, and associated greenhouse gas emissions. The methodology combines EV fleet projections, the driving patterns of residents and tourists, and vehicle efficiency data to estimate yearly electricity demand and hourly charging loads. The carbon intensity profiles of each island’s grid are used to calculate well-to-wheel emissions of EVs, benchmarked against internal combustion engine vehicles. The results indicate that achieving 250,000 EVs by 2030 would increase electricity demand by 1.1–1.4 TWh/year (+8–12% of current consumption), requiring approximately 25,000–30,000 public charging points. EV emissions range from 90 to 150 gCO2/km depending on charging time, compared to 160–190 gCO2/km for ICE vehicles. Smart charging and vehicle-to-grid integration could mitigate 15–25% of peak load increases, reducing the curtailment of renewables and deferring grid investments. A comparative analysis with Zealand highlights policy synergies and differences in insular versus continental grids. 
The findings confirm that large-scale EV adoption in the Canary Islands is technically feasible but demanding, as it requires deep, coordinated planning of renewable expansion, storage, and charging infrastructure. BEV WTW advantages become unequivocal once the average grid carbon intensity falls below ≈0.8–0.9 tCO2/MWh, underscoring the primacy of accelerated renewable build-out and demand-side flexibility. Despite uncertainties in adoption and technology trajectories, the approach is transparent and reproducible with official datasets, providing a transferable planning tool for other islanded systems and mainland Europe. The proposed method demonstrates its usefulness in directly linking electrification scenarios with the real capacity of the electricity system, allowing the identification of critical integration thresholds and guiding evidence-based planning decisions. Full article
(This article belongs to the Special Issue Advances in Electric Vehicle Technology)
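The demand-side arithmetic behind such scenarios is straightforward: fleet size times annual mileage times energy per kilometre. A hedged sketch with illustrative parameter values (not the paper's calibrated inputs), including a well-to-wheel comparison against a mid-range ICE figure from the abstract's 160-190 gCO2/km band:

```python
def annual_ev_demand_twh(fleet, km_per_vehicle_year, kwh_per_km):
    """Fleet electricity demand in TWh/year. All three inputs are
    scenario choices, not figures taken from the paper."""
    return fleet * km_per_vehicle_year * kwh_per_km / 1e9  # kWh -> TWh

def wtw_verdict(grid_gco2_per_kwh, kwh_per_km, ice_gco2_per_km=175.0):
    """Well-to-wheel EV emissions per km vs an assumed mid-range ICE
    figure (the abstract reports 160-190 gCO2/km; 175 is used here).
    Note 0.8 tCO2/MWh equals 800 gCO2/kWh."""
    ev = grid_gco2_per_kwh * kwh_per_km
    return ev, ev < ice_gco2_per_km
```

With illustrative inputs (250,000 EVs, 15,000 km/year, 0.18 kWh/km) this formula gives about 0.7 TWh/year; the paper's higher 1.1-1.4 TWh/year range reflects its own mileage, efficiency, and tourist-traffic assumptions.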

20 pages, 4902 KB  
Article
Site Suitability Assessment for Microalgae Plant Deployment in Saudi Arabia Using Multi-Criteria Decision Making and the Analytic Hierarchy Process: A Spatial Approach
by Mohamad Padri, Misdar Amdah, Maisarah Munirah Latief and Claudio Fuentes-Grünewald
Sustainability 2025, 17(23), 10480; https://doi.org/10.3390/su172310480 - 22 Nov 2025
Abstract
Microalgae cultivation presents a promising pathway for sustainable agricultural development in arid environments by minimizing freshwater consumption. In Saudi Arabia, where agricultural expansion coincides with extensive coastal resources, algal biotechnology has emerged as a strategic approach to optimize resource use. This study applies a Geographic Information System (GIS)-based framework integrating the Analytic Hierarchy Process (AHP) within a Multi-Criteria Decision-Making (MCDM) approach to evaluate the suitability of coastal zones for seawater-based microalgae cultivation. Suitability assessment incorporated topography, land use, seawater accessibility, proximity to CO2 emission sources, nutrient availability, and key environmental parameters. The analysis focused on a 24,771 km2 area of interest (AOI) extending from the coastline to the nearest highway. The results indicate that 56% of the AOI is suitable for cultivation, including 4728 km2 classified as highly suitable and 1606 km2 as very highly suitable, predominantly located near industrial CO2 sources and wastewater treatment facilities. Areas with lower suitability remain feasible for cultivation through targeted resource management. These findings highlight the significant potential for large-scale microalgae production in Saudi Arabia, contributing to sustainable biotechnology development and agricultural diversification under the country’s Vision 2030 strategy. Full article
(This article belongs to the Special Issue Agriculture, Food, and Resources for Sustainable Economic Development)
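The AHP weighting step in this suitability framework can be sketched briefly. The pairwise comparison matrix below is purely illustrative (the abstract does not report the paper's actual judgments or weights); weights are taken as the normalized principal eigenvector, with Saaty's consistency ratio as a sanity check:

```python
import numpy as np

# Hypothetical pairwise comparisons (Saaty 1-9 scale) for three of the
# paper's criteria: seawater accessibility, proximity to CO2 sources,
# and nutrient availability. Values are illustrative only.
A = np.array([
    [1.0, 3.0, 5.0],   # seawater accessibility vs. the others
    [1/3, 1.0, 2.0],   # CO2 proximity
    [1/5, 1/2, 1.0],   # nutrient availability
])

# AHP weights: principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = eigvecs[:, k].real
w = w / w.sum()

# Consistency ratio CR = CI / RI, where CI = (lambda_max - n) / (n - 1)
# and RI is the random index (0.58 for n = 3); CR < 0.10 is acceptable.
n = A.shape[0]
lambda_max = eigvals.real[k]
cr = ((lambda_max - n) / (n - 1)) / 0.58
```

In a GIS workflow these weights would then multiply the reclassified criterion rasters before summing into the final suitability surface.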

26 pages, 6770 KB  
Article
TopEros: An Integrated Hydrology and Multi-Process Erosion Model—A Comparison with MUSLE
by Emmanuel Okiria, Noda Keigo, Shin-ichi Nishimura and Yukimitsu Kobayashi
Hydrology 2025, 12(11), 309; https://doi.org/10.3390/hydrology12110309 - 20 Nov 2025
Viewed by 1222
Abstract
Hydro-erosion is a primary driver of soil degradation worldwide, yet accurate catchment-scale prediction remains challenging because sheet, gully, and raindrop-impact detachment processes operate simultaneously at sub-grid scales. We introduce TopEros, a hydro-erosion model that integrates the hydrological framework of TOPMODEL with three distinct erosion modules: sheet erosion, gully erosion, and raindrop-impact detachment. TopEros employs a sub-grid zoning strategy in which each grid cell is partitioned into diffuse-flow (sheet erosion) and concentrated-flow (gully erosion) domains using threshold values of two topographic indices: the topographic index (TI) and the contributing area–slope index (aitanβ). Applied to the Namatala River catchment in eastern Uganda and calibrated with TI = 15 and aitanβ = 35, TopEros identified sheet-dominated and gully-prone areas. The simulated specific sediment yields ranged from 95 to 155 Mg ha−1 yr−1—classified as “high” to “very high”—with gully zones contributing disproportionately large erosion volumes. These results demonstrate the importance of capturing intra-cell heterogeneity: conventional catchment-average approaches can obscure critical erosion hotspots. By explicitly representing multiple soil detachment and transport mechanisms within a unified process-based framework, TopEros has the potential to enhance the realism of catchment-scale erosion estimates and support the precise targeting of soil and water conservation measures. Full article
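The sub-grid zoning strategy can be illustrated with a short sketch. The abstract gives the calibrated thresholds (TI = 15, a·tanβ = 35) but not the exact decision logic, so the rule below (a cell is treated as gully-prone when either index exceeds its threshold) is an assumption:

```python
import numpy as np

def zone_cells(area, slope_rad, ti_thresh=15.0, ai_tanb_thresh=35.0):
    """Partition grid cells into diffuse-flow (sheet erosion) and
    concentrated-flow (gully erosion) domains via two topographic indices."""
    tan_b = np.tan(slope_rad)
    ti = np.log(area / tan_b)      # topographic index, ln(a / tan(beta))
    ai_tanb = area * tan_b         # contributing area-slope index
    # Assumed rule: a cell is gully-prone if either index exceeds its threshold.
    gully = (ti >= ti_thresh) | (ai_tanb >= ai_tanb_thresh)
    return np.where(gully, "gully", "sheet")

# Two illustrative cells: a small steep cell and a large gentle cell
# (contributing area per unit contour length, slope in radians).
zones = zone_cells(np.array([10.0, 5000.0]), np.array([0.3, 0.05]))
```

Within each domain, TopEros would then apply the corresponding erosion module (sheet or gully) rather than a single catchment-average equation.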

34 pages, 466 KB  
Article
biLorentzFM: Hyperbolic Multi-Objective Deep Learning for Reciprocal Recommendation
by Kübra Karacan Uyar and Yücel Batu Salman
Appl. Sci. 2025, 15(22), 12340; https://doi.org/10.3390/app152212340 - 20 Nov 2025
Viewed by 720
Abstract
Reciprocal recommendation requires satisfying preferences on both sides of a match, which differs from standard one-sided settings and often involves hierarchical structure (e.g., skills, seniority, education). We present biLorentzFM, a multi-objective framework that integrates hyperbolic geometry into factorization machine architectures using Lorentz embeddings with learnable curvature and manifold-aware optimization. The approach addresses whether a geometric structure aligned with hierarchical relationships can improve reciprocal matching without requiring major architectural changes. On a large-scale recruitment dataset from Kariyer.Net (1,150,302 interactions, 229,805 candidates), the model achieves candidate and company AUCs of 0.9964 and 0.9913, respectively, representing 6.6% and 6.0% improvements over the strongest Euclidean baseline while maintaining practical inference latency (2.1 ms per batch). Cross-validation analysis confirms robustness (5-fold: 0.9813 ± 0.0002; 3-seed: 0.9964 ± 0.0012) with very large effect sizes (Cohen’s d = 2.89–3.08). Although the per-epoch training time increases by 23.5% due to manifold operations, faster convergence (12 vs. 18 epochs) reduces the total training time by 17.8%. Cross-domain evaluation on Speed Dating data demonstrates generalization beyond explicit hierarchies with a 2.8% AUC improvement despite lacking structured taxonomies. Learned curvature parameters differ by entity type, providing interpretable indicators of hierarchical structure strength. Ablation studies isolate contributions from geometric structure (6.6%), learnable curvature (4.7%), multi-objective learning (2.1%), and explicit feature interactions (0.6%). A systematic comparison reveals that Lorentz embeddings outperform Poincaré ball implementations by 4.4% AUC under identical conditions, which is attributed to numerical stability advantages. The results indicate that pairing standard recommendation architectures with geometry reflecting hierarchical relationships can provide consistent improvements for reciprocal matching, while limitations including cold-start performance, computational overhead at extreme scale, and static hierarchy assumptions suggest directions for future work on adaptive curvature, fairness constraints, and dynamic taxonomies. Full article
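The core geometric ingredient, distance on the Lorentz (hyperboloid) model, can be sketched as follows. The lifting step and curvature handling here are a generic textbook formulation, not the paper's exact implementation, and the embedding values are illustrative:

```python
import numpy as np

def lorentz_inner(x, y):
    # Lorentzian inner product: -x0*y0 + sum_i xi*yi
    return -x[..., 0] * y[..., 0] + np.sum(x[..., 1:] * y[..., 1:], axis=-1)

def lift(v, K=1.0):
    # Lift a Euclidean vector onto the hyperboloid <x, x>_L = -K by
    # solving for the time-like coordinate x0 = sqrt(K + ||v||^2).
    x0 = np.sqrt(K + np.sum(v * v, axis=-1, keepdims=True))
    return np.concatenate([x0, v], axis=-1)

def lorentz_distance(x, y, K=1.0):
    # Geodesic distance for curvature -1/K; the clip guards the arccosh
    # domain against floating-point error near coincident points.
    inner = np.clip(-lorentz_inner(x, y) / K, 1.0, None)
    return np.sqrt(K) * np.arccosh(inner)

cand = lift(np.array([0.3, -0.1]))   # hypothetical candidate embedding
job = lift(np.array([-0.2, 0.5]))    # hypothetical job-posting embedding
d = lorentz_distance(cand, job)
```

Making K a learnable parameter per entity type, as the abstract describes, lets the model adapt how strongly each embedding space curves to reflect hierarchy depth.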