Search Results (1,993)

Search Parameters:
Keywords = analytic extension

15 pages, 2382 KiB  
Article
Study of Metabolite Detectability in Simultaneous Profiling of Amine/Phenol and Hydroxyl Submetabolomes by Analyzing a Mixture of Two Separately Dansyl-Labeled Samples
by Sicheng Quan, Shuang Zhao and Liang Li
Metabolites 2025, 15(8), 496; https://doi.org/10.3390/metabo15080496 - 23 Jul 2025
Abstract
Background: Liquid chromatography-mass spectrometry (LC-MS), widely used in metabolomics, is often limited by low ionization efficiency and ion suppression, which reduce overall metabolite detectability and quantification accuracy. To address these challenges, chemical isotope labeling (CIL) LC-MS has emerged as a powerful approach, offering high sensitivity, accurate quantification, and broad metabolome coverage. This method enables comprehensive profiling by targeting multiple submetabolomes. Specifically, amine-/phenol- and hydroxyl-containing metabolites are labeled using dansyl chloride under distinct reaction conditions. While this strategy provides extensive coverage, the sequential analysis of each submetabolome reduces throughput. To overcome this limitation, we propose a two-channel mixing strategy to improve analytical efficiency. Methods: In this approach, samples labeled separately for the amine/phenol and hydroxyl submetabolomes are combined prior to LC-MS analysis, leveraging the common use of dansyl chloride as the labeling reagent. This integration effectively doubles throughput by reducing LC-MS runtime and associated costs. The method was evaluated using human urine and serum samples, focusing on peak pair detectability and metabolite identification. A proof-of-concept study was also conducted to assess the approach’s applicability in putative biomarker discovery. Results: Results demonstrate that the two-channel mixing strategy enhances throughput while maintaining analytical robustness. Conclusions: This method is particularly suitable for large-scale studies that require rapid sample processing, where high efficiency is essential. Full article
(This article belongs to the Special Issue Method Development in Metabolomics and Exposomics)

12 pages, 961 KiB  
Article
Changes in the Position of Anatomical Points, Cranio-Cervical Posture, and Nasopharyngeal Airspace Dimensions in Complete Denture Wearers—A Cephalometric Pilot Study
by Andrea Maria Chisnoiu, Mihaela Hedeșiu, Oana Chira, Iris Bara, Simona Iacob, Andreea Kui, Smaranda Buduru, Mihaela Păstrav, Mirela Fluerașu and Radu Chisnoiu
Dent. J. 2025, 13(8), 335; https://doi.org/10.3390/dj13080335 - 22 Jul 2025
Abstract
Objectives: The objective of this study was to evaluate changes in anatomical point position, cranio-cervical posture, and respiratory dimensions following conventional bimaxillary total prosthetic rehabilitation. Methods: A prospective, longitudinal, observational, analytical study was conducted on 12 patients, aged 55 to 75 years, at the Department of Dental Prosthetics at the University of Medicine and Pharmacy in Cluj-Napoca. All patients had complete bimaxillary edentulism and received removable dentures as treatment. Clinical and cephalometric analyses were performed before and after prosthetic treatment to compare changes. The cephalometric analysis was based on the guidelines of Tweed and Rocabado for evaluation. Quantitative data were described using the mean and standard deviation for normal distribution and represented by bar graphs with error bars. A paired samples t-test was used to determine differences between groups, with a significance threshold of 0.05 for the bilateral p-value. Results: When analyzing changes in cranial base inclination, the corresponding angles exhibited an increase, indicating cephalic extension. A statistically significant difference in the anteroposterior diameter of the oropharyngeal lumen with and without bimaxillary complete dentures was identified (p < 0.05). For hyperdivergent patients, modifications in the position of anatomical features on cephalometry slightly reduced the VDO and had a slight compensatory effect on skeletal typology. In contrast, for hypodivergent patients, modifications to the position of anatomical landmarks also had a compensatory effect on skeletal typology, increasing the VDO. Conclusion: Changes in the position of anatomical features on cephalometry generally have a compensatory effect on skeletal typology after complete denture placement. 
Complete prosthetic treatment with removable dentures can significantly influence respiratory function by reducing the oropharyngeal lumen and body posture by cephalic extension and attenuation of the lordotic curvature of the cervical spine. Full article
(This article belongs to the Special Issue Women's Research in Dentistry)

37 pages, 4914 KiB  
Article
A Unified Machine Learning Framework for Li-Ion Battery State Estimation and Prediction
by Afroditi Fouka, Alexandros Bousdekis, Katerina Lepenioti and Gregoris Mentzas
Appl. Sci. 2025, 15(15), 8164; https://doi.org/10.3390/app15158164 (registering DOI) - 22 Jul 2025
Abstract
The accurate estimation and prediction of internal states in lithium-ion (Li-Ion) batteries, such as State of Charge (SoC) and Remaining Useful Life (RUL), are vital for optimizing battery performance, safety, and longevity in electric vehicles and other applications. This paper presents a unified, modular, and extensible machine learning (ML) framework designed to address the heterogeneity and complexity of battery state prediction tasks. The proposed framework supports flexible configurations across multiple dimensions, including feature engineering, model selection, and training/testing strategies. It integrates standardized data processing pipelines with a diverse set of ML models, such as a long short-term memory neural network (LSTM), a convolutional neural network (CNN), a feedforward neural network (FFNN), automated machine learning (AutoML), and classical regressors, while accommodating heterogeneous datasets. The framework’s applicability is demonstrated through five distinct use cases involving SoC estimation and RUL prediction using real-world and benchmark datasets. Experimental results highlight the framework’s adaptability, methodological transparency, and robust predictive performance across various battery chemistries, usage profiles, and degradation conditions. This work contributes to a standardized approach that facilitates the reproducibility, comparability, and practical deployment of ML-based battery analytics. Full article
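The "flexible configurations across multiple dimensions" idea can be illustrated with a minimal registry-style pipeline, where feature extractors and models are composed by name. This is my own sketch, not the authors' implementation; the voltage-to-SoC mapping and all numbers are hypothetical placeholders for the real learned models (LSTM, CNN, etc.).

```python
from typing import Callable, Dict, List

class BatteryPipeline:
    """Toy pluggable framework: features and models are registered by name,
    so new estimation configurations compose without changing pipeline code."""

    def __init__(self):
        self.features: Dict[str, Callable[[List[float]], float]] = {}
        self.models: Dict[str, Callable[[float], float]] = {}

    def register_feature(self, name, fn):
        self.features[name] = fn

    def register_model(self, name, fn):
        self.models[name] = fn

    def run(self, raw: List[float], feature: str, model: str) -> float:
        return self.models[model](self.features[feature](raw))

pipe = BatteryPipeline()
# Mean terminal voltage over a window as a toy feature
pipe.register_feature("mean_voltage", lambda v: sum(v) / len(v))
# Hypothetical linear SoC map: 3.0 V -> 0 %, 4.2 V -> 100 %
pipe.register_model(
    "linear_soc",
    lambda x: max(0.0, min(100.0, (x - 3.0) / 1.2 * 100.0)),
)

soc = pipe.run([3.9, 3.92, 3.88, 3.9], feature="mean_voltage", model="linear_soc")
print(round(soc, 1))  # estimated state of charge in percent
```

In the framework described by the paper, the registered models would be trained estimators (LSTM, CNN, FFNN, AutoML, classical regressors) rather than a fixed linear map.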
35 pages, 13218 KiB  
Review
Research Advances in Nanosensor for Pesticide Detection in Agricultural Products
by Li Feng, Xiaofei Yue, Junhao Li, Fangyao Zhao, Xiaoping Yu and Ke Yang
Nanomaterials 2025, 15(14), 1132; https://doi.org/10.3390/nano15141132 - 21 Jul 2025
Abstract
Over the past few decades, pesticide application has increased significantly, driven by population growth and associated urbanization. To date, pesticide use remains crucial for sustaining global food security by enhancing crop yields and preserving quality. However, extensive pesticide application raises serious environmental and health concerns worldwide due to its chemical persistence and high toxicity to organisms, including humans. Therefore, there is an urgent need to develop rapid and reliable analytical procedures for the quantification of trace pesticide residues to support public health management. Traditional methods, such as chromatography-based detection techniques, cannot simultaneously achieve high sensitivity, selectivity, cost-effectiveness, and portability, which limits their practical application. Nanomaterial-based sensing techniques are increasingly being adopted due to their rapid, efficient, user-friendly, and on-site detection capabilities. In this review, we summarize recent advances and emerging trends in commonly used nanosensing technologies, such as optical and electrochemical sensing, with a focus on recognition elements including enzymes, antibodies, aptamers, and molecularly imprinted polymers (MIPs). We discuss the types of nanomaterials used, preparation methods, performance, characteristics, advantages and limitations, and applications of these nanosensors in detecting pesticide residues in agricultural products. Furthermore, we highlight current challenges, ongoing efforts, and future directions in the development of pesticide detection nanosensors. Full article
(This article belongs to the Special Issue Nanosensors for the Rapid Detection of Agricultural Products)

22 pages, 2112 KiB  
Article
Cultural Diversity and the Operational Performance of Airport Security Checkpoints: An Analysis of Energy Consumption and Passenger Flow
by Jacek Ryczyński, Artur Kierzkowski, Marta Nowakowska and Piotr Uchroński
Energies 2025, 18(14), 3853; https://doi.org/10.3390/en18143853 - 20 Jul 2025
Viewed by 151
Abstract
This paper examines the operational consequences and energy demands associated with the growing cultural diversity of air travellers at airport security checkpoints. The analysis focuses on how an increasing proportion of passengers requiring enhanced security screening, due to cultural, religious, or linguistic factors, affects both system throughput and energy consumption. The methodology integrates synchronised measurement of passenger flow with real-time monitoring of electricity usage. Four operational scenarios, representing incremental shares (0–15%) of passengers subject to extended screening, were modelled. The findings indicate that a 15% increase in this passenger group leads to a statistically significant rise in average power consumption per device (3.5%), a total energy usage increase exceeding 4%, and an extension of average service time by 0.6%—the cumulative effect results in a substantial annual contribution to the airport’s carbon footprint. The results also reveal a higher frequency and intensity of power consumption peaks, emphasising the need for advanced infrastructure management. The study emphasises the significance of predictive analytics, dynamic resource allocation, and the implementation of energy-efficient technologies. Furthermore, systematic intercultural competency training is recommended for security staff. These insights provide a scientific basis for optimising airport security operations amid increasing passenger heterogeneity. Full article
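The incremental-share scenarios (0–15% of passengers under extended screening) can be illustrated with a back-of-the-envelope blend of baseline and extended-screening values. The service times and device powers below are assumed, not the paper's measurements, and this linear toy overstates the sensitivity relative to the study's findings (3.5% power rise, 0.6% service-time extension at a 15% share), since in the real system only part of the process chain is affected.

```python
def blended(base, extended, p):
    """Expected per-passenger value when a share p needs extended screening."""
    return (1 - p) * base + p * extended

BASE_TIME, EXT_TIME = 30.0, 90.0   # seconds per passenger (assumed values)
BASE_POWER, EXT_POWER = 2.0, 2.6   # kW per screening device (assumed values)

for p in (0.0, 0.05, 0.10, 0.15):
    t = blended(BASE_TIME, EXT_TIME, p)
    w = blended(BASE_POWER, EXT_POWER, p)
    print(f"share={p:.0%}  mean service={t:.1f}s  mean power={w:.2f}kW")
```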

39 pages, 1774 KiB  
Review
FACTS Controllers’ Contribution for Load Frequency Control, Voltage Stability and Congestion Management in Deregulated Power Systems over Time: A Comprehensive Review
by Muhammad Asad, Muhammad Faizan, Pericle Zanchetta and José Ángel Sánchez-Fernández
Appl. Sci. 2025, 15(14), 8039; https://doi.org/10.3390/app15148039 - 18 Jul 2025
Viewed by 258
Abstract
Incremental energy demand, environmental constraints, restrictions in the availability of energy resources, economic conditions, and political impact prompt the power sector toward deregulation. In addition to these impediments, electric power competition for power quality, reliability, availability, and cost forces utilities to maximize utilization of the existing infrastructure by flowing power on transmission lines near to their thermal limits. All these factors introduce problems related to power network stability, reliability, quality, congestion management, and security in restructured power systems. To overcome these problems, power-electronics-based FACTS devices are one of the beneficial solutions at present. In this review paper, the significant role of FACTS devices in restructured power networks and their technical benefits against various power system problems such as load frequency control, voltage stability, and congestion management will be presented. In addition, an extensive discussion about the comparison between different FACTS devices (series, shunt, and their combination) and comparison between various optimization techniques (classical, analytical, hybrid, and meta-heuristics) that support FACTS devices to achieve their respective benefits is presented in this paper. Generally, it is concluded that third-generation FACTS controllers are more popular to mitigate various power system problems (i.e., load frequency control, voltage stability, and congestion management). Moreover, a combination of multiple FACTS devices, with or without energy storage devices, is more beneficial compared to their individual usage. However, this is not commonly adopted in small power systems due to high installation or maintenance costs. Therefore, there is a trade-off between the selection and cost of FACTS devices to minimize the power system problems. 
Likewise, meta-heuristics and hybrid optimization techniques are commonly adopted to optimize FACTS devices due to their fast convergence, robustness, higher accuracy, and flexibility. Full article
(This article belongs to the Special Issue State-of-the-Art of Power Systems)

15 pages, 588 KiB  
Review
Archaeometry of Ancient Mortar-Based Materials in Roman Regio X and Neighboring Territories: A First Review
by Simone Dilaria
Minerals 2025, 15(7), 746; https://doi.org/10.3390/min15070746 - 16 Jul 2025
Viewed by 257
Abstract
This review synthesizes the corpus of archaeometric and analytical investigations focused on mortar-based materials, including wall paintings, plasters, and concrete, in the Roman Regio X and neighboring territories of northeastern Italy from the mid-1970s to the present. Organized into three principal categories—wall paintings and pigments, structural and foundational mortars, and flooring preparations—the analysis highlights the main methodological advances and progress in petrographic microscopy, mineralogical analysis, and mechanical testing of ancient mortars. Despite extensive case studies, the review identifies a critical need for systematic, statistically robust, and chronologically anchored datasets to fully reconstruct socio-economic and technological landscapes of this provincial region. This work offers a programmatic research agenda aimed at bridging current gaps and fostering integrated understandings of ancient construction technologies in northern Italy. The full forms of the abbreviations used throughout the text to describe the analytical equipment are provided at the end of the document in the “Abbreviations” section. Full article

16 pages, 689 KiB  
Article
Quantification of Total and Unbound Selinexor Concentrations in Human Plasma by a Fully Validated Liquid Chromatography-Tandem Mass Spectrometry Method
by Suhyun Lee, Seungwon Yang, Hyeonji Kim, Wang-Seob Shim, Eunseo Song, Seunghoon Han, Sung-Soo Park, Suein Choi, Sungpil Han, Sung Hwan Joo, Seok Jun Park, Beomjin Shin, Donghyun Kim, Hyeon Su Kim, Kyung-Tae Lee and Eun Kyoung Chung
Pharmaceutics 2025, 17(7), 919; https://doi.org/10.3390/pharmaceutics17070919 - 16 Jul 2025
Viewed by 225
Abstract
Background/Objectives: Selinexor is a selective nuclear-export inhibitor approved for hematologic malignancies, characterized by extensive plasma protein binding (>95%). However, a validated analytical method to accurately measure the clinically relevant unbound fraction of selinexor in human plasma has not yet been established. This study aimed to develop a fully validated bioanalytical assay for simultaneous quantification of total and unbound selinexor concentrations in human plasma. Methods: We established and fully validated an analytical method based on liquid chromatography–tandem mass spectrometry (LC-MS/MS) capable of quantifying total and unbound selinexor concentrations in human plasma. Unbound selinexor was separated using ultrafiltration, and selinexor was efficiently extracted from 50 μL of plasma by liquid–liquid extraction. Chromatographic separation was achieved on a C18 column using an isocratic mobile phase (0.1% formic acid:methanol, 12:88 v/v) with a relatively short runtime of 2.5 min. Results: Calibration curves showed excellent linearity over a range of 5–2000 ng/mL for total selinexor (r2 ≥ 0.998) and 0.05–20 ng/mL for unbound selinexor (r2 ≥ 0.995). The precision (%CV ≤ 10.35%) and accuracy (92.5–104.3%) for both analytes met the regulatory criteria. This method successfully quantified selinexor in plasma samples from renally impaired patients with multiple myeloma, demonstrating potential inter-individual differences in unbound drug concentrations. Conclusions: This validated bioanalytical assay enables precise clinical pharmacokinetic assessments in a short runtime using a small plasma volume and, thus, assists in individualized dosing of selinexor, particularly for renally impaired patients with altered protein binding. Full article
(This article belongs to the Section Pharmacokinetics and Pharmacodynamics)
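Calibration-curve linearity of the kind reported (r² ≥ 0.995–0.998) is assessed by ordinary least squares over the calibration standards. The concentrations and peak-area ratios below are invented for illustration, not the study's calibration data.

```python
def fit_line(x, y):
    """Ordinary least squares fit returning slope, intercept, and r^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

conc = [5, 50, 200, 500, 1000, 2000]               # ng/mL calibration standards
area = [0.012, 0.121, 0.489, 1.215, 2.44, 4.90]    # analyte/IS peak-area ratio
slope, intercept, r2 = fit_line(conc, area)
print(f"r^2 = {r2:.4f}")  # linearity acceptance is typically r^2 >= 0.99
```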

13 pages, 1294 KiB  
Article
From Complex to Quaternions: Proof of the Riemann Hypothesis and Applications to Bose–Einstein Condensates
by Jau Tang
Symmetry 2025, 17(7), 1134; https://doi.org/10.3390/sym17071134 - 15 Jul 2025
Viewed by 234
Abstract
We present novel proofs of the Riemann hypothesis by extending the standard complex Riemann zeta function into a quaternionic algebraic framework. Utilizing λ-regularization, we construct a symmetrized form that ensures analytic continuation and restores critical-line reflection symmetry, a key structural property of the Riemann ξ(s) function. This formulation reveals that all nontrivial zeros of the zeta function must lie along the critical line Re(s) = 1/2, offering a constructive and algebraic resolution to this fundamental conjecture. Our method is built on convexity and symmetrical principles that generalize naturally to higher-dimensional hypercomplex spaces. We also explore the broader implications of this framework in quantum statistical physics. In particular, the λ-regularized quaternionic zeta function governs thermodynamic properties and phase transitions in Bose–Einstein condensates. This quaternionic extension of the zeta function encodes oscillatory behavior and introduces critical hypersurfaces that serve as higher-dimensional analogues of the classical critical line. By linking the spectral features of the zeta function to measurable physical phenomena, our work uncovers a profound connection between analytic number theory, hypercomplex geometry, and quantum field theory, suggesting a unified structure underlying prime distributions and quantum coherence. Full article
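For reference, the critical-line reflection symmetry invoked here is, in the classical complex setting, the functional equation of Riemann's completed zeta function ξ(s) (standard background, not the paper's quaternionic construction):

```latex
\xi(s) = \tfrac{1}{2}\, s(s-1)\, \pi^{-s/2}\, \Gamma\!\left(\tfrac{s}{2}\right) \zeta(s),
\qquad
\xi(s) = \xi(1 - s),
```

so the zeros of ξ, which are exactly the nontrivial zeros of ζ, come in pairs mirrored across the line Re(s) = 1/2.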

23 pages, 3820 KiB  
Article
A Fundamental Statistics Self-Learning Method with Python Programming for Data Science Implementations
by Prismahardi Aji Riyantoko, Nobuo Funabiki, Komang Candra Brata, Mustika Mentari, Aviolla Terza Damaliana and Dwi Arman Prasetya
Information 2025, 16(7), 607; https://doi.org/10.3390/info16070607 - 15 Jul 2025
Viewed by 222
Abstract
The increasing demand for data-driven decision making to maintain the innovations and competitiveness of organizations highlights the need for data science education across academia and industry. At its core is a solid understanding of statistics, which is necessary for conducting a thorough analysis of data and deriving valuable insights. Unfortunately, conventional statistics learning often lacks practice in real-world applications using computer programs, causing a gap between conceptual knowledge of statistical equations and hands-on skills. Integrating statistics learning into Python programming offers an effective solution to this problem, as Python has become essential in data science implementations thanks to its extensive and versatile libraries. In this paper, we present a self-learning method for fundamental statistics through Python programming for data science studies. Unlike conventional approaches, our method integrates three types of interactive problems—element fill-in-blank problem (EFP), grammar-concept understanding problem (GUP), and value trace problem (VTP)—in the Programming Learning Assistant System (PLAS). This combination allows students to write code, understand concepts, and trace output values while obtaining instant feedback, so that they can improve retention, knowledge, and practical skills in learning statistics using Python programming. For evaluation, we generated 22 instances using source codes for fundamental statistics topics, and assigned them to 40 first-year undergraduate students at UPN Veteran Jawa Timur, Indonesia. Statistical methods were used to analyze student learning performance. The results show a significant difference (p < 0.05) between the students who solved the proposed problems and those who did not, confirming that the method can effectively assist students in self-learning fundamental statistics using Python programming for data science implementations. Full article
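An element fill-in-blank (EFP) style exercise of the kind PLAS serves might look like the following. This is my own illustrative instance, not one of the paper's 22; the learner would see the marked elements blanked out and fill them in before checking the printed values.

```python
import statistics

scores = [70, 75, 80, 85, 90]
n = len(scores)
mean = sum(scores) / n               # blank 1: learner fills in `sum` and `n`
var = statistics.pvariance(scores)   # blank 2: population variance function
std = var ** 0.5                     # blank 3: square root gives the std. dev.
print(mean, var, round(std, 2))
```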

20 pages, 3414 KiB  
Article
Improvement in the Interception Vulnerability Level of Encryption Mechanism in GSM
by Fawad Ahmad, Reshail Khan and Armel Asongu Nkembi
Inventions 2025, 10(4), 56; https://doi.org/10.3390/inventions10040056 - 14 Jul 2025
Viewed by 216
Abstract
Data security is of the utmost importance in the domain of real-time environmental monitoring systems, particularly when employing advanced context-aware intelligent visual analytics. This paper addresses a significant deficiency in the Global System for Mobile Communications (GSM), a widely employed wireless communication system for environmental monitoring. The A5/1 encryption technique, which is extensively employed, ensures the security of user data by utilizing a 64-bit session key that is divided into three linear feedback shift registers (LFSRs). Despite the shown efficacy, the development of a probabilistic model for assessing the vulnerability of breaking or intercepting the session key (Kc) has not yet been achieved. In order to bridge this existing knowledge gap, this study proposes a probabilistic model that aims to evaluate the security of encrypted data within the framework of the Global System for Mobile Communications (GSM). The proposed model implements alterations to the current GSM encryption process by the augmentation of the quantity of Linear Feedback Shift Registers (LFSRs), consequently resulting in an improved level of security. The methodology entails increasing the number of registers while preserving the session key’s length, ensuring that the key length specified by GSM standards remains unaltered. This is especially important for environmental monitoring systems that depend on real-time data analysis and decision-making. In order to elucidate the notion, this analysis considers three distinct scenarios: encryption utilizing a set of five, seven, and nine registers. The majority function is employed to determine the registers that will undergo perturbation, hence increasing the complexity of the bit arrangement and enhancing the security against prospective attackers. 
This paper provides empirical evidence from simulations that increasing the number of registers reduces the vulnerability to data interception, thereby strengthening data security in GSM communication. Simulation results demonstrate that the proposed method substantially reduces the risk of data interception, improving the integrity of context-aware intelligent visual analytics in real-time environmental monitoring systems. Full article
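The majority-rule irregular clocking that A5/1 applies to its three LFSRs, and that the paper extends to five, seven, and nine registers, can be sketched as follows. The register lengths, taps, and clocking-bit positions here are toy values, not the real A5/1 parameters (19/22/23-bit registers with standardized taps).

```python
def majority(a, b, c):
    # Majority of three bits: 1 iff at least two inputs are 1
    return (a & b) | (a & c) | (b & c)

def step_lfsr(reg, taps, length):
    # One Fibonacci-LFSR step: feedback bit is the XOR of the tap positions
    fb = 0
    for t in taps:
        fb ^= (reg >> t) & 1
    return ((reg << 1) | fb) & ((1 << length) - 1)

# Toy parameters -- NOT the real A5/1 register lengths, taps, or clocking bits
regs = [0b1011, 0b0110, 0b1101]
LEN = 4
taps = [(3, 0), (3, 1), (3, 2)]
CLOCK_BIT = 1  # position of the clocking bit in each toy register

keystream = []
for _ in range(8):
    bits = [(r >> CLOCK_BIT) & 1 for r in regs]
    m = majority(*bits)
    for i in range(3):
        if bits[i] == m:  # only registers agreeing with the majority advance
            regs[i] = step_lfsr(regs[i], taps[i], LEN)
    # Keystream bit: XOR of the registers' most significant bits
    keystream.append(((regs[0] >> 3) ^ (regs[1] >> 3) ^ (regs[2] >> 3)) & 1)

print(keystream)
```

Because at least two of the three registers advance on every step, an attacker cannot predict register movement without guessing the clocking bits, and adding registers (as the paper proposes) multiplies the number of clocking combinations to search.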

23 pages, 2709 KiB  
Review
Digital Technologies in Urban Regeneration: A Systematic Literature Review from the Perspectives of Stakeholders, Scales, and Stages
by Xiaer Xiahou, Xingyuan Ding, Peng Chen, Yuchong Qian and Hongyu Jin
Buildings 2025, 15(14), 2455; https://doi.org/10.3390/buildings15142455 - 12 Jul 2025
Viewed by 340
Abstract
Urban regeneration, as a key strategy for promoting sustainable development of urban areas, requires innovative digital technologies to address increasingly complex urban challenges in its implementation. With the fast advancement of digital technologies such as artificial intelligence (AI), Internet of Things (IoT), and big data, these technologies have extensively penetrated various dimensions of urban regeneration, from planning and design to implementation and post-operation management, providing new possibilities for improving urban regeneration efficiency and quality. However, the existing literature lacks a systematic evaluation of technology application patterns across different project scales and phases, comprehensive analysis of stakeholder–technology interactions, and quantitative assessment of technology distribution throughout the urban regeneration lifecycle. This research gap limits the in-depth understanding of how digital technologies can better support urban regeneration practices. This study aims to identify and quantify digital technology application patterns across urban regeneration stages, scales, and stakeholder configurations through systematic analysis of 56 high-quality articles from the Scopus and Web of Science databases. Using a mixed-methods approach combining a systematic literature review, bibliometric analysis, and meta-analysis, we categorized seven major digital technology types and analyzed their distribution patterns. Key findings reveal distinct temporal patterns: GIS and BIM/CIM technologies dominate in the pre-urban regeneration (Pre-UR) stage (10% and 12% application proportions, respectively). GIS applications increase significantly to 14% in post-urban regeneration (Post-UR) stage, while AI technology remains underutilized across all phases (2% in Pre-UR, decreasing to 1% in Post-UR). 
Meta-analysis reveals scale-dependent technology adoption patterns, with different technologies showing varying effectiveness at building-level, district-level, and city-level implementations. Research challenges include stakeholder digital divides, scale-dependent adoption barriers, and phase-specific implementation gaps. This study constructs a multi-dimensional analytical framework for digital technology support in urban regeneration, providing quantitative evidence for optimizing technology selection strategies. The framework offers practical guidance for policymakers and practitioners in developing context-appropriate digital technology deployment strategies for urban regeneration projects. Full article

14 pages, 16727 KiB  
Article
Well Begun Is Half Done: The Impact of Pre-Processing in MALDI Mass Spectrometry Imaging Analysis Applied to a Case Study of Thyroid Nodules
by Giulia Capitoli, Kirsten C. J. van Abeelen, Isabella Piga, Vincenzo L’Imperio, Marco S. Nobile, Daniela Besozzi and Stefania Galimberti
Stats 2025, 8(3), 57; https://doi.org/10.3390/stats8030057 - 10 Jul 2025
Abstract
The discovery of proteomic biomarkers in cancer research can be effectively performed in situ by exploiting Matrix-Assisted Laser Desorption Ionization (MALDI) Mass Spectrometry Imaging (MSI). However, due to experimental limitations, the spectra extracted by MALDI-MSI can be noisy, so pre-processing steps are generally needed to reduce the instrumental and analytical variability. Thus far, the importance and the effect of standard pre-processing methods, as well as their combinations and parameter settings, have not been extensively investigated in proteomics applications. In this work, we present a systematic study of 15 combinations of pre-processing steps—including baseline correction, smoothing, normalization, and peak alignment—for a real-data classification task on MALDI-MSI data measured from fine-needle aspiration biopsies of thyroid nodules. The influence of each combination was assessed by analyzing the feature extraction, pixel-by-pixel classification probabilities, and LASSO classification performance. Our results highlight the necessity of fine-tuning a pre-processing pipeline, especially for the reliable transfer of molecular diagnostic signatures in clinical practice. We outline some recommendations on the selection of pre-processing steps, together with filter levels and alignment methods, according to the mass-to-charge range and heterogeneity of data. Full article
(This article belongs to the Section Applied Statistics and Machine Learning Methods)
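The pre-processing steps the abstract names can be illustrated with a minimal sketch. Everything below is an assumption for illustration only — the synthetic spectrum, the moving-minimum baseline estimator, and all parameter values are hypothetical and are not taken from the article (peak alignment, which requires multiple spectra, is omitted):

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess_spectrum(intensities, baseline_window=101, smooth_window=11, poly=3):
    """Toy pre-processing chain for one spectrum: baseline subtraction,
    Savitzky-Golay smoothing, and total-ion-current (TIC) normalization."""
    x = np.asarray(intensities, dtype=float)
    # 1. Baseline: estimate with a moving minimum (window wider than any peak)
    #    and subtract it from the raw trace.
    half = baseline_window // 2
    padded = np.pad(x, half, mode="edge")
    baseline = np.array([padded[i:i + baseline_window].min() for i in range(x.size)])
    corrected = x - baseline
    # 2. Smoothing: a Savitzky-Golay filter attenuates noise while
    #    preserving peak shape better than a plain moving average.
    smoothed = savgol_filter(corrected, window_length=smooth_window, polyorder=poly)
    smoothed = np.clip(smoothed, 0.0, None)  # intensities stay non-negative
    # 3. Normalization: divide by the TIC so spectra become comparable.
    tic = smoothed.sum()
    return smoothed / tic if tic > 0 else smoothed

# Synthetic spectrum: one Gaussian peak at m/z 1050 on a sloping, noisy baseline.
rng = np.random.default_rng(0)
mz = np.linspace(1000, 1100, 500)
raw = 50 + 0.1 * mz + 300 * np.exp(-((mz - 1050) ** 2) / 10) + rng.normal(0, 2, mz.size)
clean = preprocess_spectrum(raw)
```

After the chain runs, the sloping baseline is gone, intensities sum to one, and the peak location is preserved — which is exactly the property the paper's classification step depends on.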

32 pages, 2917 KiB  
Article
Self-Adapting CPU Scheduling for Mixed Database Workloads via Hierarchical Deep Reinforcement Learning
by Suchuan Xing, Yihan Wang and Wenhe Liu
Symmetry 2025, 17(7), 1109; https://doi.org/10.3390/sym17071109 - 10 Jul 2025
Abstract
Modern database systems require autonomous CPU scheduling frameworks that dynamically optimize resource allocation across heterogeneous workloads while maintaining strict performance guarantees. We present a novel hierarchical deep reinforcement learning framework augmented with graph neural networks to address CPU scheduling challenges in mixed database environments comprising Online Transaction Processing (OLTP), Online Analytical Processing (OLAP), vector processing, and background maintenance workloads. Our approach introduces three key innovations: first, a symmetric two-tier control architecture where a meta-controller allocates CPU budgets across workload categories using policy gradient methods while specialized sub-controllers optimize process-level resource allocation through continuous action spaces; second, graph neural network-based dependency modeling that captures complex inter-process relationships and communication patterns while preserving inherent symmetries in database architectures; and third, meta-learning integration with curiosity-driven exploration enabling rapid adaptation to previously unseen workload patterns without extensive retraining. The framework incorporates a multi-objective reward function balancing Service Level Objective (SLO) adherence, resource efficiency, symmetric fairness metrics, and system stability. Experimental evaluation through high-fidelity digital twin simulation and production deployment demonstrates substantial performance improvements: 43.5% reduction in p99 latency violations for OLTP workloads and 27.6% improvement in overall CPU utilization, with successful scaling to 10,000 concurrent processes while maintaining sub-3% scheduling overhead. This work represents a significant advancement toward truly autonomous database resource management, establishing a foundation for next-generation self-optimizing database systems with implications extending to broader orchestration challenges in cloud-native architectures.
Full article
(This article belongs to the Section Computer)
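The two-tier control structure the abstract describes can be sketched without any learning machinery. In this hypothetical toy, the meta-controller's policy network is replaced by fixed per-class scores and the sub-controllers by proportional sharing; the workload classes match the abstract, but every number and name is an illustrative assumption, not the paper's method:

```python
import math

# Hypothetical workload classes with per-process CPU demands (illustrative only).
WORKLOADS = {
    "oltp":   [4.0, 2.0, 2.0],  # latency-sensitive transactions
    "olap":   [8.0, 8.0],       # long analytical scans
    "vector": [3.0],            # similarity-search workers
    "maint":  [1.0, 1.0],       # background maintenance
}

def softmax(scores):
    """Normalize preference scores into budget fractions that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def schedule(total_cpus, class_scores):
    """Tier 1 (meta-controller): turn per-class scores into CPU budgets.
    Tier 2 (sub-controllers): split each class budget across its
    processes in proportion to their declared demand."""
    classes = list(WORKLOADS)
    fractions = softmax([class_scores[c] for c in classes])
    plan = {}
    for cls, frac in zip(classes, fractions):
        budget = total_cpus * frac
        demand = WORKLOADS[cls]
        plan[cls] = [budget * d / sum(demand) for d in demand]
    return plan

# A learned policy would emit these scores; here they are fixed for illustration.
plan = schedule(16, {"oltp": 2.0, "olap": 1.0, "vector": 0.5, "maint": -1.0})
```

The point of the hierarchy is that the top tier never reasons about individual processes: it only moves budget between classes, which keeps its action space small, while each sub-controller solves a local allocation problem inside a budget it cannot exceed.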

25 pages, 3458 KiB  
Article
Comparative Analysis and Performance Evaluation of SSC, n-SAC, and Creep-SCLAY1S Soil Creep Models in Predicting Soil Settlement
by Tulasi Ram Bhattarai, Netra Prakash Bhandary and Gustav Grimstad
Geotechnics 2025, 5(3), 47; https://doi.org/10.3390/geotechnics5030047 - 9 Jul 2025
Abstract
The precise prediction of soil settlement under applied loads is of paramount importance in the field of geotechnical engineering. Conventional analytical approaches often lack the capacity to accurately represent the rate-dependent deformations exhibited by soft soils. Creep affects the integrity of geotechnical structures and can lead to loss of serviceability or even system failure. Over time, these structures deform, the soil fabric weakens, and the risk of collapse consequently increases. Despite extensive research on the creep characteristics of soft soils, the prediction of creep deformation remains a substantial challenge. This study explores soil consolidation settlement by employing three different material models: the Soft Soil Creep (SSC) model implemented in PLAXIS 2D, alongside two user-defined elasto-viscoplastic models, specifically Creep-SCLAY1S and the non-associated creep model for Structured Anisotropic Clay (n-SAC). Through the simulation of laboratory experiments and the Lilla Mellösa test embankment situated in Sweden, the investigation evaluates the strengths and weaknesses of these models. The results demonstrate that the predictions produced by the SSC, n-SAC, and Creep-SCLAY1S models are in close correspondence with the field observations, in contrast to the more simplistic elastoplastic model. The n-SAC and Creep-SCLAY1S models adeptly represent the stress–strain response in CRS test simulations; however, they tend to over-predict horizontal deformations in field assessments. Further investigation is advisable to enhance the ease of use and relevance of these sophisticated models. Full article
(This article belongs to the Special Issue Recent Advances in Geotechnical Engineering (2nd Edition))
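For orientation only (this relation is not taken from the article), the classical secondary-compression law that creep models of this family generalize writes the long-term settlement of a soil layer of thickness H as

    s_c = [C_alpha / (1 + e_0)] * H * log10(t / t_p),   for t > t_p

where C_alpha is the secondary compression index, e_0 the initial void ratio, and t_p the time at the end of primary consolidation. Models such as SSC, n-SAC, and Creep-SCLAY1S extend this isotache idea into full rate-dependent, anisotropic stress–strain formulations rather than a single logarithmic settlement term.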
