Search Results (3,319)

Search Parameters:
Keywords = model hierarchy

24 pages, 5990 KB  
Article
A Study on the Evaluation of Symbiotic Levels and Development Strategies for Clustered Traditional Villages in Tourism, Based on Symbiosis Theory: A Case Study of Jia County, Shaanxi Province
by Yue Shang, Zhonghua Zhang, Jiawen Fang and Minghui Liu
Sustainability 2026, 18(9), 4215; https://doi.org/10.3390/su18094215 (registering DOI) - 23 Apr 2026
Abstract
Protecting and preserving the agricultural heritage, folk culture and ecological environment of traditional villages is a key element in advancing the strategy for comprehensive rural revitalisation. This paper constructs a theoretical framework for tourism symbiosis, examines the level of tourism symbiosis in the 13 national-level traditional villages of Jia County, and proposes strategies for tourism development. This study employs the Analytic Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), alongside spatial analysis techniques such as hotspot analysis, to reveal the levels of tourism symbiosis in traditional villages and their spatial distribution. The results indicate that traditional villages are distributed along the Yellow River, with a linear clustering pattern particularly evident in the central region of Jia County; the overall level of symbiosis exhibits a spatial pattern of higher levels in the north and lower levels in the south, with uneven levels across various dimensions; the traditional villages are categorised into four symbiotic models: comprehensive advantage-led, cultural corridor-dependent, ecological and cultural tourism potential, and low-development conservation. Based on these categories, strategies are proposed to deepen the exploration of local culture, promote industrial integration and regional collaboration, prioritise ecological conservation and environmental restoration, and establish distinctive brands through the rational utilisation of surrounding resources. The research framework and conclusions of this paper provide methodological references and practical insights for the concentrated and contiguous protection of traditional villages, as well as for research on rural revitalisation and sustainable development.
(This article belongs to the Section Tourism, Culture, and Heritage)
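The AHP–TOPSIS evaluation used in this paper reduces to a standard ranking step once criterion weights are fixed. A minimal, illustrative TOPSIS sketch in Python follows; the function and variable names are our own, not the authors' code, and the weights would in practice come from AHP pairwise comparisons.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution (TOPSIS).

    matrix:  (alternatives x criteria) decision matrix
    weights: criterion weights (e.g. derived from AHP pairwise comparisons)
    benefit: True for criteria where larger is better, False otherwise
    """
    M = np.asarray(matrix, dtype=float)
    # Vector-normalise each criterion column, then apply the weights
    V = (M / np.linalg.norm(M, axis=0)) * np.asarray(weights, dtype=float)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti_ideal = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)       # distance to ideal point
    d_neg = np.linalg.norm(V - anti_ideal, axis=1)  # distance to anti-ideal point
    return d_neg / (d_pos + d_neg)                  # closeness coefficient in [0, 1]
```

An alternative that dominates on every criterion receives a closeness coefficient of 1 and a fully dominated one receives 0; the villages would then be ranked by this score.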

17 pages, 437 KB  
Review
A Solution of the Scalar Nonet Mass Puzzle
by Mihail Chizhov, Emanuil Chizhov, Daniela Kirilova and Momchil Naydenov
Particles 2026, 9(2), 44; https://doi.org/10.3390/particles9020044 (registering DOI) - 23 Apr 2026
Abstract
We present a short review dedicated to low-lying meson states. We present all meson nonets that consist of up, down, and strange light quarks. We consider the scalar nonet as a basic nonet. We work in the framework of the massless Nambu–Jona-Lasinio UR(3)×UL(3) quark model. The collective meson states are described through initially bare quark–antiquark pairs, whose condensates lead simultaneously to spontaneous breaking of the chiral and the flavour symmetry. After quantisation and the spontaneous breaking of the chiral symmetry, when quarks obtain nonzero constituent masses, they become dressed. We present an explanation of the inverse mass hierarchy of the low-lying nonet of the scalar mesons. The proposed explanation is based on symmetry principles. It is shown that, due to the flavour symmetry breaking, two isodoublets of K0*(700) mesons play the role of Goldstone bosons. It is also proven that there exists a solution with almost degenerate masses of the a0(980) and f0(980) mesons and a zero mass of the f0(500) meson. A short description of the physical properties of other meson nonets is provided. In particular, unique mass relations among the different nonets, which are experimentally confirmed, are presented.
21 pages, 1596 KB  
Article
Integration of Building Information Modelling and Economic Multi-Criteria Decision-Making with Neural Networks: Towards a Smart Renewable Energy Community
by Helena M. Ramos, Ana Paula Falcao, Praful Borkar, Oscar E. Coronado-Hernández, Francisco-Javier Sánchez-Romero and Modesto Pérez-Sánchez
Algorithms 2026, 19(5), 327; https://doi.org/10.3390/a19050327 - 23 Apr 2026
Abstract
This research introduces a novel methodology that combines Building Information Modelling (BIM) and Economic Multi-Criteria Decision-Making (EMCDM) with Neural Networks to optimize hybrid renewable energy systems in small communities. Its core aim is to improve sustainability, technical performance, and financial viability through integrated modelling and decision-making. The approach is applied to a hydropower site, evaluating five Scenarios (IDs 1–5) under a Community and Industry model. Financial benchmarks include a 10% Minimum Required Return and a 7-year payback period. ID3—hydropower, solar, and wind—proves most effective, with ANPV of €10,905 (wet) and €4501 (dry), and ROI of 155%/64%. Its ROIA/MRA Index peaks at 539%, and Payback/N ratios remain within acceptable limits (55%/96%). LCOE stays stable in average conditions (0.042–0.046 €/kWh), rising in dry years (0.07–0.10 €/kWh). Profitability differences primarily stem from demand and curtailment, rather than production costs. The NARX neural network reliably models SS% values from renewable inputs with low error across scenarios. The integrated BIM–EMCDM framework ensures transparent, sustainable, and risk-balanced energy system decisions for long-term autonomy.
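As a back-of-the-envelope companion to the LCOE figures quoted above, the standard discounted-cost definition of LCOE (discounted lifetime cost divided by discounted lifetime energy output) can be sketched as follows. The function name and inputs are illustrative assumptions, not taken from the paper.

```python
def lcoe(capex, opex_per_year, energy_kwh_per_year, rate, years):
    """Levelised cost of energy (currency per kWh).

    Sums capital cost plus discounted operating costs, and divides by
    the discounted energy produced over the project lifetime.
    """
    discount = [(1 + rate) ** -t for t in range(1, years + 1)]
    total_cost = capex + sum(opex_per_year * d for d in discount)
    total_energy = sum(energy_kwh_per_year * d for d in discount)
    return total_cost / total_energy
```

With a zero discount rate this collapses to total cost over total output, which is a quick sanity check on any implementation.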
41 pages, 2276 KB  
Article
How to Optimize Prefabricated Staircase Construction Cost Prediction? GAN-SHAP-MLP Hybrid Architecture: Mechanism and Verification
by Lei Zhang, Bowen Sun and Guangqing Li
Buildings 2026, 16(9), 1661; https://doi.org/10.3390/buildings16091661 - 23 Apr 2026
Abstract
Existing studies conduct general cost analyses for prefabricated components, yet structural heterogeneity results in distinct cost drivers. Most studies concentrate on the technical performance of prefabricated staircases, with insufficient investigation into dedicated cost-estimation methods. This study establishes a hybrid prediction framework integrating GAN-based data augmentation and SHAP-empowered Multilayer Perceptron (SHAP-MLP) modeling, using prefabricated straight staircases as empirical objects for multidimensional analysis. Total cost is classified into production, transportation, and on-site installation phases, followed by systematic screening of 33 influencing factors for predictive modeling. The Analytic Hierarchy Process (AHP), with a 1–9 scale, is adopted to quantify indicator weights and prioritize features. Triple verification (multi-expert consistency test, group opinion coordination test, and sensitivity analysis) removes five weakly correlated parameters to form a preliminary indicator system. Based on 240 original engineering data samples, the GAN generates 60 high-fidelity synthetic samples. Distribution consistency between synthetic and original data is validated via the Kolmogorov–Smirnov (KS) test, p-value verification, and kernel density estimation (KDE). SHAP interpretability analysis identifies four core determinants: prefabrication rate, total staircase area, standardization level, and number of floors. Eight low-impact parameters are excluded to optimize model input, leaving 20 validated indicators. The GAN-SHAP-MLP model maintains superior performance in testing, with a test-set RMSE of 49.538, representing improvements of 41.3%, 22.5%, and 25.7% over LSTM (89.33), CNN (67.59), and standard MLP (70.56), respectively. The difference between its test-set and overall R² is only 0.69%, significantly lower than 2.06% for LSTM and 5.47% for MLP. Empirical validation with real engineering cases from four different regions further confirms the model’s high prediction accuracy, with a minimum error of only 1.49%. The integration of data augmentation and interpretable deep learning provides a high-precision, interpretable cost prediction tool for prefabricated straight staircases, promoting methodological progress in construction economics.
(This article belongs to the Section Construction Management, and Computers & Digitization)
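The synthetic-versus-original distribution check mentioned above, the two-sample Kolmogorov–Smirnov test, reduces to the maximum gap between two empirical CDFs. A minimal NumPy sketch of the statistic (our own naming, not the authors' code):

```python
import numpy as np

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical CDFs, evaluated at every
    observed data point."""
    a, b = np.sort(sample_a), np.sort(sample_b)
    grid = np.concatenate([a, b])            # all observed values
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()
```

A statistic near 0 indicates the GAN samples are distributionally close to the originals; a value near 1 indicates essentially disjoint distributions. In practice the statistic would be converted to a p-value before accepting the synthetic data.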
28 pages, 1429 KB  
Article
Engineering Systems with Standards and Digital Models: Specifying Stakeholder Needs and Capabilities—MGOS
by Kevin MacG. Adams, Irfan Ibrahim and Steven L. Krahn
Systems 2026, 14(5), 458; https://doi.org/10.3390/systems14050458 - 23 Apr 2026
Abstract
This paper proposes a formal method and associated techniques for completing the ISO/IEC/IEEE Standard 15288 technical process 6.4.2—Stakeholder Needs and Requirements definition within the 15288-SysML Grid framework. The paper is a companion work to Engineering Systems with Standards and Digital Models: Development of a 15288-SysML Grid, which describes an engineering design method that supports the tenets of the Industry 4.0 paradigm. The formal method presented here is grounded using established constructs from systems science; specifically, the systems principles of hierarchy, emergence, requisite parsimony, minimum critical specification, and requisite saliency. The application of accepted principles ensures that stakeholders are able to objectively specify measurable criteria that can satisfy stakeholder needs and capabilities. The method (1) uses international standards for systems (e.g., ISO/IEC/IEEE 15288); (2) adopts the four fundamental aspects of system design supported by model-based systems engineering (MBSE); (3) invokes the international standard for the systems modeling language (SysML); and (4) adopts a hierarchical requirements tree that specifies Mission, Goals, Objectives, and Sub-objectives (MGOS) to provide the stakeholder-analysis process a means for articulating system-level engineering requirements. Utilization of the MGOS framework is intended to have a positive impact on the system design process by ensuring reproducibility, replicability, transparency, and generalization.
(This article belongs to the Special Issue Model-Based Systems Engineering (MBSE) for Complex Systems)
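Structurally, the MGOS requirements tree described above is an ordinary rooted hierarchy with four named levels. A minimal Python sketch of such a tree and a depth-first traversal follows; the types and names are illustrative assumptions, not drawn from the standard or the paper.

```python
from dataclasses import dataclass, field

@dataclass
class MgosNode:
    """One requirement in an MGOS hierarchy."""
    name: str
    level: str  # "Mission", "Goal", "Objective", or "Sub-objective"
    children: list = field(default_factory=list)

def flatten(node, depth=0):
    """Depth-first walk yielding (depth, level, name) for each requirement,
    preserving the Mission -> Goals -> Objectives -> Sub-objectives order."""
    yield depth, node.level, node.name
    for child in node.children:
        yield from flatten(child, depth + 1)
```

A traversal like this is one way to render the requirements tree as an indented outline for stakeholder review.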
20 pages, 8882 KB  
Article
Assessing Soil Vulnerability to Water Erosion Under Dam Releases Using a Multi-Criteria Approach: Case of the Sidi Aich Basin, Southwestern Tunisia
by Fatma Karaouli, Mongi Ben Zaied, Nadia Khelif, Zaineb Ali, Fethi Abdelli, Houda Besser, Latifa Dhaouedi and Mohamed Ouessar
Soil Syst. 2026, 10(5), 51; https://doi.org/10.3390/soilsystems10050051 - 23 Apr 2026
Abstract
Soil erosion is a significant environmental concern in arid regions, particularly in dam-regulated watersheds, where intermittent flows from sprinkler irrigation can exacerbate land degradation. This study assesses soil erosion susceptibility in the Sidi Aich watershed using a combined approach of the Revised Universal Soil Loss Equation (RUSLE) and the Analytic Hierarchy Process (AHP), enabling the integration of both regional characteristics and expert-driven weighting. The RUSLE model accounts for natural and human-induced factors, whereas AHP provides a hierarchical weighting system that highlights rainfall erosivity and the local impacts of dam-regulated discharges. Results show that 26.12% of the area falls into the very high susceptibility category, 25.45% into high, 23.91% into moderate, and 24.51% into low susceptibility. Model validation demonstrates satisfactory predictive performance, with Area Under the Curve (AUC) values of 0.85 for AHP and 0.78 for RUSLE. Overall, the findings emphasize the critical role of dam-controlled releases in increasing soil vulnerability, a factor that may not be fully captured when using RUSLE alone. By combining RUSLE and AHP, this research provides a more realistic and regionally tailored assessment of erosion risk, offering valuable guidance for watershed management and erosion mitigation strategies in arid environments.
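The RUSLE half of the pipeline above is a cell-wise product of five raster factors, after which losses are binned into susceptibility classes. A minimal NumPy sketch follows; the class thresholds here are arbitrary placeholders, not the study's values.

```python
import numpy as np

def rusle(R, K, LS, C, P):
    """RUSLE annual soil loss A = R * K * LS * C * P, applied cell-wise
    to co-registered raster layers (erosivity, erodibility, slope
    length/steepness, cover, and support practice)."""
    return R * K * LS * C * P

def classify(A, thresholds=(5.0, 15.0, 30.0)):
    """Bin soil loss into classes 0-3: low, moderate, high, very high.
    The thresholds are illustrative, not the paper's calibrated values."""
    return np.digitize(A, thresholds)
```

The area percentages reported in the abstract correspond to the share of cells falling into each of these bins.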

20 pages, 2578 KB  
Article
A Fuzzy Decision-Making Control Chart for Multicriteria Quality Evaluation in Industrial Processes
by Luis Fernando Villanueva-Jiménez, Rosa Jazmín Trasviña-Osorio, Juan De Anda-Suárez, Jose Luis Lopez Ramirez, Guillermo García-Rodríguez and José Ruíz-Tamayo
Appl. Sci. 2026, 16(9), 4111; https://doi.org/10.3390/app16094111 - 22 Apr 2026
Abstract
Quality evaluation in production systems represents a significant challenge in the manufacturing industry, particularly in environments where expert judgment plays a key role in managing the inherent uncertainty of the production system. This study proposes a fuzzy multicriteria decision-making control chart, termed Fuzzy Decision-Making Control Chart based on AHP-Extent and Triangular Fuzzy Numbers (FDMCC-AHPE). The method integrates expert knowledge through triangular fuzzy numbers and a Fuzzy Analytic Hierarchy Process supported by Extent Analysis to define fuzzy decision intervals for quality assessment and subsequently perform a structured analysis to classify the product within a control chart framework. In this framework, expert judgments expressed through linguistic evaluations are systematically translated into triangular fuzzy numbers and processed using FAHP–Extent Analysis, allowing the aggregation of subjective assessments within a structured mathematical decision model. The proposed method was validated in a tannery company, specifically in the retanning process. The industrial case study considers both qualitative criteria, such as surface defects and color uniformity, and quantitative process variables that include bath pH, treatment duration, and processing temperature. The results were compared with an empirical expert-based evaluation and a structured expert assessment supported by a multicriteria decision-making method. The findings demonstrate that the FDMCC-AHPE exhibits greater sensitivity in discriminating between quality states under uncertain evaluation conditions, particularly when samples involve complex evaluation conditions.
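The triangular-fuzzy-number machinery behind FAHP Extent Analysis centres on the degree-of-possibility comparison V(a ≥ b) between two fuzzy numbers. A minimal sketch of Chang's widely used formula follows, with our own naming; this is illustrative, not the authors' code.

```python
from dataclasses import dataclass

@dataclass
class TFN:
    """Triangular fuzzy number (lower bound, mode, upper bound)."""
    l: float
    m: float
    u: float

def possibility(a: TFN, b: TFN) -> float:
    """Degree of possibility V(a >= b) from extent analysis:
    1 when a's mode dominates, 0 when the supports are disjoint in b's
    favour, otherwise the height of the intersection of the two
    membership functions."""
    if a.m >= b.m:
        return 1.0
    if b.l >= a.u:
        return 0.0
    return (b.l - a.u) / ((a.m - a.u) - (b.m - b.l))
```

Aggregating these possibility degrees over all pairwise comparisons yields the crisp criterion weights that feed the control chart's decision intervals.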

35 pages, 13759 KB  
Article
BioLAMR: A Biomimetically Inspired Large Language Model Adaptation Framework for Automatic Modulation Recognition
by Yubo Mao, Wei Xu, Jijia Sang and Haoan Liu
Biomimetics 2026, 11(4), 288; https://doi.org/10.3390/biomimetics11040288 - 21 Apr 2026
Abstract
Automatic modulation recognition (AMR) is increasingly relevant to communication-sensing front ends in robotic and human–robot collaborative systems, where reliable spectrum awareness and adaptive wireless reception are desired. However, existing methods often degrade sharply at low signal-to-noise ratios (SNRs), and large language models (LLMs) are not natively compatible with continuous I/Q signals due to the inherent modality gap. We propose BioLAMR, a GPT-2 adaptation framework for AMR inspired by the auditory system’s parallel time–frequency processing and cortical hierarchy. The framework combines bio-inspired dual-domain feature extraction with parameter-efficient LLM adaptation. BioLAMR includes three components. First, a lightweight dual-domain fusion (LDDF) module extracts complementary time- and frequency-domain features and fuses them through channel and spatial attention. Second, a convolutional embedding module converts continuous I/Q signals into GPT-2-compatible sequences without discrete tokenization. Third, a hierarchical fine-tuning strategy updates only 8.9% of parameters to preserve pretrained knowledge while adapting to modulation recognition. Experiments on the RadioML2016.10a and RadioML2016.10b benchmarks show that BioLAMR achieves overall accuracies of 64.99% and 67.43%, outperforming the strongest competing method by 2.60 and 2.47 percentage points, respectively. Under low-SNR conditions, it reaches 36.78% and 38.14%, the best results among the compared methods. Ablation studies verify the contribution of each component. These results demonstrate that combining dual-domain signal modeling with parameter-efficient GPT-2 adaptation is an effective route to robust AMR in challenging wireless environments.
(This article belongs to the Section Locomotion and Bioinspired Robotics)
23 pages, 384 KB  
Article
Cues for a Grammar of Potentials in Markov Field Models of Computer Vision
by Luigi Burigana
Appl. Sci. 2026, 16(8), 4030; https://doi.org/10.3390/app16084030 - 21 Apr 2026
Abstract
Several well-known models in present-day computer vision take the form of Markov random fields. Any model of this kind amounts to a network of soft constraints, which are called potentials. These are the subject of this study. First, three kinds of information that are involved in any computer vision inference task are identified, namely, evidence, target, and principled information, and the concept of a variable as applied in this context is discussed. The general meaning of a potential is then described, which is a local soft constraint that aims to promote a corresponding desired condition. Following this, the formal structure of a potential is highlighted, which includes a set of parameters and an analytic frame, that is, a hierarchy of operations by which the value of the potential can be computed. The possible presence of a core in the analytic frame is considered, and two salient kinds of cores are distinguished and illustrated using examples from the literature: one involving a distance function and the other given by a probabilistic conditional. In summary, this contribution highlights substantial aspects of the semantics and syntax of potentials in Markov field models of computer vision, and constructs a framework within which these aspects may be consistently arranged and explained.
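A concrete instance of a potential with a distance-function core, in the sense described above, is the pairwise quadratic smoothness term common in Markov-field formulations; the field's energy is then the sum of such potentials over the graph's edges. A minimal illustrative sketch (our own naming, not the paper's notation):

```python
def quadratic_potential(x_i, x_j, w=1.0):
    """Pairwise potential with a distance-function core: w * (x_i - x_j)^2.
    Penalises neighbouring variables that violate the desired smoothness
    condition; w is the potential's parameter."""
    return w * (x_i - x_j) ** 2

def field_energy(values, edges, w=1.0):
    """Total energy of a Markov field: the sum of pairwise potentials
    over its edges, i.e. over the network of local soft constraints."""
    return sum(quadratic_potential(values[i], values[j], w) for i, j in edges)
```

Inference in such a model amounts to finding the assignment of target variables that minimises this energy given the evidence.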

42 pages, 10596 KB  
Systematic Review
Measurement and Modeling of Sustainable Food Choice and Purchasing Behavior: A Systematic Review of Methods and Models
by Tiago Negrão Andrade and Helena Maria André Bolini
Foods 2026, 15(8), 1442; https://doi.org/10.3390/foods15081442 - 21 Apr 2026
Abstract
Despite decades of methodological sophistication, research on sustainable food behavior remains critically limited in predicting actual purchases. This study aims to examine how methodological fragmentation across psychometric, econometric, and behavioral approaches affects the predictive validity of sustainable food choice and purchasing behavior. This integrative systematic review of 62 empirical studies across psychometric validation, discrete choice experiments (DCEs), trust and cognitive biases, and objective behavioral measurement diagnoses the structural disarticulation between these traditions as the primary cause of limited predictive validity. Findings reveal a pronounced inversion of the evidence hierarchy: while self-report studies report moderate attitude–behavior correlations (β ≈ 0.40–0.50), the only long-term study using objective scanner data demonstrates that this relationship collapses to a virtually null effect (β = 0.022), representing a 95.6% decay in predictive capacity. Psychometric instruments demonstrate strong structural validity but lack ecological validation against actual purchases. DCEs have evolved econometrically (from MNL to GMNL models), yet remain isolated from psychological theory and real-world validation. Critically, no reviewed study integrated validated scales, a DCE, and objective behavioral data within a single design. Key moderators—skepticism, halo effects, and affective heuristics—are systematically underoperationalized. To overcome this impasse, we propose Hybrid Choice Models (HCM) as the central tool to formally articulate latent attitudes, stated preferences, and observed behavior, enabling cumulative evidence to inform policy and market strategies with greater predictive accuracy. These findings indicate that predictive advances depend on integrating measurement paradigms to achieve ecologically valid and policy-relevant models of sustainable consumer behavior.
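The 95.6% decay figure quoted above follows directly from the two coefficients, assuming β = 0.50 (the upper end of the self-report range) is taken as the baseline:

```python
# Illustrative arithmetic only; the baseline choice of 0.50 is our assumption.
beta_self_report = 0.50  # upper end of the self-report range cited above
beta_scanner = 0.022     # objective scanner-data estimate
decay = 1 - beta_scanner / beta_self_report  # fraction of predictive capacity lost
```

Evaluating this gives decay = 0.956, i.e. the reported 95.6% loss.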

43 pages, 3956 KB  
Article
Meta-Identity and Algorithmic Mediation on Digital Platforms: A Comparative Analysis of AI–Human Content Categorization
by Allan Herison Ferreira, Ana Carolina Trevisan, Carla Maria Baptista, Rubén Ramos-Antón, Álvaro Augusto Comin, Henrique F. Carvalho, Silvestre Vendrell and Valéria Oliveira Sá
Societies 2026, 16(4), 132; https://doi.org/10.3390/soc16040132 - 20 Apr 2026
Abstract
This article examines how algorithmic classification systems participate in the production of meta-identities, understood as operational classificatory constructs that mediate the visibility, circulation, and interpretation of digital content and its authors. The study employs a mixed-methods design combining controlled analytical simulation with qualitative interpretive analysis, systematic thematic coding, and comparative statistical procedures. Empirical data are derived from the analysis of 150 audiovisual works produced in formative workshops and interpreted by four types of agents: authors, peers, specialized human analysts, and two Large Language Model-based AI systems (ChatGPT and Gemini). Interpretations were analyzed across micro, meso, and macro levels, using a consolidated system of thematic categories with hierarchical weighting and normalization procedures to ensure inter-agent comparability. The results demonstrate a systematic and structural divergence between human and algorithmic classifications. While human agents preserve semantic plurality and contextual anchoring, AI systems tend to reorganize thematic hierarchies through semantic aggregation and stabilization, thereby privileging broad, reusable categories. This process produces recurring, opaque classificatory patterns that serve as infrastructural references for subsequent algorithmic decisions. The article contributes methodologically by offering a replicable framework for comparing human and algorithmic regimes of meaning production in digital environments.
(This article belongs to the Special Issue Algorithm Awareness: Opportunities, Challenges and Impacts on Society)

50 pages, 56524 KB  
Review
Toward Digital Twins in 3D IC Packaging: A Critical Review of Physics, Data, and Hybrid Architectures
by Gourab Datta, Sarah Safura Sharif and Yaser Mike Banad
Electronics 2026, 15(8), 1740; https://doi.org/10.3390/electronics15081740 - 20 Apr 2026
Abstract
Three-dimensional integrated circuit (3D IC) packaging and heterogeneous integration have emerged as central pillars of contemporary semiconductor scaling. Yet, the multi-physics coupling inherent to stacked architectures, manifesting as thermal hot spots, warpage-induced stresses, and interconnect aging, demands monitoring and control capabilities that surpass traditional offline metrology. Although Digital Twin (DT) technology provides a principled route to real-time reliability management, the existing literature remains fragmented and frequently blurs the distinction between static multi-physics simulation workflows and truly dynamic, closed-loop twins. This critical review addresses these deficiencies through three main contributions. First, we clarify the Digital Twin hierarchy to resolve terminological ambiguity between digital models, shadows, and twins. Second, we synthesize three foundational enabling technologies. We examine physics-based modeling, emphasizing the shift from finite-element analysis (FEA) to real-time surrogates. We analyze data-driven paradigms, highlighting virtual metrology (VM) for inferring latent metrics. We then explore in situ sensing, which serves as the “nervous system” coupling the physical stack to its virtual counterpart. Third, beyond a descriptive survey, we outline a possible hybrid DT architecture that leverages physics-informed machine learning (e.g., PINNs) to help reconcile data scarcity with latency constraints. Finally, we outline a standards-aligned roadmap incorporating IEEE 1451 and UCIe protocols to support the transition from passive digital shadows toward more adaptive and fully coupled Digital Twin frameworks for 3D IC manufacturing and field operation.
35 pages, 1598 KB  
Review
Sensors and Mass Spectrometry Connection for Food Analysis: A Systematic Review of Methodological Synergies
by Fabiola Eugelio, Marcello Mascini, Federico Fanti, Sara Palmieri and Michele Del Carlo
Chemosensors 2026, 14(4), 100; https://doi.org/10.3390/chemosensors14040100 - 20 Apr 2026
Abstract
Background: Sensors and mass spectrometry (MS) are frequently used in combination for food safety and quality assessment, yet their functional integration lacks a formal methodological framework. This review categorizes the synergies between these technologies into distinct Relational Connections. Methodology: Following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, 155 original research articles published between 2015 and 2025 were systematically analyzed. Records were identified via the Scopus database within the food science domain. Experimental meta-data, including extraction protocols, instrumental configurations (ionization source, mass analyzer, cost tier), and chemometric strategies, were extracted to identify core methodological patterns. Statistical associations were quantified using chi-squared tests with Cramer’s V effect sizes. Results: Five Relational Connections were identified: (1) MS as reference for sensor validation (25.2%); (2) MS-sensor correlative analysis (10.3%); (3) MS quantifying data to train predictive sensor models (6.5%); (4) MS identifying targets for sensor detection (7.1%); and (5) MS enabling sensor classification models (51.0%). Technology pairing is governed by a three-level hierarchy: analyte polarity determines the ionization source (V = 0.69), required precision determines the mass analyzer (V = 0.64), and cost/availability constraints shape the practical integration strategy. Gas Chromatography (GC)-MS is predominantly coupled with Electronic Noses for volatile profiling (86% of classification studies), while Liquid Chromatography-Tandem Mass Spectrometry (LC-MS/MS) pairs with biosensors for contaminant analysis (74% of reference validation studies). Systematic analysis of the full pairing matrix reveals that 75% of theoretically possible MS-sensor combinations remain unexplored or underrepresented, identifying both technical boundaries and innovation frontiers. 
Discussion: The findings clarify the strategic logic behind technology pairings, demonstrating that MS provides the quantitative molecular data required for sensor training. The hierarchical decision framework and identification of underexplored pairings provide an evidence-based guide for designing future integrated food analysis systems. Full article
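The abstract above quantifies each association in the pairing hierarchy with a chi-squared test plus a Cramér's V effect size (e.g. V = 0.69 for analyte polarity vs. ionization source). For readers unfamiliar with that statistic, here is a minimal sketch of how such a value is computed from a contingency table; the function name `cramers_v` and the toy tables are illustrative, not the review's actual data or code.

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    """Chi-squared test and Cramér's V effect size for a contingency table.

    Cramér's V = sqrt(chi2 / (n * (min(rows, cols) - 1))), ranging from
    0 (no association) to 1 (perfect association).
    """
    table = np.asarray(table, dtype=float)
    # correction=False disables the Yates continuity correction so the
    # effect size matches the textbook formula for small 2x2 tables.
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    n = table.sum()
    v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
    return chi2, p, v

# Perfectly associated 2x2 table: every polar analyte paired with source A,
# every nonpolar analyte with source B (hypothetical counts).
chi2, p, v = cramers_v([[10, 0], [0, 10]])
print(f"V = {v:.2f}")  # V = 1.00

# No association: counts spread evenly across both sources.
chi2, p, v = cramers_v([[5, 5], [5, 5]])
print(f"V = {v:.2f}")  # V = 0.00
```

On this scale, the review's reported values (V = 0.69 and V = 0.64) indicate strong associations between analyte properties and instrument choice.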
52 pages, 3830 KB  
Article
Improving Quay Crane Productivity and Delay Management in Conventional Container Terminals Using Artificial Intelligence Tools
by George-Cosmin Partene, Florin Nicolae, Florin Postolache and Sorin Ionescu
J. Mar. Sci. Eng. 2026, 14(8), 749; https://doi.org/10.3390/jmse14080749 - 19 Apr 2026
Abstract
This study proposes an integrated artificial intelligence-based framework for modeling and predicting quay crane productivity and operational delays in conventional container terminals, addressing key limitations in the existing port analytics literature. The research introduces a novel dual-mode machine learning architecture that explicitly separates retrospective prediction (forecast mode) from pre-operational decision support (decision mode), addressing a critical gap in existing literature where predictive models are rarely aligned with real-world informational constraints. The framework is applied to a high-resolution, real-world dataset comprising ship-level operations over a three-year period (2023–2025), incorporating a structured representation of 27 delay types and multiple resource allocation variables. A multi-indicator modeling strategy is employed, simultaneously analyzing four productivity metrics (RQCP, GMPH, WBMPH, and NMPH), thus allowing for a systematic comparison of their structural sensitivities to delays, congestion, and equipment utilization. The results reveal a clear hierarchy of predictability and operational behavior: structurally driven indicators such as RQCP and GMPH exhibit high predictive stability, while delay-sensitive indicators such as NMPH display greater variability, reflecting real-time operational disruptions. Consistent model performance across both forecast and decision modes indicates that pre-operational variables carry substantial predictive value, supporting the framework's use for decision-making under uncertainty. Sensitivity analysis further reveals a critical nonlinear congestion threshold beyond which predictive accuracy degrades under extreme operational strain.
By combining multi-indicator productivity modeling, structured delay classification, and ensemble learning within an integrated analytical framework, this research advances both methodological and practical understanding of port operations, bridging predictive analytics and operational decision-making to improve resource allocation, delay handling, and terminal efficiency. Full article
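The core idea of the dual-mode architecture described above is that the forecast model may use all retrospective features, while the decision model is restricted to features knowable before operations begin. A minimal sketch of that separation is shown below on synthetic data; the feature names, the use of random forests, and the simulated signal are all illustrative assumptions, not the authors' dataset or implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical pre-operational features (known before berthing),
# e.g. planned crane allocation, call size, berth window.
pre_ops = rng.normal(size=(n, 3))
# Hypothetical retrospective features (known only after operations),
# e.g. realized delay minutes, observed yard congestion.
retro = rng.normal(size=(n, 2))
# Simulated productivity target depending on both feature groups.
y = pre_ops @ [1.0, 0.5, -0.3] + retro @ [0.8, 0.4] + rng.normal(scale=0.2, size=n)

X_full = np.hstack([pre_ops, retro])
Xf_tr, Xf_te, Xp_tr, Xp_te, y_tr, y_te = train_test_split(
    X_full, pre_ops, y, random_state=0
)

# Forecast mode: retrospective prediction using every available feature.
forecast = RandomForestRegressor(random_state=0).fit(Xf_tr, y_tr)
# Decision mode: pre-operational support, restricted to features
# available before the vessel is worked.
decision = RandomForestRegressor(random_state=0).fit(Xp_tr, y_tr)

print(f"forecast-mode R^2: {forecast.score(Xf_te, y_te):.2f}")
print(f"decision-mode R^2: {decision.score(Xp_te, y_te):.2f}")
```

On such data the forecast model scores higher because it also sees the realized delays, while the decision model's remaining accuracy illustrates the abstract's finding that pre-operational variables alone retain substantial predictive value.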
28 pages, 1083 KB  
Review
Molecular Biomarkers of Training Responses: A Systems Framework for Exercise Adaptation and Athlete Monitoring
by Dan Cristian Mănescu, Andreea Voinea, Camelia Daniela Plastoi, Alexandra Reta Iacobini, Alina Anca Vulpe, Ancuța Pîrvan, Corina Claudia Dinciu, Bogdan Iulian Vulpe, Cristian Băltărețu and Adrian Iacobini
Int. J. Mol. Sci. 2026, 27(8), 3601; https://doi.org/10.3390/ijms27083601 - 17 Apr 2026
Abstract
Exercise adaptation depends on overload that is resolved by recovery, yet the same biology becomes maladaptive when immune, endocrine, metabolic, and muscle-centered stress signals fail to normalize. Exercise-induced maladaptation represents a systems-level failure of biological resolution, with direct relevance to disease-like dysregulation. Functional overreaching, non-functional overreaching, and overtraining syndrome remain difficult to diagnose because no single biomarker provides adequate specificity, temporal stability, or clinical portability. This narrative review synthesizes human and mechanistic evidence across proteomics, transcriptomics, metabolomics, endocrine profiling, extracellular vesicles, and mitochondrial quality-control biology to define the molecular architecture most relevant to athlete monitoring. Across these layers, the most coherent signatures cluster in immune-acute-phase activation, redox-buffering strain, endocrine drift, altered substrate availability, excitation–contraction dysfunction, integrated stress-response signaling, and defects in autophagy–mitophagy and lysosomal remodeling. Three translational elements emerge from this synthesis: a systems-convergence model of recovery failure, a staged biomarker deployment hierarchy, and a provisional recovery failure index. The practical priority is therefore not a solitary marker, but serial phenotype-anchored multimarker panels that connect circulating signals with muscle-centered biology and support decision-making before prolonged recovery failure becomes entrenched. Full article
(This article belongs to the Special Issue Exercise in Health and Diseases: From the Molecular Perspectives)