Search Results (3,997)

Search Parameters:
Keywords = formal model

22 pages, 8782 KB  
Article
Craft as Pedagogy in Architectural Production: Labour, Technology and Non-Formal Learning
by Milinda Pathiraja
Soc. Sci. 2026, 15(3), 211; https://doi.org/10.3390/socsci15030211 - 23 Mar 2026
Abstract
In rapidly urbanising developing economies, construction activity frequently relies on informal and semi-skilled labour. This coincides with limited opportunities for systematic skill development, leading to persistent labour deskilling. While existing research has predominantly addressed these challenges through policy reform, industrialisation, or efficiency-driven technological models, less emphasis has been placed on the role of architectural design in shaping labour–technology relations on-site. This article adopts a constructivist perspective on technology to investigate how architectural design can serve as a socio-technical framework for non-formal labour upskilling within construction practice. Drawing upon qualitative case studies of two architectural projects in Sri Lanka—a suburban residential retrofit and a low-income rural housing prototype—this study analyses how design strategies such as systemisation, construction sequencing, material hybridity, and craft-based component detailing embed tacit learning within production processes. The findings demonstrate that craft, understood as a mode of tacit knowledge and on-the-job learning rather than as a stylistic or nostalgic response, can facilitate skill acquisition across diverse economic and technical contexts. By repositioning architectural design as an active mediator between technology and labour, this article contributes to debates within construction studies, social sciences, and architectural theory and proposes design-led construction strategies as a context-sensitive alternative to purely policy- or efficiency-driven approaches to labour development. Full article
30 pages, 2054 KB  
Article
Regime-Aware LightGBM for Stock Market Forecasting: A Validated Walk-Forward Framework with Statistical Rigor and Explainable AI Analysis
by Antonio Pagliaro
Electronics 2026, 15(6), 1334; https://doi.org/10.3390/electronics15061334 - 23 Mar 2026
Abstract
Can machine learning generate statistically validated alpha in equity markets while adapting to changing market conditions? This study addresses this question by proposing a regime-aware LightGBM framework conditioned on market regimes detected via a rolling Hidden Markov Model, eliminating look-ahead bias. Backtested on 51 NASDAQ-100 constituents (2015–2026), the strategy achieved a portfolio Sharpe ratio of 1.18 (95% CI: [0.53, 1.84]) and outperformed four baseline models. The key findings include the following: (i) cross-asset features (Bitcoin as a leading indicator) contribute the most predictive value; (ii) macroeconomic indicators outweigh traditional technical indicators for high-beta stocks; (iii) the model autonomously adapts its decision logic across regimes, shifting from mean reversion in bear markets to risk appetite monitoring in bull markets. While block bootstrap tests confirm statistical significance (p<0.001), the Deflated Sharpe Ratio (0.69) does not reach formal significance after multiple testing correction—an honest finding we report transparently. Full article
(This article belongs to the Special Issue Machine/Deep Learning Applications and Intelligent Systems)
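The look-ahead-free, regime-conditioned setup this abstract describes can be sketched as follows. This is a minimal illustration, not the author's code: `detect_regime` is a simple volatility-threshold stand-in for the paper's rolling Hidden Markov Model, and all names, windows, and thresholds are our assumptions.

```python
import numpy as np

def detect_regime(returns, vol_threshold=0.02):
    """Stand-in regime detector: label the window 'bear' when trailing
    volatility is high, 'bull' otherwise. The paper fits a rolling
    Gaussian HMM instead; this placeholder only illustrates the interface."""
    return "bear" if np.std(returns) > vol_threshold else "bull"

def walk_forward_signals(returns, window=60):
    """Walk-forward loop: at each step t the regime is inferred ONLY from
    data strictly before t, so no future information leaks into the signal."""
    signals = []
    for t in range(window, len(returns)):
        past = returns[t - window:t]   # strictly historical window
        signals.append(detect_regime(past))
    return signals

# Toy daily-return series for demonstration
rng = np.random.default_rng(0)
rets = rng.normal(0.0005, 0.01, size=300)
signals = walk_forward_signals(rets)
```

Because each regime label is computed only from the trailing window, no future observation can influence the signal at time t, which is the property the paper's rolling HMM design is meant to guarantee.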

27 pages, 2450 KB  
Article
Integrated Management of the Urban Water Cycle: A Synthesis of Impacts and Solutions from Source to Tap
by Nicolae Marcoie, Elena Iliesi, András-István Barta, Irina Raboșapca, Daniel Toma, Valentin Boboc, Cătălin-Dumitrel Balan and Bogdan-Marian Tofănică
Urban Sci. 2026, 10(3), 175; https://doi.org/10.3390/urbansci10030175 - 23 Mar 2026
Abstract
Urbanization fundamentally fractures the natural water cycle, leading to a cascade of interconnected problems including increased flood risk, degraded water quality, stressed groundwater resources, and inefficient distribution networks. Traditional, fragmented management approaches that address these issues in isolation have proven inadequate. This research argues for a paradigm shift towards an Integrated Urban Water Management (IUWM) framework anchored in the concept of the “river-aquifer-pipe network continuum”, treating these components as a single, dynamic hydrological and infrastructural entity. Drawing upon a series of detailed case studies from Eastern Romania, this paper synthesizes the systemic impacts of development across the entire urban water system. Evidence from the Prut, Olt, and Bahlui river basins demonstrates how channelization exacerbates flood peaks and leads to severe biochemical degradation. Hydrogeological modeling of the Gherăești-Bacău wellfield reveals the vulnerabilities of over-extraction, while analysis of the Iași water network highlights the challenge of water losses in the aging infrastructure. In response, a modern, multi-tool approach is consolidated into a practical, three-stage framework for action: Diagnose, Prescribe, and Optimize. This framework advocates for (1) a comprehensive diagnosis using a suite of predictive numerical models (a “digital twin”); (2) the prescription of foundational, nature-based solutions, such as floodplain restoration, to heal core ecological functions; and (3) the continuous optimization of engineered infrastructure using smart, real-time control technologies. The synthesis concludes that an integrated, data-driven, and collaborative approach is the only sustainable path forward. Future research should focus on formally coupling these diagnostic models to create true Digital Twins of urban water systems, an essential step towards building resilient, water-secure cities for the 21st century. Full article
(This article belongs to the Special Issue Water Resources Planning and Management in Cities (2nd Edition))

29 pages, 7118 KB  
Article
Improving Document Layout Analysis Using Synthetic Data Generation and Convolutional Models
by Olha Pronina, Tao Xia, Kyrylo Sheliah, Olena Piatykop, Vasily Efremenko and Elena Balalayeva
Appl. Sci. 2026, 16(6), 3089; https://doi.org/10.3390/app16063089 - 23 Mar 2026
Abstract
Document Layout Analysis (DLA) is a critical step in intelligent document processing and is essential for accurately reconstructing the hierarchical structure of pages. While modern convolutional neural networks exhibit high performance, their effectiveness heavily depends on the quality and representativeness of training data, limiting their application in scenarios where labeled datasets are scarce. This paper proposes a method for enhancing DLA through synthetic generation of training data. A formalized mathematical model for generating document layouts has been developed, allowing control over element placement density, sizes, and spatial distribution. An experimental study investigated the impact of various data generation strategies on the training of the YOLO11m model, including median and threshold-based element splitting as well as different block sampling schemes. The experiments showed that employing median element splitting combined with random sampling from a large shuffled pool of synthetic data yields consistent improvements of 2–4% across all key metrics: precision, recall, mAP@50, and mAP@50:95, as compared with simple data generation strategies. These results demonstrate that targeted optimization of the data preparation process can enhance the performance of convolutional models in DLA tasks without increasing architectural complexity. The practical applicability of the method is validated through integration into the MinerU system. Future research will focus on extending the proposed model to complex layouts in scientific journals, technical reports, and handwritten documents. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
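The idea of a generator that controls element placement density, sizes, and spatial distribution can be sketched as below. This is a simplified stand-in for the paper's formalized layout model, not the authors' implementation; the class set, page size, and rejection-sampling scheme are illustrative assumptions.

```python
import random

def generate_layout(page_w=800, page_h=1000, density=0.4, max_tries=200):
    """Generate axis-aligned, non-overlapping 'layout elements' until the
    requested fraction of the page area is covered (or tries run out).
    Returns (class_id, x, y, w, h) tuples, loosely mirroring a
    detection-style annotation; class ids 0-2 stand for e.g.
    text block / figure / table."""
    boxes, covered = [], 0
    for _ in range(max_tries):
        if covered / (page_w * page_h) >= density:
            break
        w = random.randint(80, page_w // 2)
        h = random.randint(30, page_h // 5)
        x = random.randint(0, page_w - w)
        y = random.randint(0, page_h - h)
        # Rejection sampling: keep the candidate only if it overlaps nothing
        if all(x + w <= bx or bx + bw <= x or y + h <= by or by + bh <= y
               for (_, bx, by, bw, bh) in boxes):
            boxes.append((random.randrange(3), x, y, w, h))
            covered += w * h
    return boxes
```

Raising `density` packs elements more tightly, which is the kind of knob the paper's generation strategies (median vs. threshold-based splitting, block sampling) tune when building training pools for the detector.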

44 pages, 2527 KB  
Article
Managing Uncertainty and Information Dynamics with Graphics-Enhanced TOGAF Architecture in Higher Education
by A’aeshah Alhakamy
Entropy 2026, 28(3), 361; https://doi.org/10.3390/e28030361 - 22 Mar 2026
Abstract
Adaptive learning at scale requires explicit handling of uncertainty and information flow across diverse educational technologies. This paper proposes a TOGAF-conformant enterprise architecture for the University of Tabuk (UT) that embeds entropy- and uncertainty-aware requirements from the outset and aligns them with institutional goals in teaching, research, and administration. Using the Architecture Development Method (ADM), we map information-theoretic requirements to architectural artifacts across the architecture vision, business, information systems, and technology domains; formally specify core entropy-informed observables, including predictive entropy, expected information gain, workflow variability entropy, and uncertainty hot-spot severity; and define semantic and metadata standards for their near-real-time computation. These indicators are positioned explicitly across the TOGAF domains: business architecture identifies where uncertainty matters, information systems architecture defines the computable data and application representations, technology architecture operationalizes secure and scalable computation, and later ADM phases use the resulting metrics for prioritization and governance. The architecture also establishes governance that ranks initiatives by their expected uncertainty reduction through Architecture Review Board (ARB) decision gates. We address three research questions: (R.Q.1) how to design a TOGAF-conformant architecture for UT that natively encodes uncertainty-aware requirements and aligns with institutional needs; (R.Q.2) how to integrate dispersed data, achieve semantic harmonization, and deliver analytics-ready streams that support information-theoretic indicators for personalization without delay; and (R.Q.3) how to embed IT demand planning in opportunities and solutions and migration planning using uncertainty reduction and expected information gain as prioritization criteria. 
The resulting architecture offers a university-wide foundation for adaptive learning: it unifies learner and system interaction data under governed schemas, supports low-latency analytics, and formalizes decision processes that treat uncertainty as a primary metric. Though learner-level operational validation is future work, the design establishes the technical and organizational foundations for responsible, large-scale deployment of entropy-driven learner modeling, content sequencing, and feedback optimization. Full article
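Two of the entropy-informed observables named above, predictive entropy and expected information gain, have standard information-theoretic forms, sketched here; the paper's exact operational variants and data schemas may differ.

```python
import math

def predictive_entropy(probs):
    """Shannon entropy H(p) = -sum p log2 p of a predictive distribution;
    higher values flag uncertainty in, e.g., a learner model's prediction."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def expected_information_gain(prior, posteriors, outcome_probs):
    """EIG = H(prior) - E[H(posterior)]: how much an observation (e.g. a
    quiz item) is expected to reduce uncertainty. posteriors[i] is the
    updated distribution if outcome i occurs, weighted by outcome_probs[i]."""
    return predictive_entropy(prior) - sum(
        w * predictive_entropy(p) for w, p in zip(outcome_probs, posteriors))
```

Ranking candidate interventions by expected information gain is one concrete way an Architecture Review Board could operationalize "prioritize by expected uncertainty reduction."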

31 pages, 629 KB  
Article
The One-Parameter Bounded p-Exponential Distribution: Properties, Inference, and Applications
by Hassan S. Bakouch, Hugo S. Salinas, Fernando A. Moala, Tassaddaq Hussain, Shaykhah Aldossari and Alanwood Al-Buainain
Mathematics 2026, 14(6), 1076; https://doi.org/10.3390/math14061076 - 22 Mar 2026
Abstract
We introduce the one-parameter bounded p-exponential distribution on (0, p+1), which includes the uniform model as a special case and converges pointwise to the exponential law as p → ∞. Closed-form expressions are derived for the CDF and PDF, the survival function, an explicit increasing-failure-rate hazard function, the quantile function (enabling inversion-based simulation), moments, and entropy, along with a constructive scaled beta or Kumaraswamy representation. We also establish stochastic ordering with respect to p in stop-loss and increasing convex order, formalizing how dispersion varies with the parameter while preserving the mean scale. Inference is discussed under parameter-dependent support, a non-regular setting, and we develop and compare several estimation procedures, including a likelihood-based boundary MLE, a variance-matching method-of-moments estimator, and Bayesian estimation under a gamma prior implemented via numerical quadrature or MCMC. Monte Carlo simulation studies evaluate finite-sample performance and interval behavior, and two real-world applications in survival and reliability analysis illustrate competitive goodness-of-fit relative to standard benchmark models. Full article
(This article belongs to the Special Issue New Advances in Mathematical Applications for Reliability Analysis)
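The abstract notes that the closed-form quantile function enables inversion-based simulation. The pattern is sketched below with the exponential quantile as a stand-in, since the p-exponential quantile itself is not reproduced in this listing; any distribution with a closed-form quantile Q plugs into the same sampler.

```python
import math
import random

def exponential_quantile(u, rate=1.0):
    """Quantile of Exp(rate): Q(u) = -ln(1-u)/rate. A stand-in for the
    paper's closed-form p-exponential quantile, which is not given here."""
    return -math.log1p(-u) / rate

def inverse_transform_sample(quantile, n, rng=random.random):
    """Inversion-based simulation: if U ~ Uniform(0,1), then Q(U) follows
    the distribution whose quantile function is Q."""
    return [quantile(rng()) for _ in range(n)]
```

Inversion sampling is attractive in the paper's non-regular, parameter-dependent-support setting because it needs only the quantile function, with no rejection step or density evaluation.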

22 pages, 12914 KB  
Article
Distribution-Preserving Latent Image Steganography via Conditional Optimal Transport and Theoretical Target Synthesis
by Kamil Woźniak, Marek R. Ogiela and Lidia Ogiela
Electronics 2026, 15(6), 1321; https://doi.org/10.3390/electronics15061321 - 22 Mar 2026
Abstract
We propose Distribution-Preserving Latent Steganography via Conditional Optimal Transport (DPL-COT), a coverless image steganography framework for latent diffusion models. Unlike classical cover-modifying schemes, DPL-COT embeds a bitstream directly into the initialization noise latent z_T ~ N(0, I) without model retraining. Our primary objective is high recoverability and a low bit error rate (BER) under deterministic inversion, which is inherently imperfect due to numerical discretization and VAE nonlinearity. To maximize decoding stability, we restrict embedding to the natural tails of the latent prior by selecting the largest-magnitude coordinates, thereby increasing the sign decision margin against inversion drift. To preserve distributional stealth, per-bit target values are analytically derived from truncated Gaussians matching the marginal distribution of the selected coordinates. Conditional 1D optimal transport is applied independently for each bit class, mapping every coordinate to its target value while preserving rank order. We generate 5000 stego images using a pretrained diffusion model and demonstrate a favorable capacity–reliability trade-off (e.g., 4916 bits/image with 0.473% mean BER) and strong robustness to JPEG compression (sub-1% mean BER at Q=60). Compared with LDStega, a recent LDM-based scheme reporting 99.28% clean-channel accuracy, DPL-COT achieves 99.53% at a comparable operating point and sustains above-99% accuracy under all tested JPEG quality factors. Latent-space tests further confirm negligible cover–stego distribution shift (mean KS_2 < 0.003, mean W_1 < 0.003), a property not formally addressed by prior methods. Full article
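The tail-embedding step, writing bits into the signs of the largest-magnitude latent coordinates, can be sketched as follows. This is a simplified illustration under our own assumptions: it omits the paper's truncated-Gaussian target synthesis and conditional 1D optimal transport, keeping only the sign-margin idea.

```python
import numpy as np

def embed_bits(z, bits):
    """Write each bit into the SIGN of one of the largest-magnitude
    coordinates of the Gaussian latent z. Large magnitudes give a wide
    sign-decision margin against inversion drift. (The paper additionally
    reshapes magnitudes toward truncated-Gaussian targets via a per-bit-class
    optimal transport map, which this sketch omits.)"""
    z = z.copy()
    idx = np.argsort(np.abs(z))[-len(bits):]          # tail coordinates
    signs = np.where(np.array(bits) == 1, 1.0, -1.0)  # bit 1 -> +, bit 0 -> -
    z[idx] = np.abs(z[idx]) * signs
    return z, idx

def extract_bits(z, idx):
    """Decode: a positive sign reads as 1, negative as 0."""
    return [1 if z[i] > 0 else 0 for i in idx]
```

Because the selected coordinates sit far out in the tails, a small inversion error cannot flip their signs, which is why decoding stays stable under imperfect deterministic inversion.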
17 pages, 1867 KB  
Article
The Kaiona Framework: Centering Hawaiian and Pasifika Community in Defining, Measuring, and Promoting Health and Well-Being
by Kenny S. Ferenchak, Blane K. Garcia, J. Kukui Maunakea-Forth, Chelsey V. Jay, Isaiah Pule, Eric Enos, Kay L. Fukuda, Asia Engle, C. Kamalani Cruz, Myna Keleb, Angelica Raza-Furtado, Alika Spahn Naihe, Andrew Aoki, Faith Ewaliko, Uʻilani O. N. Schnackenberg, Kevin M. C. D. Akiyama, Ariel Makana Panui, Kyle Kaliko Chang and May Okihiro
Int. J. Environ. Res. Public Health 2026, 23(3), 402; https://doi.org/10.3390/ijerph23030402 - 22 Mar 2026
Abstract
The place and people of Waiʻanae, Hawaiʻi, are rich in connection with ʻāina (natural environment) and culture. Counter to this strengths-based approach, metrics and narratives imposed by outside systems assess many communities like ours as “sick,” “poor,” or “unwell.” This paper details our community’s approach to defining “well-being” around the values specific to our place, overseen by a council of community leaders with decades of experience supporting youth. The development was a mixed methods process including formal focus groups, informal community conversations, review of existing models, and collaboration with a professional artist. Centering community was the priority through each phase, engaging youth, parents, cultural practitioners, healthcare providers, and educators. Our community built the Kaiona Framework around the moʻolelo (traditional story) of Kaiona who helps the lost find home through empathy and compassion. Well-being is grounded in connection to, in relationship with, and in service to ʻāina. The child is at the center of our work, but inseparable from the family, community, and wider nation of people. Wellness comprises four values vital to our community: mauli ola, a balanced state of physical, mental, emotional, spiritual, and environmental health; waiwai, abundance and prosperity; pilina, mutually sustaining relationships; and ea, self-determination and agency. Full article
24 pages, 1238 KB  
Article
Integration of Interval Temporal Logic in an Expression Evaluator
by Francisco Morero-Peyrona, Javier Gutiérrez-Rodríguez and Manuel Mejías-Risoto
Appl. Sci. 2026, 16(6), 3049; https://doi.org/10.3390/app16063049 - 21 Mar 2026
Abstract
This paper presents NAXE, an expression evaluator framework for concurrent, resilient, and temporally aware applications, particularly in IoT domains. The key contribution is the principled integration of Interval Temporal Logic as a first-class feature alongside standard arithmetic and Boolean operations. To enhance robustness, bidirectional lazy evaluation extends short-circuit semantics to yield determinate results even with indeterminate operands. A left-to-right chaining syntax and Quantified Expressions reduce cognitive load and improve accessibility, particularly for non-programmers. The framework’s asynchronous evaluation model uses callbacks over an Abstract Syntax Tree with formal operational semantics specified through EBNF. Validation combines (i) formal correctness proofs, (ii) empirical validation (over 300 unit tests), (iii) real-world deployment in a hotel IoT system and (iv) a pilot study (n = 36). This integration advances expression evaluation by supporting interval-temporal operators, determinate outcomes under indeterminate operands, quantifiers and user-oriented syntax in a single expression-evaluation design. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
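The claim that evaluation stays determinate even with indeterminate operands follows the familiar Kleene three-valued pattern: False AND unknown is determinately False, and True OR unknown is determinately True, regardless of which side is unknown ("bidirectional"). A minimal sketch, with our own names rather than NAXE's actual API:

```python
UNKNOWN = None  # stand-in for an indeterminate operand

def and3(a, b):
    """Kleene three-valued AND: determinate whenever EITHER side forces
    the result, regardless of operand order."""
    if a is False or b is False:
        return False
    if a is True and b is True:
        return True
    return UNKNOWN

def or3(a, b):
    """Kleene three-valued OR: True if either side is True."""
    if a is True or b is True:
        return True
    if a is False and b is False:
        return False
    return UNKNOWN
```

Ordinary short-circuiting only skips the right operand; the bidirectional variant also resolves expressions whose *left* operand is unavailable, e.g. a sensor that has not yet reported.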

34 pages, 2031 KB  
Article
Heritage 4.0: How Applied 3D Technologies and Digital Twins Are Redefining Cultural Preservation Beyond Replication
by Antreas Kantaros, Theodore Ganetsos, Stavroula Nakou and Nikolaos Laskaris
Heritage 2026, 9(3), 123; https://doi.org/10.3390/heritage9030123 - 21 Mar 2026
Abstract
This work examines how digital technologies, particularly 3D imaging, additive manufacturing, and digital twins, contribute to a more interactive and process-oriented understanding of cultural preservation. Building on practical experience with museum scanning and 3D reproduction, the study introduces the Heritage 4.0 Cycle, a conceptual framework that structures digital heritage management into four iterative phases: Capture, Curate, Connect, and Co-create. The model integrates technological, ethical, and social aspects of preservation, describing how cultural heritage operates as a living system supported by data, interpretation, and participation. Findings indicate that 3D technologies function as mediators between tangible and intangible heritage, promoting inclusivity, collaborative learning, and sustainable engagement. The framework aligns digital preservation practices with broader objectives of education, innovation, and community development. By formalizing Heritage 4.0 into a structured and iterative framework, this study contributes a transferable model that supports sustainable and smart cultural ecosystems by aligning digital documentation, ethical curation, participatory engagement, and digital twin-enabled connectivity within a coherent heritage management strategy. Full article

26 pages, 887 KB  
Article
Using Safety Accountability to Enhance Construction Safety Performance: The Mediating Roles of Safety Monitoring and Safety Learning Under Inclusive Leadership
by Mohamed Mohamed and Benard Vetbuje
Buildings 2026, 16(6), 1244; https://doi.org/10.3390/buildings16061244 - 21 Mar 2026
Abstract
Safety performance remains a persistent challenge in the construction industry due to hazardous working conditions, dynamic site environments, and complex organizational structures. Despite regulatory advances and technical safety controls, accident rates remain high, suggesting that formal mechanisms alone are insufficient. Addressing this gap, this study examines safety accountability as a central organizational mechanism and investigates how it influences construction workers’ safety performance through behavioral processes and leadership conditions. Drawing on accountability theory and social learning theory, we propose a moderated parallel mediation model in which safety monitoring and safety learning function as mediators, while inclusive leadership behavior serves as a contextual moderator. Data were collected from 629 construction workers employed in large-scale projects in Istanbul and Ankara, Türkiye, using a two-wave survey design to mitigate common method bias. Hypotheses were tested using confirmatory factor analysis and Hayes’ PROCESS macro. The results indicate that safety accountability does not exert a significant direct effect on safety performance; rather, its influence is fully transmitted through safety monitoring and safety learning, with monitoring emerging as the stronger mediating mechanism. Moreover, inclusive leadership behavior significantly strengthens the accountability-driven pathways leading to improved safety outcomes. By integrating accountability structures, behavioral processes, and leadership context, this study advances construction safety research and provides evidence-based guidance for enhancing occupational safety performance in high-risk construction environments. Full article
(This article belongs to the Special Issue Safety Management and Occupational Health in Construction)

35 pages, 743 KB  
Systematic Review
Affective Intelligent Systems in Healthcare: A Systematic Review
by Analúcia Schiaffino Morales, Thiago de Luca Reis, Alison R. Panisson, Fabrício Ourique and Iwens G. Sene
Technologies 2026, 14(3), 188; https://doi.org/10.3390/technologies14030188 - 20 Mar 2026
Abstract
Objectives: To investigate the current state of affective computing in healthcare, focusing on its application contexts, algorithmic trends, and the technical–ethical duality involving data privacy and security. Methods and Results: A systematic review was conducted in two phases (2013–2025) following PRISMA guidelines. A total of 170 peer-reviewed articles were selected from PubMed, IEEE Xplore, Scopus, and Web of Science based on predefined inclusion and exclusion criteria, with the sample restricted to full-text studies in English addressing affective computing in healthcare. No formal risk-of-bias tool was applied due to the computational nature of the studies, and the findings were synthesized descriptively. Discussion: The findings reveal a clear shift from classical machine learning (e.g., SVM, k-NN) toward deep learning and hybrid architectures such as CNN–LSTM and attention-based models for processing complex physiological signals. Recent years have shown a growing interest in multimodal data fusion and privacy-preserving mechanisms such as homomorphic encryption. Evidence remains limited by methodological heterogeneity and inconsistent reporting across studies. A significant gap persists in regulatory compliance, as 57% of recent publications do not adequately address data security or ethical risks associated with sensitive biometric footprints. Conclusions: Although affective computing has reached a certain level of technical maturity, future research must prioritize lightweight, secure, and privacy-by-design architectures to enable ethically aligned and trustworthy deployment in real-world healthcare scenarios. Full article

15 pages, 671 KB  
Article
Model Checking in Federated Learning-Based Smart Advertising
by Rasool Seyghaly, Jordi Garcia and Xavi Masip-Bruin
J. Sens. Actuator Netw. 2026, 15(2), 29; https://doi.org/10.3390/jsan15020029 - 20 Mar 2026
Abstract
As social networks continue to expand, smart advertising increasingly depends on machine learning to deliver personalized and effective advertisements. Federated Learning (FL) is a distributed learning paradigm that supports privacy-preserving advertising by training models locally while avoiding direct sharing of raw user data. However, ensuring the correctness, reliability, and operational robustness of FL-driven smart advertising systems remains a significant challenge, particularly in distributed and user-facing environments. In this study, we investigate the use of model checking as a formal verification technique for validating key properties of an FL-based smart advertising workflow in social networks. We combine a structured finite-state modeling approach with Linear Temporal Logic (LTL) specifications and model-checking tools to assess correctness, availability, and baseline privacy requirements. Using controlled simulation-based configurations, we show that, for a setup with 100 users and 20 edge servers, the system delivers advertisements to all users and the global model successfully processes 200 out of 200 requests. We further analyze verification overhead through detection-time measurements, observing an increase in average detection time from 10.05 s to 11.98 s as the number of users rises from 20 to 100. These results indicate that the proposed framework can provide practical assurance for FL-enabled smart advertising workflows, support more reliable deployment in distributed intelligent systems, and improve trustworthiness in real advertising applications. Full article
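The delivery property checked above ("advertisements are delivered to all users") is, in LTL terms, a reachability/liveness condition of the form F all_served. A toy explicit-state version can be sketched in plain code; this is a stand-in for what a real model checker such as SPIN or NuSMV would verify, and the transition model and names are our assumptions.

```python
from collections import deque

def eventually_all_served(n_users, step):
    """Explore the reachable state space of a toy delivery model (a state is
    the set of users already served) and check that the all-served state is
    reached, rejecting runs that deadlock beforehand. This approximates the
    LTL check F(all_served) on a finite, acyclic model."""
    start = frozenset()
    goal = frozenset(range(n_users))
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        succs = step(state, n_users)
        if not succs and state != goal:
            return False                 # deadlocked before serving everyone
        for s in succs:
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return goal in seen

def serve_one(state, n_users):
    """Transition relation: serve any one not-yet-served user."""
    return [state | {u} for u in range(n_users) if u not in state]
```

A dedicated model checker would additionally handle cycles and fairness, which the finite acyclic exploration here sidesteps; the sketch only conveys how a delivery workflow becomes a finite-state model with a checkable temporal property.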

26 pages, 7401 KB  
Article
Local Knowledge Mining of Architectural Heritage Semantic Fragments Based on Knowledge Graph Alignment
by Qifan Yao, Jingheng Chen and Yingran Qu
Buildings 2026, 16(6), 1233; https://doi.org/10.3390/buildings16061233 - 20 Mar 2026
Abstract
In the field of digital architectural heritage, the mining of tacit local knowledge embedded in architectural heritage is considered essential for the preservation, inheritance, and application of regional architectural characteristics. Local knowledge can be formally represented through semantic models, by which the automated mining of tacit information can be facilitated. However, due to the incomplete preservation of physical buildings and the fragmented nature of historical records, local knowledge is often represented as semantic fragments. Consequently, existing semantic models are still challenged in terms of knowledge integration and reasoning. In this study, a knowledge graph was developed for representing local knowledge, in which fragmented local semantics were aligned at both the ontological and entity levels. Subsequently, implicit local knowledge mining is achieved through meta-path centrality propagation combined with expert evaluation on a graph visualization platform. The method was applied to eight historical buildings in a case study. The knowledge graph quality assessment results indicate excellent ontology utilization and property utilization. The knowledge mining results demonstrate that graph-based expert evaluation successfully enables knowledge Feature Ranking and knowledge Extinction Warning. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)

24 pages, 427 KB  
Review
A Survey on Recent Advances in the Integration of Discrete Event Systems and Artificial Intelligence
by Jie Ren, Ruotian Liu, Agostino Marcello Mangini and Maria Pia Fanti
Appl. Sci. 2026, 16(6), 3000; https://doi.org/10.3390/app16063000 - 20 Mar 2026
Abstract
The increasing complexity and uncertainty of modern discrete event systems (DESs) challenge traditional model-based control approaches, while artificial intelligence (AI) techniques offer powerful data-driven decision-making capabilities but lack formal guarantees. This review surveys recent research on the integration of AI with DES and supervisory control theory. Following a systematic literature mapping methodology, the literature is organized using a taxonomy based on three orthogonal perspectives: control and decision paradigm, system capability and property, and application and operational objectives. The review highlights how learning-based methods enhance adaptability and performance in DES, while also exposing persistent challenges related to safety, nonblocking behavior, data efficiency, and interpretability. By structuring existing approaches and identifying open issues, this review provides a coherent overview of the current research landscape and outlines key directions for future work on AI-enabled DES. Full article
(This article belongs to the Special Issue Modeling and Control of Discrete Event Systems)
