Search Results (1,986)

Search Parameters:
Keywords = rule construction

27 pages, 4350 KB  
Article
Reduced-Order Legendre–Galerkin Extrapolation Method with Scalar Auxiliary Variable for Time-Fractional Allen–Cahn Equation
by Chunxia Huang, Hong Li and Baoli Yin
Fractal Fract. 2026, 10(2), 83; https://doi.org/10.3390/fractalfract10020083 - 26 Jan 2026
Abstract
This paper presents a reduced-order Legendre–Galerkin extrapolation (ROLGE) method combined with the scalar auxiliary variable (SAV) approach (ROLGE-SAV) to numerically solve the time-fractional Allen–Cahn equation (tFAC). First, the nonlinear term is linearized via the SAV method, and the linearized system derived from this SAV-based linearization is time-discretized using the shifted fractional trapezoidal rule (SFTR), resulting in a semi-discrete unconditionally stable scheme (SFTR-SAV). The scheme is then fully discretized by incorporating Legendre–Galerkin (LG) spatial discretization. To enhance computational efficiency, a proper orthogonal decomposition (POD) basis is constructed from a small set of snapshots of the fully discrete solutions on an initial short time interval. A reduced-order LG extrapolation SFTR-SAV model (ROLGE-SFTR-SAV) is then implemented over a subsequent extended time interval, thereby avoiding redundant computations. Theoretical analysis establishes the stability of the reduced-order scheme and provides its error estimates. Numerical experiments validate the effectiveness of the proposed method and the correctness of the theoretical results. Full article
(This article belongs to the Section Numerical and Computational Methods)
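The reduced-order step above hinges on extracting a POD basis from early-time snapshots. A minimal sketch of that step, assuming NumPy and a generic snapshot matrix; the SFTR-SAV time stepping and Legendre–Galerkin assembly are not reproduced:

```python
# Snapshot-POD sketch: build a reduced basis from early-time solution snapshots.
import numpy as np

def pod_basis(snapshots: np.ndarray, energy: float = 0.999) -> np.ndarray:
    """Return the leading POD modes capturing `energy` of the snapshot energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1   # smallest rank reaching the threshold
    return U[:, :r]                             # reduced basis, shape (n_dof, r)

# Toy usage: 200 dofs, 20 snapshots of a slowly moving smooth front.
x = np.linspace(0, 1, 200)
snaps = np.stack([np.tanh((x - 0.3 - 0.02 * k) / 0.1) for k in range(20)], axis=1)
Phi = pod_basis(snaps)
print("reduced dimension:", Phi.shape[1])
# A reduced-order solution is then sought as u_r = Phi @ a, with `a` evolved by
# the same time scheme projected onto the reduced space.
```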

31 pages, 706 KB  
Article
Applying Action Research to Developing a GPT-Based Assistant for Construction Cost Code Verification in State-Funded Projects in Vietnam
by Quan T. Nguyen, Thuy-Binh Pham, Hai Phong Bui and Po-Han Chen
Buildings 2026, 16(3), 499; https://doi.org/10.3390/buildings16030499 - 26 Jan 2026
Abstract
Cost code verification in state-funded construction projects remains a labor-intensive and error-prone task, particularly given the structural heterogeneity of project estimates and the prevalence of malformed codes, inconsistent units of measurement (UoMs), and locally modified price components. This study evaluates a deterministic GPT-based assistant designed to automate Vietnam’s regulatory verification. The assistant was developed and iteratively refined across four Action Research cycles. The system also enforces strict rule sequencing and dataset grounding via Python-governed computations. Rather than relying on probabilistic or semantic reasoning, the system performs strictly deterministic checks on code validity, UoM alignment, and unit price conformity in material (MTR), labor (LBR), and machinery (MCR), given the provincial unit price books (UPBs). Deterministic equality is evaluated either on raw numerical values or on values transformed through explicitly declared, rule-governed operations, preserving auditability without introducing tolerance-based or inferential reasoning. A dedicated exact-match mechanism, activated only when a code is invalid, enables the recovery of typographical errors when a project item’s full price vector exactly matches a normative entry. Using twenty real construction estimates (16,100 rows) and twelve controlled error-injection cases, the study demonstrates that the assistant executes verification steps with high reliability across diverse spreadsheet structures, avoiding ambiguity and maintaining full auditability. Deterministic extraction and normalization routines facilitate robust handling of displaced headers, merged cells, and non-standard labeling, while structured reporting provides line-by-line traceability aligned with professional verification workflows. Practitioner feedback confirms that the system reduces manual tracing effort, improves evaluation consistency, and supports documentation compliance while preserving human judgment. This research contributes a framework for large language model (LLM)-orchestrated verification, demonstrating how Action Research can align AI tools with domain expectations. Furthermore, it establishes a methodology for deploying LLMs in safety-critical and regulation-driven environments. Limitations—including narrow diagnostic scope, unlisted quotation exclusion, single-province UPB compliance, and sensitivity to extreme spreadsheet irregularities—define directions for future deterministic extensions. Overall, the findings illustrate how tightly constrained LLM configurations can augment, rather than replace, professional cost verification practices in public-sector construction. Full article
(This article belongs to the Special Issue Knowledge Management in the Building and Construction Industry)
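The exact-match recovery rule lends itself to a compact illustration. A hedged sketch, with a hypothetical unit price book and field layout that are not the authors’ actual schema:

```python
# Exact-match recovery: when a cost code is invalid, propose the normative code
# whose full (MTR, LBR, MCR) price vector equals the item's vector exactly.
from typing import Optional

# Hypothetical provincial unit price book: code -> (material, labor, machinery)
UPB = {
    "AB.11110": (120_000, 45_000, 8_000),
    "AB.11120": (135_000, 45_000, 8_000),
}

def recover_code(invalid_code: str, price_vector: tuple) -> Optional[str]:
    """Return a unique normative code matching the full price vector, else None."""
    matches = [code for code, vec in UPB.items() if vec == price_vector]
    return matches[0] if len(matches) == 1 else None  # ambiguity -> no recovery

# A typo "AB.1111O" whose prices match AB.11110 is recovered deterministically;
# no tolerance-based or semantic inference is involved.
print(recover_code("AB.1111O", (120_000, 45_000, 8_000)))  # -> AB.11110
```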

30 pages, 6189 KB  
Article
Shear Properties and User Material Modelling of Sandwich Panel Cores for Marine Structures
by Davor Bolf, Albert Zamarin, Marino Brčić and Domagoj Vrtovšnik
Appl. Sci. 2026, 16(2), 1141; https://doi.org/10.3390/app16021141 - 22 Jan 2026
Viewed by 40
Abstract
In the design of marine structures, structural design remains a critical activity, heavily guided by classification society rules that define dimensions of the structural elements. With the development of increasingly complex structures and the increasing use of composite materials in ship structural design, accurate knowledge of material properties, particularly those of sandwich panel cores, is essential for direct calculations. This article presents experimental data and numerical analysis results for the PVC core of sandwich panels, the selection of appropriate material models from the LS-DYNA standard material database, and the development of a user-defined material model to accurately capture the physical behaviour of foam cores in sandwich constructions. The comprehensive dataset obtained from polymer foam tests is made publicly available to support future structural calculations. Full article

23 pages, 426 KB  
Article
Creating Dialogic Spaces in STEM Education: A Comparative Study of Ground Rules
by Imogen Casebourne, Nigel Calder, Kevin Martin, Kate Rhodes and Cynthia James
Educ. Sci. 2026, 16(1), 165; https://doi.org/10.3390/educsci16010165 - 21 Jan 2026
Viewed by 94
Abstract
This article reports on a comparative case study that examined the ground rules used to facilitate a dialogic space in two discrete and diverse research studies: Year 5 & 6 children learning to code with ScratchMaths as part of their mathematics programmes, and crop farmers in rural East Africa developing their practice through various communications. The intention was to see if there were common actions or principles important for the establishment of ground rules in dialogic spaces in general. Understanding the nature of dialogic space has become increasingly important in many areas of education. STEM subjects, particularly when integrated, frequently involve collaborative interaction, and utilise a dialogical approach. Some initial aspects of ground rules were collaboratively identified, with both studies then independently analysed to identify emerging themes related to these ground rules. Several key elements emerged: developing the processes for interaction and communication; developing trust between participants; developing respectful dialogue; teacher roles; and facilitating collaborative work and the co-construction of meaning. The comparative case study suggested that these were important for other education work when establishing dialogic space. Full article

24 pages, 1137 KB  
Article
Detecting TLS Protocol Anomalies Through Network Monitoring and Compliance Tools
by Diana Gratiela Berbecaru and Marco De Santo
Future Internet 2026, 18(1), 62; https://doi.org/10.3390/fi18010062 - 21 Jan 2026
Viewed by 52
Abstract
The Transport Layer Security (TLS) protocol is widely used nowadays to create secure communications over TCP/IP networks. Its purpose is to ensure confidentiality, authentication, and data integrity for messages exchanged between two endpoints. In order to facilitate its integration into widely used applications, the protocol is typically implemented through libraries, such as OpenSSL, BoringSSL, LibreSSL, WolfSSL, NSS, or mbedTLS. These libraries encompass functions that execute the specialized TLS handshake required for channel establishment, as well as the construction and processing of TLS records, and the procedures for closing the secure channel. However, these software libraries may contain vulnerabilities or errors that could potentially jeopardize the security of the TLS channel. To identify flaws or deviations from established standards within the implemented TLS code, a specialized tool known as TLS-Anvil can be utilized. This tool also verifies the compliance of TLS libraries with the specifications outlined in the Request for Comments documents published by the IETF. TLS-Anvil conducts numerous tests with a client/server configuration utilizing a specified TLS library and subsequently generates a report that details the number of successful tests. In this work, we exploit the results obtained from a selected subset of TLS-Anvil tests to generate rules used for anomaly detection in Suricata, a well-known signature-based Intrusion Detection System. During the tests, TLS-Anvil generates .pcap capture files that report all the messages exchanged. Such files can be subsequently analyzed with Wireshark, allowing for a detailed examination of the messages exchanged during the tests and a thorough understanding of their structure on a byte-by-byte basis. Through the analysis of the TLS handshake messages produced during testing, we develop customized Suricata rules aimed at detecting TLS anomalies that result from flawed implementations within the intercepted traffic. Furthermore, we describe the specific test environment established for the purpose of deriving and validating certain Suricata rules intended to identify anomalies in nodes utilizing a version of the OpenSSL library that does not conform to the TLS specification. The rules that delineate TLS deviations or potential attacks may subsequently be integrated into a threat detection platform supporting Suricata. This integration will enhance the capability to identify TLS anomalies arising from code that fails to adhere to the established specifications. Full article
(This article belongs to the Special Issue DDoS Attack Detection for Cyber–Physical Systems)
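One way to see the kind of deterministic, byte-level check such Suricata rules encode is a record-header inspection. A minimal Python sketch following RFC 8446’s record layout; the rule logic is illustrative, not one of the paper’s derived rules:

```python
# Flag TLS record headers whose fields deviate from what the spec allows.
ALLOWED_VERSIONS = {(3, 1), (3, 3)}  # 0x0301 (legacy compat) and 0x0303 (TLS 1.2)

def check_record(record: bytes) -> list[str]:
    anomalies = []
    if len(record) < 5:
        return ["truncated record header"]
    content_type, major, minor = record[0], record[1], record[2]
    length = int.from_bytes(record[3:5], "big")
    if content_type not in (20, 21, 22, 23):   # CCS, alert, handshake, app data
        anomalies.append(f"unknown content type {content_type}")
    if (major, minor) not in ALLOWED_VERSIONS:
        anomalies.append(f"non-conforming record version {major}.{minor}")
    if length > 2**14 + 256:                   # max TLSCiphertext length
        anomalies.append(f"record length {length} exceeds limit")
    return anomalies

# A handshake record claiming version 3.5 is flagged:
print(check_record(bytes([22, 3, 5, 0, 16]) + b"\x00" * 16))
```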

34 pages, 8353 KB  
Article
Scheduling of the Automated Sub-Assembly Welding Line Based on Improved Two-Layer Fruit Fly Optimization Algorithm
by Wenlin Xiao and Zhongqin Lin
Appl. Sci. 2026, 16(2), 1085; https://doi.org/10.3390/app16021085 - 21 Jan 2026
Viewed by 73
Abstract
Faced with the contradiction between growing demand and labor-intensive manufacturing modes, and in the current era of rapid development of informatization and artificial intelligence, improving manufacturing efficiency by means of automated manufacturing equipment has become a recognized development direction for most shipyards. This trend is particularly evident in the manufacturing of sub-assemblies, which are the smallest composite units of the hull. Taking an automated sub-assembly welding line in a shipyard as the research object, this paper constructs a mathematical model aimed at optimizing production efficiency based on an analysis of its operational processes and characteristics, and proposes an improved two-layer fruit fly optimization algorithm (ITLFOA) for solving the automated sub-assembly welding line scheduling problem (ASWLSP). The proposed ITLFOA features a two-layer nested algorithm structure, with several key improvements for both optimization layers, such as heuristic rules for spatial layout, improved neighborhood operators, an added disturbance mechanism, and an added population diversity restoration mechanism. Finally, the performance of ITLFOA is validated through a comparative analysis against the initial two-layer fruit fly optimization algorithm (initial TLFOA), the well-established Variable Neighborhood Search (VNS) algorithm, and actual manual operation results on a specific shipyard case. Full article
(This article belongs to the Special Issue Advances in AI and Optimization for Scheduling Problems in Industry)
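For intuition on the smell/vision search the two-layer ITLFOA builds on, here is a minimal single-layer fruit fly optimization loop; the paper’s nested layers, neighborhood operators, and disturbance mechanism are not reproduced, and the objective is a toy function:

```python
# Minimal fruit fly optimization (FOA): random smell-based steps around the
# swarm location, then a vision phase that moves the swarm to the best find.
import random

def foa_minimize(f, dim=2, flies=30, iters=200, radius=1.0):
    best_x = [random.uniform(-5, 5) for _ in range(dim)]
    best_val = f(best_x)
    for _ in range(iters):
        for _ in range(flies):
            cand = [x + random.uniform(-radius, radius) for x in best_x]
            val = f(cand)
            if val < best_val:          # vision phase: relocate the swarm
                best_x, best_val = cand, val
    return best_x, best_val

sphere = lambda x: sum(v * v for v in x)   # toy objective
x, v = foa_minimize(sphere)
print(f"best value {v:.2e} at {x}")
```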

22 pages, 1714 KB  
Article
Integrating Machine-Learning Methods with Importance–Performance Maps to Evaluate Drivers for the Acceptance of New Vaccines: Application to AstraZeneca COVID-19 Vaccine
by Jorge de Andrés-Sánchez, Mar Souto-Romero and Mario Arias-Oliva
AI 2026, 7(1), 34; https://doi.org/10.3390/ai7010034 - 21 Jan 2026
Viewed by 118
Abstract
Background: The acceptance of new vaccines under uncertainty—such as during the COVID-19 pandemic—poses a major public health challenge because efficacy and safety information is still evolving. Methods: We propose an integrative analytical framework that combines a theory-based model of vaccine acceptance—the cognitive–affective–normative (CAN) model—with machine-learning techniques (decision tree regression, random forest, and Extreme Gradient Boosting) and SHapley Additive exPlanations (SHAP) integrated into an importance–performance map (IPM) to prioritize determinants of vaccination intention. Using survey data collected in Spain in September 2020 (N = 600), when the AstraZeneca vaccine had not yet been approved, we examine the roles of perceived efficacy (EF), fear of COVID-19 (FC), fear of the vaccine (FV), and social influence (SI). Results: EF and SI consistently emerged as the most influential determinants across modelling approaches. Ensemble learners (random forest and Extreme Gradient Boosting) achieved stronger out-of-sample predictive performance than the single decision tree, while decision tree regression provided an interpretable, rule-based representation of the main decision pathways. Exploiting the local nature of SHAP values, we also constructed SHAP-based IPMs for the full sample and for the low-acceptance segment, enhancing the policy relevance of the prioritization exercise. Conclusions: By combining theory-driven structural modelling with predictive and explainable machine learning, the proposed framework offers a transparent and replicable tool to support the design of vaccination communication strategies and can be transferred to other settings involving emerging health technologies. Full article
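A rough sense of an importance–performance map can be had with scikit-learn alone. The sketch below substitutes the booster’s impurity importances for the paper’s SHAP values and uses mean rescaled feature levels as “performance”; the data and variable names are synthetic stand-ins:

```python
# Importance-performance map (IPM) sketch: driver weight vs. mean driver level.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(1, 7, size=(600, 4))            # EF, FC, FV, SI on 1-7 scales
names = ["EF", "FC", "FV", "SI"]
y = 0.6 * X[:, 0] - 0.3 * X[:, 2] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 600)

model = GradientBoostingRegressor(random_state=0).fit(X, y)
importance = model.feature_importances_         # proxy for driver importance
performance = (X.mean(axis=0) - 1) / 6          # mean level rescaled to [0, 1]

for n, imp, perf in sorted(zip(names, importance, performance), key=lambda t: -t[1]):
    print(f"{n}: importance={imp:.2f}, performance={perf:.2f}")
# High-importance / low-performance drivers are the priority targets for
# communication campaigns.
```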

7 pages, 1557 KB  
Proceeding Paper
Allais–Ellsberg Convergent Markov–Network Game
by Adil Ahmad Mughal
Proceedings 2026, 135(1), 2; https://doi.org/10.3390/proceedings2026135002 - 19 Jan 2026
Viewed by 90
Abstract
Behavioral deviations from subjective expected utility theory, most famously captured by the Allais paradox and the Ellsberg paradox, have inspired extensive theoretical and experimental research into risk and ambiguity preferences. While the existing literature analyzes these paradoxes independently, little work explores how such heterogeneously biased agents interact in networked strategic environments. Our paper fills this gap by modeling a convergent Markov–network game between Allais-type and Ellsberg-type players, each endowed with fully enriched loss matrices that reflect their distinct probabilistic and ambiguity attitudes. We define convergent priors as those inducing a spectral radius below 1 in the iterated enriched matrices, ensuring iterative convergence under a matrix-based update rule. Players minimize their losses under these priors in each iteration, converging to an equilibrium where no further updates are feasible. We analyze this convergence under three learning regimes—homophily, heterophily, and type-neutral randomness—each defined via distinct neighborhood learning dynamics. To validate the equilibrium, we construct a risk-neutral measure by transforming losses into payoffs and derive a riskless rate of return representing players’ subjective indifference to risk; applying risk-neutral pricing logic to behavioral matrices in this way is novel. This framework unifies paradox-type decision makers within a networked Markovian environment (stochastic adjacency matrix), extending models of dynamic learning and providing a novel equilibrium characterization for heterogeneous, ambiguity-averse agents in structured interactions. Full article
(This article belongs to the Proceedings of The 1st International Electronic Conference on Games (IECGA 2025))
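The convergence criterion is classical: an iterated linear update converges from any start if the update matrix’s spectral radius is below 1. A minimal NumPy check with an illustrative stand-in matrix, not the paper’s enriched loss matrices:

```python
# Spectral-radius convergence check for an iterated update x <- M x + b.
import numpy as np

def spectral_radius(M: np.ndarray) -> float:
    return float(np.max(np.abs(np.linalg.eigvals(M))))

M = np.array([[0.5, 0.3],
              [0.2, 0.4]])               # stand-in update matrix
b = np.array([1.0, 0.5])                 # stand-in offset term
print("spectral radius:", spectral_radius(M))   # 0.7 < 1 -> convergent

x = np.zeros(2)
for _ in range(200):                     # iterate the matrix-based update rule
    x = M @ x + b
print(x, "fixed point:", np.linalg.solve(np.eye(2) - M, b))  # iterates match
```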

25 pages, 1436 KB  
Article
Entropy-Augmented Forecasting and Portfolio Construction at the Industry-Group Level: A Causal Machine-Learning Approach Using Gradient-Boosted Decision Trees
by Gil Cohen, Avishay Aiche and Ron Eichel
Entropy 2026, 28(1), 108; https://doi.org/10.3390/e28010108 - 16 Jan 2026
Viewed by 216
Abstract
This paper examines whether information-theoretic complexity measures enhance industry-group return forecasting and portfolio construction within a machine-learning framework. Using daily data for 25 U.S. GICS industry groups spanning more than three decades, we augment gradient-boosted decision tree models with Shannon entropy and fuzzy entropy computed from recent return dynamics. Models are estimated at weekly, monthly, and quarterly horizons using a strictly causal rolling-window design and translated into two economically interpretable allocation rules: a maximum-profit strategy and a minimum-risk strategy. Results show that the top-performing strategy, the weekly maximum-profit model augmented with Shannon entropy, achieves an accumulated return exceeding 30,000%, substantially outperforming both the baseline model and the fuzzy-entropy variant. On monthly and quarterly horizons, entropy and fuzzy entropy generate smaller but robust improvements by maintaining lower volatility and better downside protection. Industry allocations display stable and economically interpretable patterns: profit-oriented strategies concentrate primarily in cyclical and growth-sensitive industries such as semiconductors, automobiles, technology hardware, banks, and energy, while minimum-risk strategies consistently favor defensive industries including utilities, food, beverage and tobacco, real estate, and consumer staples. Overall, the results demonstrate that entropy-based complexity measures improve both economic performance and interpretability, yielding industry-rotation strategies that are simultaneously more profitable, more stable, and more transparent. Full article
(This article belongs to the Special Issue Entropy, Artificial Intelligence and the Financial Markets)
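The Shannon-entropy feature can be sketched directly from a histogram of a rolling return window; the bin count, range, and window length below are illustrative choices, not the paper’s settings:

```python
# Shannon entropy of a return window, from a fixed-range histogram.
import numpy as np

def shannon_entropy(returns, bins=20, lo=-0.1, hi=0.1):
    counts, _ = np.histogram(returns, bins=bins, range=(lo, hi))
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(1)
calm = rng.normal(0, 0.005, 60)          # low-dispersion regime
turbulent = rng.normal(0, 0.03, 60)      # high-dispersion regime
print(f"calm window entropy:      {shannon_entropy(calm):.2f} bits")
print(f"turbulent window entropy: {shannon_entropy(turbulent):.2f} bits")
# The entropy value joins the return features as an extra input column to the
# gradient-boosted trees at each rolling-window step.
```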

32 pages, 13734 KB  
Article
Objective Programming Partitions and Rule-Based Spanning Tree for UAV Swarm Regional Coverage Path Planning
by Bangrong Ruan, Tian Jing, Meigen Huang, Xi Ning, Jiarui Wang, Boquan Zhang and Fengyao Zhi
Drones 2026, 10(1), 60; https://doi.org/10.3390/drones10010060 - 14 Jan 2026
Viewed by 205
Abstract
To address the problem of regional coverage path planning for unmanned aerial vehicle (UAV) swarms, this study proposes an algorithm based on objective programming partitions (OPP) and rule-based spanning tree coverage (RSTC). Aiming at the shortcomings of the traditional Divide Areas based on Robots Initial Positions combined with Spanning Tree Coverage (DARP-STC) algorithm in its two core stages, region partition and spanning tree generation, the proposed algorithm conducts a targeted design and optimization for each. In the region partition stage, objective programming and 0–1 integer programming models are adopted to realize the balanced allocation of UAVs’ task regions. In the spanning tree generation stage, a rule is designed to construct a spanning tree of coverage paths and is proven to achieve the minimum number of turns for the UAV under certain conditions. Both simulations and physical experiments demonstrate that the proposed algorithm can not only significantly reduce the number of turns of UAVs but also enhance the efficiency and coverage degree of tasks for UAV swarms. Full article
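The flavor of a turn-reducing rule can be sketched as a depth-first spanning-tree construction that prefers to extend each branch straight ahead; the grid size and tie-breaking below are illustrative, not the paper’s proven rule:

```python
# Grow a spanning tree over grid cells, preferring the branch's current heading,
# since straight tree edges translate into fewer turns in the coverage path.
def build_tree(rows, cols):
    visited, edges = set(), []
    def grow(cell, heading):
        visited.add(cell)
        r, c = cell
        # same-heading neighbor first, then the remaining directions
        dirs = sorted([(0, 1), (0, -1), (1, 0), (-1, 0)],
                      key=lambda d: d != heading)
        for dr, dc in dirs:
            nxt = (r + dr, c + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and nxt not in visited:
                edges.append((cell, nxt))
                grow(nxt, (dr, dc))
    grow((0, 0), (0, 1))
    return edges

edges = build_tree(4, 6)
# Rough proxy for turns: direction changes between consecutively added edges.
turns = sum(1 for (a, b), (c, d) in zip(edges, edges[1:])
            if (b[0] - a[0], b[1] - a[1]) != (d[0] - c[0], d[1] - c[1]))
print(f"{len(edges)} tree edges, {turns} direction changes during construction")
```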

21 pages, 1506 KB  
Article
Mapping Morality in Marketing: An Exploratory Study of Moral and Emotional Language in Online Advertising
by Mauren S. Cardenas-Fontecha, Leonardo H. Talero-Sarmiento and Diego A. Vasquez-Caballero
J. Theor. Appl. Electron. Commer. Res. 2026, 21(1), 39; https://doi.org/10.3390/jtaer21010039 - 14 Jan 2026
Viewed by 275
Abstract
Understanding how moral and emotional language operates in paid social advertising is essential for evaluating persuasion and its ethical contours. We provide a descriptive map of Moral Foundations Theory (MFT) language in Meta ad copy (Facebook/Instagram) drawn from seven global beverage brands across eight English-speaking markets. Using the moralstrength toolkit, we implement a two-channel pipeline that combines an unsupervised semantic estimator (SIMON) with supervised classifiers, enforces a strict cross-channel consensus rule, and adds a non-overriding purity diagnostic to reduce attribute-based false positives. The corpus comprises 758 text units, of which only 25 ads (3.3%) exhibit strong consensus, indicating that much of the copy is either non-moral or linguistically ambiguous. Within this high-consensus subset, the distribution of moral cues varies systematically by brand and category, with loyalty, fairness, and purity emerging as the most prominent frames. A valence pass (VADER) indicates that moralized copy tends toward negative valence, yet it may still yield a constructive overall tone when advertisers follow a crisis–resolution structure in which high-intensity moral cues set the stakes while surrounding copy positions the brand as the solution. We caution that text-only models undercapture multimodal signaling and that platform policies and algorithmic recombination shape which moral cues appear in copy. Overall, the study demonstrates both the promise and the limits of current text-based MFT estimators for advertising: they support transparent, reproducible mapping of moral rhetoric, but future progress requires multimodal, domain-sensitive pipelines, policy-aware sampling, and (where available) impression/spend weighting to contextualize descriptive labels. Full article
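The strict cross-channel consensus rule reduces to a small predicate. A sketch with illustrative stand-ins for the two moralstrength channels:

```python
# Strict consensus: keep a moral-foundation label only when the unsupervised
# and supervised channels agree exactly; otherwise leave the ad unlabeled.
from typing import Optional

def consensus(unsupervised: Optional[str], supervised: Optional[str]) -> Optional[str]:
    return unsupervised if unsupervised == supervised else None

ads = [
    ("loyalty", "loyalty"),    # kept: both channels agree
    ("fairness", "purity"),    # dropped: channels disagree
    (None, "care"),            # dropped: one channel abstains
]
labels = [consensus(u, s) for u, s in ads]
print(labels)  # ['loyalty', None, None] -- only strong-consensus ads survive
```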

12 pages, 743 KB  
Article
KRAS Mutations in Circulating Tumor DNA for Lung Cancer Diagnosis: A Comprehensive Meta-Analysis
by Karolina Buszka, Łukasz Gąsiorowski, Claudia Dompe, Anna Szulta, Michał Nowicki, Agata Kolecka-Bednarczyk and Joanna Budna-Tukan
Cancers 2026, 18(2), 250; https://doi.org/10.3390/cancers18020250 - 14 Jan 2026
Viewed by 133
Abstract
Background: Mutations in the KRAS gene play a pivotal role in lung cancer development and progression and are becoming increasingly important in therapeutic decision-making. The detection of these mutations in circulating tumor DNA (ctDNA) has attracted attention as a minimally invasive diagnostic approach. However, the accuracy reported in different studies varies widely. Methods: We conducted a systematic review and meta-analysis in accordance with the PRISMA-DTA guidelines. Eligible studies evaluated the detection of KRAS mutations in ctDNA in plasma or serum for lung cancer diagnosis and reported sufficient data to construct 2 × 2 contingency tables. Primary pooled estimates of sensitivity, specificity and likelihood ratios were calculated using aggregated 2 × 2 contingency tables. Additionally, a bivariate random-effects model was applied in a secondary analysis to investigate between-study heterogeneity. Results: Nine diagnostic study arms comprising 691 patients met the inclusion criteria. Across all datasets, there were 255 true positives, 19 false positives, 136 false negatives, and 281 true negatives. The pooled sensitivity was 65.2%, while the pooled specificity was 93.7%. The positive likelihood ratio was 10.35, and the negative likelihood ratio was 0.37, resulting in a diagnostic odds ratio of 28.0, which indicates strong rule-in capability. Sensitivity showed moderate heterogeneity across studies. In contrast, specificity demonstrated minimal heterogeneity. Conclusions: ctDNA-based detection of KRAS mutations demonstrates high specificity but moderate sensitivity for diagnosing lung cancer. These findings suggest that a KRAS liquid biopsy could be a valuable complementary diagnostic tool, particularly when a tissue biopsy is not possible or is inadequate, and it could support more personalized decision-making as analytical technologies continue to advance. Full article
(This article belongs to the Special Issue Liquid Biopsy for Lung Cancer Treatment (2nd Edition))
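The pooled metrics reported above can be re-derived from the aggregated 2 × 2 counts. A worked check; the small difference from the published LR+ of 10.35 plausibly reflects the pooling method:

```python
# Pooled diagnostic accuracy from the aggregated 2x2 counts in the abstract.
TP, FP, FN, TN = 255, 19, 136, 281

sens = TP / (TP + FN)            # 255/391 = 0.652
spec = TN / (TN + FP)            # 281/300 = 0.937
lr_pos = sens / (1 - spec)       # ~10.3 (reported: 10.35)
lr_neg = (1 - sens) / spec       # ~0.37
dor = lr_pos / lr_neg            # ~27.7 (reported: 28.0)

print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
print(f"LR+ {lr_pos:.2f}, LR- {lr_neg:.2f}, DOR {dor:.1f}")
```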

14 pages, 383 KB  
Article
From Mathematics to Art: A Petri Net Representation of the Fibonacci Sequence and Its Fractal Geometry
by David Mailland and Iwona Grobelna
Fractal Fract. 2026, 10(1), 53; https://doi.org/10.3390/fractalfract10010053 - 13 Jan 2026
Viewed by 247
Abstract
Mathematics, as Bertrand Russell noted, possesses both truth and beauty. In this work, we revisit the classical Fibonacci recurrence through a minimal Petri net. Starting from a minimal layered construction that mirrors the second-order additive rule F_n = F_{n-1} + F_{n-2}, we show that the marking dynamics of the associated net generate a combinatorial triangle whose parity structure reveals a self-similar, Sierpiński-like pattern. To the best of our knowledge, this oblique fractal geometry has never been formally documented. We provide a formal definition of the underlying Petri net, analyse its computational properties, and explore the emergence of higher-order harmonics when token markings are considered modulo primes. The study highlights how a classical recurrence gives rise to previously unnoticed geometric regularities at the intersection of mathematics and art. Beyond its mathematical interest, the construction illustrates how minimal Petri net dynamics can be used as formal specification patterns for distributed, event-driven systems. Full article
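A toy rendition of the parity idea, not the paper’s formal net: an additive triangle printed modulo 2 exposes a Sierpiński-like pattern, and its shallow-diagonal sums reproduce the Fibonacci numbers:

```python
# Additive triangle (each entry is the sum of the two above) viewed modulo 2.
ROWS = 16
tri = [[1]]
for n in range(1, ROWS):
    prev = tri[-1]
    tri.append([1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1])

for row in tri:                          # parity view: odd -> '#', even -> '.'
    print("".join("#" if v % 2 else "." for v in row).center(2 * ROWS))

# Shallow-diagonal sums of the same triangle give the Fibonacci numbers,
# tying the parity pattern back to F_n = F_{n-1} + F_{n-2}.
fib = [sum(tri[n - k][k] for k in range(n // 2 + 1)) for n in range(ROWS)]
print(fib)                               # 1, 1, 2, 3, 5, 8, 13, 21, ...
```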

23 pages, 1793 KB  
Article
Multisource POI-Matching Method Based on Deep Learning and Feature Fusion
by Yazhou Ding, Qi Tian, Yun Han, Cailin Li, Yue Wang and Baoyun Guo
Appl. Sci. 2026, 16(2), 796; https://doi.org/10.3390/app16020796 - 13 Jan 2026
Viewed by 155
Abstract
In the fields of geographic information science and location-based services, the fusion of multisource Point-of-Interest (POI) data is of remarkable importance but faces several challenges. Existing matching methods, including those based on single non-spatial attributes, single spatial geometric features, and traditional hybrid methods with fixed rules, suffer from limitations such as reliance on a single feature and inadequate consideration of spatial context. This study takes Dongcheng District, Beijing, as the research area and proposes a POI-matching method based on multi-feature value calculation and a deep neural network (DNN) model. The method comprehensively incorporates multidimensional features such as names, addresses, and spatial distances. The approach also incorporates an improved multilevel name association strategy, an address similarity calculation using weighted edit distance, and a spatial distance model that accounts for spatial density and regional functional types. Furthermore, the method utilizes a deep learning model to automatically learn POI entity features and optimize the matching rules. Experimental results show that the precision, recall, and F1 value of the proposed method achieved 97.2%, 97.0%, and 0.971, respectively, notably outperforming traditional methods. Overall, this method provides an efficient and reliable solution for geospatial data integration and POI applications, and offers strong support for GIS optimization, smart city construction, and scientific urban/town planning. However, this method still has room for improvement in terms of data source quality and algorithm optimization. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
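The weighted edit distance for addresses is a standard Levenshtein dynamic program with a non-uniform substitution table. A sketch with an illustrative weight table, not the paper’s calibrated values:

```python
# Levenshtein DP where selected substitutions cost less than arbitrary edits.
def weighted_edit_distance(a: str, b: str, sub_cost=None) -> float:
    sub_cost = sub_cost or {}
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = float(i)
    for j in range(1, n + 1):
        d[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0.0 if a[i-1] == b[j-1] else sub_cost.get((a[i-1], b[j-1]), 1.0)
            d[i][j] = min(d[i-1][j] + 1, d[i][j-1] + 1, d[i-1][j-1] + cost)
    return d[m][n]

a, b = "Block 0, Main St", "Block O, Main St"
w = {("0", "O"): 0.2, ("O", "0"): 0.2}   # cheap swap for a common transcription slip
dist = weighted_edit_distance(a, b, w)
print(f"distance {dist}, similarity {1 - dist / max(len(a), len(b)):.3f}")
```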

35 pages, 9791 KB  
Article
A Holistic Design Framework for Post-Disaster Housing Using Interlinked Modules for Diverse Architectural Applications
by Ali Mehdizade and Ahmad Walid Ayoobi
Sustainability 2026, 18(2), 778; https://doi.org/10.3390/su18020778 - 12 Jan 2026
Viewed by 410
Abstract
Providing effective post-disaster housing remains a globally complex challenge shaped by interrelated constraints, including environmental sustainability, socio-cultural compatibility, logistical capacity, and economic feasibility. Contemporary responses therefore require housing solutions that extend beyond rapid deployment to incorporate flexibility, adaptability, and long-term spatial transformation. In this context, this study advances a design-oriented, computational framework that positions parametric design at the core of post-disaster housing production within the broader digital transformation of the construction sector. The research proposes an adaptive parametric–modular housing system in which standardized architectural units are governed by a rule-based aggregation logic capable of generating context-responsive spatial configurations across multiple scales and typologies. The methodology integrates a qualitative synthesis of global post-disaster housing literature with a quantitative computational workflow developed in Grasshopper for Rhinoceros 3D (version 8). Algorithmic scripting defines a standardized spatial grid and parametrically regulates key building components (structural systems, façade assemblies, and site-specific environmental parameters), enabling real-time configuration, customization, and optimization of housing units in response to diverse user needs and varying climatic, social, and economic conditions while maintaining constructability. The applicability of the framework is examined through a case study of the Düzce Permanent Housing context, where limitations of existing post-disaster stock, such as spatial rigidity, restricted growth capacity, and fragmented public-space integration, are contrasted with alternative settlement scenarios generated by the proposed system. The findings demonstrate that the framework supports multi-scalar and multi-typological reconstruction, extending beyond individual dwellings to include public, service, and open-space components. Overall, the study contributes a transferable computational methodology that integrates modular standardization with configurational diversity and user-driven adaptability, offering a sustainable pathway for transforming temporary post-disaster shelters into permanent, resilient, and socially integrated community assets. Full article
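The rule-based aggregation logic can be caricatured outside Grasshopper as module placement on a grid gated by an adjacency rule. Module types, rules, and grid size below are illustrative assumptions, not the framework’s actual vocabulary:

```python
# Rule-gated aggregation: a module is placed only where it touches the existing
# aggregate and every touched module accepts its type.
GRID = 6
layout = {(2, 2): "L"}                   # seed: one living module

RULES = {"L": {"L", "S"}, "S": {"L"}}    # which neighbor types each module accepts

def can_place(cell, kind, layout):
    r, c = cell
    neighbors = [layout.get(n) for n in [(r+1, c), (r-1, c), (r, c+1), (r, c-1)]]
    placed = [k for k in neighbors if k]
    return bool(placed) and all(kind in RULES[k] for k in placed)

for cell, kind in [((2, 3), "S"), ((2, 4), "S"), ((3, 2), "L")]:
    if can_place(cell, kind, layout):    # (2, 4) is rejected: S cannot abut S
        layout[cell] = kind

for r in range(GRID):                    # print the resulting configuration
    print(" ".join(layout.get((r, c), ".") for c in range(GRID)))
```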
