Search Results (2,758)

Search Parameters:
Keywords = coherent systems

23 pages, 1192 KiB  
Article
Multi-Model Dialectical Evaluation of LLM Reasoning Chains: A Structured Framework with Dual Scoring Agents
by Catalin Anghel, Andreea Alexandra Anghel, Emilia Pecheanu, Ioan Susnea, Adina Cocu and Adrian Istrate
Informatics 2025, 12(3), 76; https://doi.org/10.3390/informatics12030076 - 1 Aug 2025
Abstract
(1) Background and objectives: Large language models (LLMs) such as GPT, Mistral, and LLaMA exhibit strong capabilities in text generation, yet assessing the quality of their reasoning—particularly in open-ended and argumentative contexts—remains a persistent challenge. This study introduces Dialectical Agent, an internally developed modular framework designed to evaluate reasoning through a structured three-stage process: opinion, counterargument, and synthesis. The framework enables transparent and comparative analysis of how different LLMs handle dialectical reasoning. (2) Methods: Each stage is executed by a single model, and final syntheses are scored via two independent LLM evaluators (LLaMA 3.1 and GPT-4o) based on a rubric with four dimensions: clarity, coherence, originality, and dialecticality. In parallel, a rule-based semantic analyzer detects rhetorical anomalies and ethical values. All outputs and metadata are stored in a Neo4j graph database for structured exploration. (3) Results: The system was applied to four open-weight models (Gemma 7B, Mistral 7B, Dolphin-Mistral, Zephyr 7B) across ten open-ended prompts on ethical, political, and technological topics. The results show consistent stylistic and semantic variation across models, with moderate inter-rater agreement. Semantic diagnostics revealed differences in value expression and rhetorical flaws not captured by rubric scores. (4) Originality: The framework is, to our knowledge, the first to integrate multi-stage reasoning, rubric-based and semantic evaluation, and graph-based storage into a single system. It enables replicable, interpretable, and multidimensional assessment of generative reasoning—supporting researchers, developers, and educators working with LLMs in high-stakes contexts.

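The three-stage evaluation loop described above maps naturally onto a small pipeline. Below is a minimal Python sketch of the idea; the `generate` stub, the prompt wording, and the model names are hypothetical placeholders, not the authors' Dialectical Agent implementation.

```python
# Minimal sketch of a three-stage dialectical pipeline with dual scoring.
# `generate(model, prompt)` is a hypothetical stub standing in for any LLM
# call; swap in a real client to experiment.

RUBRIC = ("clarity", "coherence", "originality", "dialecticality")

def generate(model: str, prompt: str) -> str:
    """Placeholder for an LLM call (e.g., to a local open-weight model)."""
    return f"[{model} output for: {prompt[:40]}...]"

def dialectical_run(model: str, topic: str) -> dict:
    opinion = generate(model, f"State your position on: {topic}")
    counter = generate(model, f"Argue against this position: {opinion}")
    synthesis = generate(model, f"Synthesize both views:\n{opinion}\n{counter}")
    return {"opinion": opinion, "counter": counter, "synthesis": synthesis}

def score(evaluator: str, synthesis: str) -> dict:
    # Each evaluator returns one score per rubric dimension (stubbed here).
    return {dim: generate(evaluator, f"Rate {dim} (1-5) of: {synthesis}")
            for dim in RUBRIC}

result = dialectical_run("mistral-7b", "Should AI systems explain their decisions?")
scores = {ev: score(ev, result["synthesis"]) for ev in ("llama-3.1", "gpt-4o")}
```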
20 pages, 413 KiB  
Article
Spectral Graph Compression in Deploying Recommender Algorithms on Quantum Simulators
by Chenxi Liu, W. Bernard Lee and Anthony G. Constantinides
Computers 2025, 14(8), 310; https://doi.org/10.3390/computers14080310 - 1 Aug 2025
Abstract
This follow-up scientific case study builds on prior research to explore the computational challenges of applying quantum algorithms to financial asset management, focusing specifically on solving the graph-cut problem for investment recommendation. Unlike our prior study, which focused on idealized QAOA performance, this work introduces a graph compression pipeline that enables QAOA deployment under real quantum hardware constraints. This study investigates quantum-accelerated spectral graph compression for financial asset recommendations, addressing scalability and regulatory constraints in portfolio management. We propose a hybrid framework combining the Quantum Approximate Optimization Algorithm (QAOA) with spectral graph theory to solve the Max-Cut problem for investor clustering. Our methodology leverages quantum simulators (cuQuantum and Cirq-GPU) to evaluate performance against classical brute-force enumeration, with graph compression techniques enabling deployment on resource-constrained quantum hardware. The results underscore that efficient graph compression is crucial for successful implementation. The framework bridges theoretical quantum advantage with practical financial use cases, though hardware limitations (qubit counts, coherence times) necessitate hybrid quantum-classical implementations. These findings advance the deployment of quantum algorithms in mission-critical financial systems, particularly for high-dimensional investor profiling under regulatory constraints.
(This article belongs to the Section AI-Driven Innovations)

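For readers unfamiliar with the compression step, the sketch below shows a generic spectral route: embed the graph's nodes using the low Laplacian eigenvectors (the usual basis for merging nodes before a qubit-limited QAOA run) and round a two-way cut from the signs of the top eigenvector, a classical textbook heuristic for Max-Cut. The random weighted graph is purely illustrative; this is not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.random((n, n))
W = np.triu(A, 1) + np.triu(A, 1).T   # random weighted undirected graph

L = np.diag(W.sum(axis=1)) - W        # graph Laplacian L = D - W
vals, vecs = np.linalg.eigh(L)

# "Compression": embed nodes in the k smallest nontrivial eigenvectors;
# nodes close in this embedding can be merged before handing the problem
# to a qubit-limited QAOA device.
k = 2
embedding = vecs[:, 1:1 + k]

# Classical spectral heuristic for Max-Cut: round the signs of the
# eigenvector belonging to the largest Laplacian eigenvalue.
side = np.sign(vecs[:, -1])
cut_weight = W[side > 0][:, side < 0].sum()
print("heuristic cut weight:", cut_weight)
```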
13 pages, 564 KiB  
Article
Enhanced Semantic Retrieval with Structured Prompt and Dimensionality Reduction for Big Data
by Donghyeon Kim, Minki Park, Jungsun Lee, Inho Lee, Jeonghyeon Jin and Yunsick Sung
Mathematics 2025, 13(15), 2469; https://doi.org/10.3390/math13152469 - 31 Jul 2025
Abstract
The exponential increase in textual data generated across sectors such as healthcare, finance, and smart manufacturing has intensified the need for effective Big Data analytics. Large language models (LLMs) have become critical tools because of their advanced language processing capabilities. However, their static nature limits their ability to incorporate real-time and domain-specific knowledge. Retrieval-augmented generation (RAG) addresses these limitations by enriching LLM outputs through external content retrieval. Nevertheless, traditional RAG systems remain inefficient, often exhibiting high retrieval latency, redundancy, and diminished response quality when scaled to large datasets. This paper proposes an innovative structured RAG framework specifically designed for large-scale Big Data analytics. The framework transforms unstructured partial prompts into structured, semantically coherent partial prompts, leveraging element-specific embedding models and dimensionality reduction techniques such as principal component analysis. To further improve retrieval accuracy and computational efficiency, we introduce a multi-level filtering approach integrating semantic constraints and redundancy elimination. In the experiments, the proposed method was compared with structured-format RAG. After generating prompts with both methods, silhouette scores were computed to assess the quality of the embedding clusters. The proposed method outperformed the baseline, improving clustering quality by 32.3%. These results demonstrate the effectiveness of the framework in enhancing LLMs for accurate, diverse, and efficient decision-making in complex Big Data environments.
(This article belongs to the Special Issue Big Data Analysis, Computing and Applications)

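The silhouette comparison reported above is straightforward to reproduce in spirit with scikit-learn. The sketch below substitutes random vectors for real prompt embeddings and plain k-means for the paper's retrieval pipeline, so it illustrates the metric and the PCA step only, not the proposed framework.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(42)
# Stand-in for prompt embeddings: 300 vectors of dimension 768
X = rng.normal(size=(300, 768))

# Dimensionality reduction via principal component analysis
X_red = PCA(n_components=50).fit_transform(X)

# Cluster the reduced embeddings and score cluster quality
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_red)
print("silhouette:", silhouette_score(X_red, labels))
```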
15 pages, 2107 KiB  
Article
Optimal Coherence Length Control in Interferometric Fiber Optic Hydrophones via PRBS Modulation: Theory and Experiment
by Wujie Wang, Qihao Hu, Lina Ma, Fan Shang, Hongze Leng and Junqiang Song
Sensors 2025, 25(15), 4711; https://doi.org/10.3390/s25154711 - 30 Jul 2025
Abstract
Interferometric fiber optic hydrophones (IFOHs) are highly sensitive for underwater acoustic detection but face challenges owing to the trade-off between laser monochromaticity and coherence length. In this study, we propose a pseudo-random binary sequence (PRBS) phase modulation method for laser coherence length control, establishing the first theoretical model that quantitatively links PRBS parameters to coherence length and elucidating the mechanism underlying its suppression of parasitic interference noise. Furthermore, our findings demonstrate that while reducing the laser coherence length effectively mitigates parasitic interference noise in IFOHs, this reduction also elevates background noise owing to diminished interference visibility. Consequently, coherence length modulation requires a balanced optimization that not only suppresses parasitic noise but also minimizes visibility-induced background noise, thereby determining the system-specific optimal coherence length. Through theoretical modeling and experimental validation, we determined that for IFOH systems with a 500 ns delay, the optimal coherence lengths for link fibers of 3.3 km and 10 km are 0.93 m and 0.78 m, respectively. At the optimal coherence length, the background noise level in the 3.3 km system reaches −84.5 dB (re: rad/√Hz @1 kHz), representing an additional noise suppression of 4.5 dB beyond the original suppression. This study provides a comprehensive theoretical and experimental solution to the long-standing contradiction between high laser monochromaticity and stability and an appropriate coherence length, establishing a coherence modulation noise suppression framework for hydrophones, gyroscopes, distributed acoustic sensing (DAS), and other fields.
(This article belongs to the Section Optical Sensors)

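To put the reported 0.93 m figure in context: for a Lorentzian line, coherence length is commonly estimated as L_c ≈ c/(πΔν), and broadband phase modulation at a PRBS chip rate of roughly B widens the effective linewidth to about B. The sketch below applies that textbook estimate with hypothetical chip rates; it is not the paper's quantitative PRBS model.

```python
# Back-of-envelope coherence length from effective linewidth (Lorentzian):
# L_c ~ c / (pi * delta_nu). PRBS phase modulation at chip rate B broadens
# the effective linewidth to roughly B (textbook estimate, not the paper's model).
import math

c = 3.0e8  # speed of light, m/s

def coherence_length(delta_nu_hz: float) -> float:
    return c / (math.pi * delta_nu_hz)

for chip_rate in (50e6, 100e6, 200e6):  # hypothetical PRBS chip rates, Hz
    print(f"B = {chip_rate/1e6:.0f} MHz -> L_c ~ {coherence_length(chip_rate):.2f} m")
```

A 100 MHz chip rate gives L_c on the order of 1 m, consistent with the sub-meter optima the study reports.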
31 pages, 12776 KiB  
Article
Multi-Source Data Integration for Sustainable Management Zone Delineation in Precision Agriculture
by Dušan Jovanović, Miro Govedarica, Milan Gavrilović, Ranko Čabilovski and Tamme van der Wal
Sustainability 2025, 17(15), 6931; https://doi.org/10.3390/su17156931 - 30 Jul 2025
Abstract
Accurate delineation of within-field management zones (MZs) is essential for implementing precision agriculture, particularly in spatially heterogeneous environments. This study evaluates the spatiotemporal consistency and practical value of MZs derived from three complementary data sources: electromagnetic conductivity (EM38-MK2), basic soil chemical properties (pH, humus, P2O5, K2O, nitrogen), and vegetation/surface indices (NDVI, SAVI, LCI, BSI) derived from Sentinel-2 imagery. Using kriging, fuzzy k-means clustering, percentile-based classification, and Weighted Overlay Analysis (WOA), MZs were generated for a five-year period (2018–2022), with 2–8 zone classes. Stability and agreement were assessed using the Cohen Kappa, Jaccard, and Dice coefficients on systematic grid samples. Results showed that EM38-MK2 and humus-weighted BSP data produced the most consistent zones (Kappa > 0.90). Sentinel-2 indices demonstrated strong alignment with subsurface data (r > 0.85), offering a low-cost alternative in data-scarce settings. Optimal zoning was achieved with 3–4 classes, balancing spatial coherence and interpretability. These findings underscore the importance of multi-source data integration for robust and scalable MZ delineation and offer actionable guidelines for both data-rich and resource-limited farming systems. This approach promotes sustainable agriculture by improving input efficiency and allowing for targeted, site-specific field management.

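The zone-agreement assessment lends itself to a compact illustration: cluster two correlated data layers into ordered zone classes, then compare the labellings with Cohen's kappa. The sketch below uses synthetic stand-ins for the interpolated EM38-MK2 and Sentinel-2 layers and ordinary k-means in place of fuzzy k-means; it shows the agreement metric, not the study's full workflow.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import cohen_kappa_score

def zone_labels(values: np.ndarray, k: int) -> np.ndarray:
    """Cluster a 1-D data layer into k zone classes, relabelled low-to-high
    by cluster center so class IDs are comparable across layers."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(values)
    order = np.argsort(km.cluster_centers_.ravel())
    remap = np.empty(k, dtype=int)
    remap[order] = np.arange(k)
    return remap[km.labels_]

rng = np.random.default_rng(1)
layer_a = rng.normal(size=(500, 1))                       # stand-in: EM38-MK2 conductivity
layer_b = layer_a + rng.normal(scale=0.3, size=(500, 1))  # correlated second layer

k = 4  # 3-4 zone classes were optimal in the study
print("kappa:", cohen_kappa_score(zone_labels(layer_a, k), zone_labels(layer_b, k)))
```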
18 pages, 2414 KiB  
Article
Deep Deliberation to Enhance Analysis of Complex Governance Systems: Reflecting on the Great Barrier Reef Experience
by Karen Vella, Allan Dale, Margaret Gooch, Diletta Calibeo, Mark Limb, Rachel Eberhard, Hurriyet Babacan, Jennifer McHugh and Umberto Baresi
Sustainability 2025, 17(15), 6911; https://doi.org/10.3390/su17156911 - 30 Jul 2025
Abstract
Deliberative approaches to governance systems analysis and improvement are rare. Australia’s Great Barrier Reef (GBR) provides the context to describe an innovative approach that combines reflexive and interactive engagement processes to (a) develop and design a framework to assess the GBR’s complex governance system health; and (b) undertake a benchmark assessment of governance system health. We drew upon appreciative inquiry and used multiple lines of evidence, including an extensive literature review, governance system mapping, focus group discussions and personal interviews. Together, these approaches allowed us to effectively engage key actors in value judgements about twenty key characteristic attributes of the governance system. These attributes were organised into four clusters which enabled us to broadly describe and benchmark the system. These included the following: (i) system coherence; (ii) connectivity and capacity; (iii) knowledge application; (iv) operational aspects of governance. This process facilitated deliberative discussion and consensus-building around attribute health and priorities for transformative action. This was achieved through the inclusion of diverse perspectives from across the governance system, analysis of rich datasets, and the provision of guidance from the project’s Steering Committee and Technical Working Group. Our inclusive, collaborative and deliberative approach, its analytical depth, and the framework’s repeatability enable continuous monitoring and adaptive improvement of the GBR governance system and can be readily applied to complex governance systems elsewhere.

20 pages, 1426 KiB  
Article
Hybrid CNN-NLP Model for Detecting LSB Steganography in Digital Images
by Karen Angulo, Danilo Gil, Andrés Yáñez and Helbert Espitia
Appl. Syst. Innov. 2025, 8(4), 107; https://doi.org/10.3390/asi8040107 - 30 Jul 2025
Abstract
This paper proposes a hybrid model that combines convolutional neural networks with natural language processing techniques for least significant bit-based steganography detection in grayscale digital images. The proposed approach identifies hidden messages by analyzing subtle alterations in the least significant bits and validates the linguistic coherence of the extracted content using a semantic filter implemented with spaCy. The system is trained and evaluated on datasets ranging from 5000 to 12,500 images per class, consistently using an 80% training and 20% validation partition. As a result, the model achieves a maximum accuracy and precision of 99.96%, outperforming recognized architectures such as Xu-Net, Yedroudj-Net, and SRNet. Unlike traditional methods, the model reduces false positives by discarding statistically suspicious but semantically incoherent outputs, which is essential in forensic contexts.

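The core detection idea, reading the LSB plane and keeping only linguistically plausible decodings, can be demonstrated end to end in a few lines. The sketch below is an illustration of that idea, not the paper's CNN-NLP hybrid; the spaCy model name `en_core_web_sm` and the alphabetic-token threshold are assumptions.

```python
import numpy as np
import spacy  # requires: pip install spacy && python -m spacy download en_core_web_sm

def embed_lsb(img: np.ndarray, message: str) -> np.ndarray:
    """Hide a UTF-8 message in the least significant bits of a grayscale image."""
    bits = np.unpackbits(np.frombuffer(message.encode(), dtype=np.uint8))
    flat = img.flatten()
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(img.shape)

def extract_lsb(img: np.ndarray, n_chars: int) -> str:
    bits = (img.flatten()[:n_chars * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes().decode(errors="replace")

def looks_coherent(text: str, nlp, threshold: float = 0.8) -> bool:
    """Crude semantic filter: fraction of alphabetic tokens (threshold assumed)."""
    doc = nlp(text)
    return bool(len(doc)) and sum(t.is_alpha for t in doc) / len(doc) >= threshold

rng = np.random.default_rng(7)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
stego = embed_lsb(cover, "meet at the old bridge at dawn")

nlp = spacy.load("en_core_web_sm")
recovered = extract_lsb(stego, 30)
print(recovered, "->", looks_coherent(recovered, nlp))
```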
7 pages, 202 KiB  
Article
Morphological Features in Eyes with Prominent Corneal Endothelial Cell Loss Associated with Primary Angle-Closure Disease
by Yumi Kusumi, Masashi Yamamoto, Masaki Fukui and Masakazu Yamada
J. Clin. Med. 2025, 14(15), 5364; https://doi.org/10.3390/jcm14155364 - 29 Jul 2025
Abstract
Background: Patients with primary angle-closure disease (PACD) who have no history of acute angle-closure glaucoma or laser iridotomy rarely present with prominent corneal endothelial cell density (CECD) loss. To identify factors associated with decreased CECD in PACD, anterior segment parameters were compared between patients with PACD and normal CECD and patients with PACD and decreased CECD, using anterior segment optical coherence tomography (AS-OCT). Patients and Methods: Ten patients with PACD and a CECD of less than 1500/mm² without a history of cataract surgery, acute angle-closure glaucoma, or prior laser glaucoma procedures were identified at the Kyorin Eye Center from January 2018 to July 2023. Patients with obvious corneal guttata or apparent corneal edema were also excluded. Seventeen patients with PACD and normal CECD (normal CECD group) served as the control. Simultaneous biometry of all anterior segment structures, including the cornea, anterior chamber, and iris, was performed using a swept-source AS-OCT system. Results: The corneal curvature radius was significantly larger in the decreased CECD group than in the normal CECD group (p = 0.022, Mann–Whitney test). However, no significant differences were detected in other anterior segment morphology parameters. Multiple regression analysis with CECD as the dependent variable revealed that a large corneal curvature radius was a significant explanatory variable associated with corneal endothelial loss. Conclusions: A flattened corneal curvature may be a risk factor for corneal endothelial loss in patients with PACD.
(This article belongs to the Special Issue Advances in Anterior Segment Surgery: Second Edition)
27 pages, 2565 KiB  
Review
The Role of ESG in Driving Sustainable Innovation in Water Sector: From Gaps to Governance
by Gabriel Minea, Elena Simina Lakatos, Roxana Maria Druta, Alina Moldovan, Lucian Marius Lupu and Lucian Ionel Cioca
Water 2025, 17(15), 2259; https://doi.org/10.3390/w17152259 - 29 Jul 2025
Abstract
The water sector is facing a convergence of systemic challenges generated by climate change, increasing demand, and increasingly stringent regulations, which threaten its operational and strategic sustainability. In this context, the article examines how ESG (environmental, social, governance) principles are integrated into the governance, financing, and management of water resources, with a comparative focus on Romania and the European Union. It aims to assess the extent to which ESG practices contribute to the sustainable transformation of the water sector in the face of growing environmental and socio-economic challenges. The methodology is based on a systematic analysis of policy documents, regulatory frameworks, and ESG standards applicable to the water sector at both national (Romania) and EU levels. This study also investigates investment strategies and their alignment with the EU Taxonomy for Sustainable Activities, enabling a comparative perspective on implementation, gaps and strengths. Findings reveal that while ESG principles are increasingly recognized across Europe, their implementation remains uneven (particularly in Romania) due to unclear standards, limited funding mechanisms, and fragmented policy coordination. ESG integration shows clear potential to foster innovation, improve governance transparency, and support long-term resilience in the water sector. These results underline the need for coherent, integrated policies and stronger institutional coordination to ensure consistent ESG adoption across Member States. Policymakers should prioritize the development of clear guidelines and supportive funding instruments to accelerate sustainable outcomes. The originality of our study lies in its comparative approach, offering an in-depth analysis of ESG integration in the water sector across different governance contexts. It provides valuable insights for advancing policy coherence, investment alignment, and sustainable water resource management at both national and European levels.
(This article belongs to the Section Water Resources Management, Policy and Governance)

27 pages, 5776 KiB  
Review
From “Information” to Configuration and Meaning: In Living Systems, the Structure Is the Function
by Paolo Renati and Pierre Madl
Int. J. Mol. Sci. 2025, 26(15), 7319; https://doi.org/10.3390/ijms26157319 - 29 Jul 2025
Abstract
In this position paper, we argue that the conventional understanding of ‘information’ (as generally conceived in science, in a digital fashion) is overly simplistic and not consistently applicable to living systems, which are open systems that cannot be reduced to any kind of ‘portion’ (building block) ascribed to the category of quantity. Instead, it is a matter of relationships and qualities in an indivisible analogical (and ontological) relationship between any presumed ‘software’ and ‘hardware’ (information/matter, psyche/soma). Furthermore, in biological systems, contrary to Shannon’s definition, which is well-suited to telecommunications and informatics, any kind of ‘information’ is the opposite of internal entropy, as it depends directly on order: it is associated with distinction and differentiation, rather than flattening and homogenisation. Moreover, the high degree of structural compartmentalisation of living matter prevents its energetics from being thermodynamically described by a macroscopic, bulk state function. This requires the Second Principle of Thermodynamics to be redefined in order to make it applicable to living systems. For these reasons, any static, bit-related concept of ‘information’ is inadequate, as it fails to capture the system’s evolution, which is, in essence, the system’s organized coupling to its own environment. From the perspective of quantum field theory (QFT), where many vacuum levels, symmetry breaking, dissipation, coherence and phase transitions can be described, a consistent picture emerges that portrays any living system as a relational process that exists as a flux of context-dependent meanings. This epistemological shift is also associated with a transition away from the ‘particle view’ (first quantisation) characteristic of quantum mechanics (QM) towards the ‘field view’ possible only in QFT (second quantisation). This crucial transition must take place in the life sciences, particularly in their methodological approaches, foremost because biological systems cannot be conceived of as ‘objects’, but rather as non-confinable processes and relationships.

17 pages, 1603 KiB  
Perspective
A Perspective on Quality Evaluation for AI-Generated Videos
by Zhichao Zhang, Wei Sun and Guangtao Zhai
Sensors 2025, 25(15), 4668; https://doi.org/10.3390/s25154668 - 28 Jul 2025
Abstract
Recent breakthroughs in AI-generated content (AIGC) have transformed video creation, empowering systems to translate text, images, or audio into visually compelling stories. Yet reliable evaluation of these machine-crafted videos remains elusive because quality is governed not only by spatial fidelity within individual frames but also by temporal coherence across frames and precise semantic alignment with the intended message. The foundational role of sensor technologies is critical, as they determine the physical plausibility of AIGC outputs. In this perspective, we argue that multimodal large language models (MLLMs) are poised to become the cornerstone of next-generation video quality assessment (VQA). By jointly encoding cues from multiple modalities such as vision, language, sound, and even depth, the MLLM can leverage its powerful language understanding capabilities to assess the quality of scene composition, motion dynamics, and narrative consistency, overcoming the fragmentation of hand-engineered metrics and the poor generalization ability of CNN-based methods. Furthermore, we provide a comprehensive analysis of current methodologies for assessing AIGC video quality, including the evolution of generation models, dataset design, quality dimensions, and evaluation frameworks. We argue that advances in sensor fusion enable MLLMs to combine low-level physical constraints with high-level semantic interpretations, further enhancing the accuracy of visual quality assessment.
(This article belongs to the Special Issue Perspectives in Intelligent Sensors and Sensing Systems)

17 pages, 838 KiB  
Article
High-Fidelity Operations on Silicon Donor Qubits Using Dynamical Decoupling Gates
by Jing Cheng, Shihang Zhang, Banghong Guo, Huanwen Xie and Peihao Huang
Entropy 2025, 27(8), 805; https://doi.org/10.3390/e27080805 - 28 Jul 2025
Abstract
Dynamical decoupling (DD) can suppress decoherence caused by environmental noise, but in hybrid systems it can also hinder coherent manipulation between qubits. We realized a universal high-fidelity quantum gate set and the preparation of Bell states using dynamical decoupling gates (DD gates) in a silicon-based phosphorus-doped (Si:P) system, effectively resolving the contradiction between decoherence protection and qubit manipulation. The simulation results show that the fidelities of the universal quantum gate set are all above 99%, and the fidelity of Bell state preparation is over 96%. This work achieves compatibility between coherence protection and high-fidelity manipulation of quantum states, providing reliable theoretical support for high-fidelity quantum computing.
(This article belongs to the Section Quantum Information)

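The tension the abstract resolves is easiest to see in a toy Hahn echo, the elementary DD building block: a refocusing pulse cancels quasi-static dephasing that would otherwise destroy coherence. The numpy sketch below illustrates only this principle, not the Si:P DD-gate construction.

```python
import numpy as np

rng = np.random.default_rng(3)
n_runs, T = 2000, 1.0
detunings = rng.normal(scale=5.0, size=n_runs)  # quasi-static frequency noise (rad/s)

# Free induction decay: phase delta*T accumulates uninterrupted, so averaging
# over noise realizations washes the coherence out.
fid = np.exp(1j * detunings * T).mean()

# Hahn echo: a pi-pulse at T/2 inverts the sign of subsequent phase
# accumulation, so quasi-static noise cancels exactly: delta*T/2 - delta*T/2 = 0.
echo = np.exp(1j * (detunings * T / 2 - detunings * T / 2)).mean()

print(f"coherence without DD: {abs(fid):.3f}")   # ~0 for strong noise
print(f"coherence with echo:  {abs(echo):.3f}")  # 1.0 for quasi-static noise
```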
19 pages, 2871 KiB  
Article
Strategic Information Patterns in Advertising: A Computational Analysis of Industry-Specific Message Strategies Using the FCB Grid Framework
by Seung Chul Yoo
Information 2025, 16(8), 642; https://doi.org/10.3390/info16080642 - 28 Jul 2025
Abstract
This study presents a computational analysis of industry-specific advertising message strategies through the theoretical lens of the FCB (Foote, Cone & Belding) grid framework. Leveraging the AiSAC (AI Analysis System for Ad Creation) system developed by the Korea Broadcast Advertising Corporation (KOBACO), we analyzed 27,000 Korean advertisements across five major industries using advanced machine learning techniques. Through Latent Dirichlet Allocation topic modeling with a coherence score of 0.78, we identified five distinct message strategies: emotional appeal, product features, visual techniques, setting and objects, and entertainment and promotion. Our computational analysis revealed that each industry exhibits a unique “message strategy fingerprint” that significantly discriminates between categories, with discriminant analysis achieving 62.7% classification accuracy. Time-series analysis using recurrent neural networks demonstrated a significant evolution in strategy preferences, with emotional appeal increasing by 44.3% over the study period (2015–2024). By mapping these empirical findings onto the FCB grid, the present study validated that industry positioning within the grid’s quadrants aligns with theoretical expectations: high-involvement/think (IT and Telecom), high-involvement/feel (Public Institutions), low-involvement/think (Food and Household Goods), and low-involvement/feel (Services). This study contributes to media science by demonstrating how computational methods can empirically validate the established theoretical frameworks in advertising, providing a data-driven approach to understanding message strategy patterns across industries.
(This article belongs to the Special Issue AI Tools for Business and Economics)

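Topic coherence scores like the 0.78 reported above are a standard gensim computation. The sketch below fits a tiny LDA model on toy ad-style documents and scores it with the c_v measure; the corpus, topic count, and choice of c_v are illustrative assumptions, not details from the study.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from gensim.models.coherencemodel import CoherenceModel

# Toy stand-in for tokenized ad transcripts
docs = [
    "emotional appeal warm family story".split(),
    "product feature speed battery price".split(),
    "visual technique color motion camera".split(),
    "emotional family warm appeal music".split(),
    "product price battery feature warranty".split(),
]

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

lda = LdaModel(corpus, num_topics=3, id2word=dictionary, random_state=0, passes=10)
cm = CoherenceModel(model=lda, texts=docs, dictionary=dictionary, coherence="c_v")
print("c_v coherence:", cm.get_coherence())
```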
17 pages, 1327 KiB  
Article
MA-HRL: Multi-Agent Hierarchical Reinforcement Learning for Medical Diagnostic Dialogue Systems
by Xingchuang Liao, Yuchen Qin, Zhimin Fan, Xiaoming Yu, Jingbo Yang, Rongye Shi and Wenjun Wu
Electronics 2025, 14(15), 3001; https://doi.org/10.3390/electronics14153001 - 28 Jul 2025
Abstract
Task-oriented medical dialogue systems face two fundamental challenges: the explosion of state-action space caused by numerous diseases and symptoms and the sparsity of informative signals during interactive diagnosis. These issues significantly hinder the accuracy and efficiency of automated clinical reasoning. To address these problems, we propose MA-HRL, a multi-agent hierarchical reinforcement learning framework that decomposes the diagnostic task into specialized agents. A high-level controller coordinates symptom inquiry via multiple worker agents, each targeting a specific disease group, while a two-tier disease classifier refines diagnostic decisions through hierarchical probability reasoning. To combat sparse rewards, we design an information entropy-based reward function that encourages agents to acquire maximally informative symptoms. Additionally, medical knowledge graphs are integrated to guide decision-making and improve dialogue coherence. Experiments on the SymCat-derived SD dataset demonstrate that MA-HRL achieves substantial improvements over state-of-the-art baselines, including +7.2% diagnosis accuracy, +0.91% symptom hit rate, and +15.94% symptom recognition rate. Ablation studies further verify the effectiveness of each module. This work highlights the potential of hierarchical, knowledge-aware multi-agent systems for interpretable and scalable medical diagnosis.
(This article belongs to the Special Issue Advanced Techniques for Multi-Agent Systems)

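An information entropy-based reward has a compact form: score a symptom query by how much it shrinks the Shannon entropy of the disease posterior. The sketch below shows that computation on a toy distribution; the posterior update itself is stubbed, since in MA-HRL it depends on the learned diagnostic model.

```python
import numpy as np

def entropy(p: np.ndarray) -> float:
    """Shannon entropy H(p) = -sum p log p, skipping zero-probability entries."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def information_reward(prior: np.ndarray, posterior: np.ndarray) -> float:
    """Reward = reduction in entropy of the disease distribution after a query."""
    return entropy(prior) - entropy(posterior)

prior = np.array([0.25, 0.25, 0.25, 0.25])       # four candidate diseases
posterior = np.array([0.70, 0.10, 0.10, 0.10])   # after an informative symptom query
print("reward:", information_reward(prior, posterior))  # positive: entropy dropped
```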
22 pages, 5613 KiB  
Article
Generative Design-Driven Optimization for Effective Concrete Structural Systems
by Hossam Wefki, Mona Salah, Emad Elbeltagi and Majed Alinizzi
Buildings 2025, 15(15), 2646; https://doi.org/10.3390/buildings15152646 - 27 Jul 2025
Abstract
The process of designing reinforced concrete (RC) buildings has traditionally relied on manually evaluating a limited number of layout alternatives—a time-intensive process that may not always yield the most functionally efficient solution. This research introduces a parametric algorithmic model for the automated optimization of RC buildings with solid slab systems. The model automates and optimizes the layout process, yielding measurable improvements in spatial efficiency while maintaining compliance with structural performance criteria. Unlike prior models that address structural or architectural parameters separately, the proposed framework integrates both domains through a unified generative design approach within a BIM environment, enabling automated evaluation of structurally viable and architecturally coherent slab layouts. Developed within the parametric visual programming environment in Dynamo for Revit, the model employs a generative design (GD) engine to explore and refine various design alternatives while adhering to structural constraints. By leveraging a BIM-based framework, this method enhances efficiency, optimizes resource utilization, and systematically balances structural and architectural requirements. The model was validated through three case studies, demonstrating cost reductions between 2.7% and 17%, with material savings of up to 13.38% in concrete and 20.87% in reinforcement, achieved within computational times ranging from 120 to 930 s. Despite the current development being limited to vertical load scenarios and being most suitable for regular slab-based configurations, the results demonstrated the model’s effectiveness in optimizing grid dimensions and reducing material quantities and costs, and highlighted its ability to streamline early-stage design processes.
(This article belongs to the Special Issue Advancing Construction and Design Practices Using BIM)

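Stripped to its essentials, a generative pass over slab layouts is a constrained search: enumerate candidate grid spacings, reject those violating a span limit, and rank the rest by approximate material cost. The sketch below does this with made-up cost coefficients and rules of thumb; the actual model runs inside Dynamo for Revit with full structural checks.

```python
import itertools

# All coefficients below are made-up placeholders for illustration.
SPAN_LIMIT = 8.0                       # max allowable grid spacing, m
CONCRETE_COST, STEEL_COST, COLUMN_COST = 120.0, 1.1, 450.0
PLAN = (24.0, 18.0)                    # building footprint, m

def layout_cost(sx: float, sy: float) -> float:
    """Rough slab cost: thickness from a span/depth rule, plus column count."""
    thickness = max(sx, sy) / 28              # simple span-to-depth rule of thumb
    volume = PLAN[0] * PLAN[1] * thickness    # concrete volume, m^3
    steel_kg = volume * 90                    # assumed reinforcement ratio, kg/m^3
    n_columns = (PLAN[0] / sx + 1) * (PLAN[1] / sy + 1)
    return volume * CONCRETE_COST + steel_kg * STEEL_COST + n_columns * COLUMN_COST

# Generative search: enumerate grid candidates, filter by the span constraint,
# and pick the cheapest layout.
candidates = [g for g in itertools.product([4.0, 5.0, 6.0, 7.0, 8.0, 9.0], repeat=2)
              if max(g) <= SPAN_LIMIT]

best = min(candidates, key=lambda g: layout_cost(*g))
print("best grid spacing:", best, "approx. cost:", round(layout_cost(*best)))
```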